TITLE:
STL vector vs map erase
QUESTION:
In the STL almost all containers have an erase function. The question I have is: in a vector, the erase function returns an iterator pointing to the next element in the vector. The map container does not do this; instead it returns void. Does anyone know why there is this inconsistency?
ANSWER:
See http://www.sgi.com/tech/stl/Map.html. Map has the important property that inserting a new element into a map does not invalidate iterators that point to existing elements. Erasing an element from a map also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased. The reason for returning an iterator from erase is so that you can iterate over the container, erasing elements as you go. If erasing an item doesn't invalidate existing iterators, there is no need for this. | [
"c++",
"stl"
]
SCORES: 15 (question), 28 (answer) | VIEWS: 11,402 | ANSWERS: 5 | FAVORITES: 0 | ASKED: 2008-09-09 19:37 | ANSWERED: 2008-09-09 19:46
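A minimal C++ sketch of the erase-while-iterating idiom the answer above alludes to. In C++98/03, std::map::erase(iterator) returns void, so the trick is to post-increment the iterator before erasing: the copy handed to erase() is invalidated, but the loop iterator has already moved on to the next node.

```cpp
#include <cassert>
#include <map>
#include <string>

// Erase every entry whose value is even, while iterating over the map.
// Erasing invalidates only the iterator to the erased element, so advancing
// `it` with a post-increment *before* the erase keeps the loop valid.
void erase_even_values(std::map<std::string, int>& m) {
    for (std::map<std::string, int>::iterator it = m.begin(); it != m.end(); ) {
        if (it->second % 2 == 0)
            m.erase(it++);  // erase the copy; `it` already points to the next node
        else
            ++it;
    }
}
```

(Since C++11, map::erase does return the next iterator, so `it = m.erase(it);` works uniformly across the standard containers.)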
TITLE:
What is the most efficient way to paginate my site when querying with SQL?
QUESTION:
I am trying to paginate the results of an SQL query for use on a web page. The language and the database backend are PHP and SQLite. The code I'm using works something like this (page numbering starts at 0), for a URL like http://example.com/table?page=0:

page = request(page)
per = 10  // results per page
offset = page * per
// take one extra record so we know if a next link is needed
resultset = query(select columns from table where conditions limit offset, per + 1)
if (page > 0) show a previous link
if (count(resultset) > per) show a next link
unset(resultset[per])
display results

Are there more efficient ways to do pagination than this? One problem that I can see with my current method is that I must store all 10 (or however many) results in memory before I start displaying them. I do this because PDO does not guarantee that the row count will be available. Is it more efficient to issue a COUNT(*) query to learn how many rows exist, then stream the results to the browser? Is this one of those "it depends on the size of your table, and whether the count(*) query requires a full table scan in the database backend", "do some profiling yourself" kind of questions?
ANSWER:
I'd suggest just doing the count first. A count(primary key) is a very efficient query. | [
"php",
"sql",
"sqlite",
"pdo"
]
SCORES: 0 (question), 1 (answer) | VIEWS: 2,656 | ANSWERS: 4 | FAVORITES: 0 | ASKED: 2008-09-09 19:40 | ANSWERED: 2008-09-09 19:50
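The count-first approach the answer recommends, sketched against SQLite using Python's built-in sqlite3 module rather than PHP/PDO; the items table and its columns are invented for illustration:

```python
import sqlite3

def fetch_page(conn, page, per=10):
    """Count-first pagination: one cheap COUNT(*) to learn the total,
    then LIMIT/OFFSET to pull only the rows for this page."""
    total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
    offset = page * per
    rows = conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
        (per, offset),
    ).fetchall()
    has_prev = page > 0            # a previous page exists
    has_next = offset + per < total  # more rows remain past this page
    return rows, has_prev, has_next

# Demo with an in-memory database of 25 rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(25)])
rows, has_prev, has_next = fetch_page(conn, page=1, per=10)
```

Because the total is known up front, each row can be streamed to the browser as it is fetched; nothing beyond the current row needs to be buffered.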
TITLE:
Perform token replacements using VS post-build event command?
QUESTION:
I would like to "post-process" my app.config file and perform some token replacements after the project builds. Is there an easy way to do this using a VS post-build event command? (Yeah I know I could probably use NAnt or something, looking for something simple.)
ANSWER:
Take a look at XmlPreProcess. We use it for producing different config files for our testing and live deployment packages. We execute it from a NAnt script as part of a continuous build but, since it's a console app, I see no reason why you couldn't add a call in your project's post-build event instead. | [
".net",
"visual-studio-2008",
"visual-studio-2005"
]
SCORES: 1 (question), 1 (answer) | VIEWS: 488 | ANSWERS: 1 | FAVORITES: 0 | ASKED: 2008-09-09 19:45 | ANSWERED: 2008-09-09 19:51
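XmlPreProcess is the tool the answer names; as a rough alternative sketch (not XmlPreProcess itself), a post-build event could call a small script that substitutes $TOKEN$-style placeholders. The token syntax and sample values below are invented for illustration:

```python
import re

def replace_tokens(text, tokens):
    """Replace $NAME$ placeholders with values from the tokens dict.
    Unknown tokens are left untouched so they are easy to spot."""
    def sub(match):
        name = match.group(1)
        return tokens.get(name, match.group(0))  # keep unknown tokens as-is
    return re.sub(r"\$([A-Z_]+)\$", sub, text)

config = '<connectionString value="$DB_SERVER$;Initial Catalog=$DB_NAME$"/>'
result = replace_tokens(config, {"DB_SERVER": "Data Source=localhost",
                                 "DB_NAME": "AppDb"})
```

A post-build event could then invoke such a script with something like python replace_tokens.py "$(TargetDir)App.config" (the script name and argument handling are hypothetical).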
TITLE:
How do I dynamically create a Video object in AS2 and add it to a MovieClip?
QUESTION:
I need to dynamically create a Video object in ActionScript 2 and add it to a movie clip. In AS3 I just do this:

var videoViewComp:UIComponent; // created elsewhere
videoView = new Video();
videoView.width = 400;
videoView.height = 400;
this.videoViewComp.addChild(videoView);

Unfortunately, I can't figure out how to accomplish this in AS2. Video isn't a child of MovieClip, so attachMovie() doesn't seem to be getting me anything. I don't see any equivalent to AS3's UIComponent.addChild() method either. Is there any way to dynamically create a Video object in AS2 that actually shows up on the stage? I potentially need multiple videos at a time though. Is it possible to duplicate that video object? I think I have another solution working. It's not optimal, but it fits with some of the things I have to do for other components so it's not too out of place in the project. Once I get it figured out I'll post what I did here.
ANSWER:
Ok, I've got something working. First, I created a new Library symbol and called it "VideoWrapper". I then added a single Video object to that with an ID of "video". Now, any time I need to dynamically add a Video to the stage I can use MovieClip.attachMovie() to add a new copy of the Video object. To make things easier I wrote a VideoWrapper class that exposes basic UI element handling (setPosition(), setSize(), etc.). So when dealing with the Video in regular UI layout code I just use those methods so it looks just like all my other UI elements. When dealing with the video I just access the "video" member of the class. My actual implementation is a bit more complicated, but that's the basics of how I got things working. I have a test app that's playing 2 videos, one from the local camera and one streaming from FMS, and it's working great. | [
"flash",
"video",
"actionscript-2"
]
SCORES: 2 (question), 1 (answer) | VIEWS: 7,262 | ANSWERS: 5 | FAVORITES: 0 | ASKED: 2008-09-09 19:45 | ANSWERED: 2008-09-10 19:51
TITLE:
Is there a way to infer what image format a file is, without reading the entire file?
QUESTION:
Is there a good way to see what format an image is, without having to read the entire file into memory? Obviously this would vary from format to format (I'm particularly interested in TIFF files) but what sort of procedure would be useful to determine what kind of image format a file is without having to read through the entire file? BONUS: What if the image is a Base64-encoded string? Any reliable way to infer it before decoding it?
ANSWER:
Most image file formats have unique bytes at the start. The Unix file command looks at the start of the file to see what type of data it contains. See the Wikipedia article on magic numbers in files, and magicdb.org. | [
"image",
"tiff"
]
SCORES: 12 (question), 19 (answer) | VIEWS: 8,303 | ANSWERS: 4 | FAVORITES: 0 | ASKED: 2008-09-09 19:49 | ANSWERED: 2008-09-09 19:52
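A sketch of the magic-number approach in Python: only the first few bytes are read, never the whole file. The signatures below are the well-known leading bytes for these formats (TIFF has two, for little- and big-endian files):

```python
def sniff_image_format(data):
    """Identify an image format from its leading 'magic' bytes.
    `data` is the first dozen or so bytes of the file."""
    signatures = [
        (b"\x89PNG\r\n\x1a\n", "png"),
        (b"\xff\xd8\xff",      "jpeg"),
        (b"GIF87a",            "gif"),
        (b"GIF89a",            "gif"),
        (b"II*\x00",           "tiff"),  # little-endian ("Intel") TIFF
        (b"MM\x00*",           "tiff"),  # big-endian ("Motorola") TIFF
        (b"BM",                "bmp"),
    ]
    for magic, fmt in signatures:
        if data.startswith(magic):
            return fmt
    return None

# In practice, read just the header:
#   with open(path, "rb") as f:
#       fmt = sniff_image_format(f.read(16))
```

For the Base64 bonus: decoding only a short aligned prefix (e.g. base64.b64decode(s[:8])) yields the leading bytes of the underlying file, which can be checked against the same table without decoding the rest.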
TITLE:
Should I derive custom exceptions from Exception or ApplicationException in .NET?
QUESTION:
What is best practice when creating your exception classes in a .NET solution: to derive from System.Exception or from System.ApplicationException?
ANSWER:
According to Jeffrey Richter in the Framework Design Guidelines book: System.ApplicationException is a class that should not be part of the .NET Framework. It was intended to have some meaning in that you could potentially catch "all" the application exceptions, but the pattern was not followed and so it has no value. | [
".net",
"exception"
]
SCORES: 77 (question), 66 (answer) | VIEWS: 11,742 | ANSWERS: 5 | FAVORITES: 0 | ASKED: 2008-09-09 19:53 | ANSWERED: 2008-09-09 20:02
TITLE:
What determines the monitor my app runs on?
QUESTION:
I am using Windows, and I have two monitors. Some applications will always start on my primary monitor, no matter where they were when I closed them. Others will always start on the secondary monitor, no matter where they were when I closed them. Is there a registry setting buried somewhere, which I can manipulate to control which monitor applications launch into by default? @rp: I have Ultramon, and I agree that it is indispensable, to the point that Microsoft should buy it and incorporate it into their OS. But as you said, it doesn't let you control the default monitor a program launches into.
ANSWER:
Correctly written Windows apps that want to save their location from run to run will save the results of GetWindowPlacement() before shutting down, then use SetWindowPlacement() on startup to restore their position. Frequently, apps will store the results of GetWindowPlacement() in the registry as a REG_BINARY for easy use. The WINDOWPLACEMENT route has many advantages over other methods:

- Handles the case where the screen resolution changed since the last run: SetWindowPlacement() will automatically ensure that the window is not entirely offscreen.
- Saves the state (minimized/maximized) but also saves the restored (normal) size and position.
- Handles desktop metrics correctly, compensating for the taskbar position, etc. (i.e. uses "workspace coordinates" instead of "screen coordinates" -- techniques that rely on saving screen coordinates may suffer from the "walking windows" problem where a window will always appear a little lower each time if the user has a toolbar at the top of the screen).

Finally, programs that handle window restoration properly will take into account the nCmdShow parameter passed in from the shell. This parameter is set in the shortcut that launches the application (Normal, Minimized, Maximized):

if (nCmdShow != SW_SHOWNORMAL)
    placement.showCmd = nCmdShow; // allow shortcut to override

For non-Win32 applications, it's important to be sure that the method you're using to save/restore window position eventually uses the same underlying call, otherwise (like Java Swing's setBounds() / getBounds() problem) you'll end up writing a lot of extra code to re-implement functionality that's already there in the WINDOWPLACEMENT functions. | [
"multiple-monitors"
]
SCORES: 79 (question), 46 (answer) | VIEWS: 186,917 | ANSWERS: 11 | FAVORITES: 0 | ASKED: 2008-09-09 19:54 | ANSWERED: 2008-09-09 23:56
TITLE:
How do you make a build that includes only one of many pending changes?
QUESTION:
In my current environment, we have a "clean" build machine, which has an exact copy of all committed changes, nothing more, nothing less. And of course I have my own machine, with dozens of files in an "in-progress" state. Often I need to build my application with only one change in place. For example, I've finished task ABC, and I want to build an EXE with only that change. But of course I can't commit the change to the repository until it's tested. Branching seems like overkill for this. What do you do in your environment to isolate changes for test builds and releases? @Matt b: So while you wait for feedback on your change, what do you do? Are you always working on exactly one thing?
ANSWER:
So you are asking how to handle working on multiple "tasks" at once, right? Except branching. You can have multiple checkouts of the source on the local machine, suffixing the directory name with the name of the ticket you are working on. Just make sure to make changes in the right directory, depending on the task... Mixing multiple tasks in one working copy / commit can get very confusing, especially if somebody needs to review your work later. | [
"version-control",
"testing",
"build-process"
]
SCORES: 1 (question), 2 (answer) | VIEWS: 156 | ANSWERS: 4 | FAVORITES: 0 | ASKED: 2008-09-09 20:04 | ANSWERED: 2008-09-10 11:19
TITLE:
How do I add a div to DOM and pick it up later
QUESTION:
I think this is specific to IE 6.0 but... In JavaScript I add a div to the DOM. I assign an id attribute. When I later try to pick up the div by the id all I get is null. Any suggestions? Example:

var newDiv = document.createElement("DIV");
newDiv.setAttribute("ID", "obj_1000");
document.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("ID") + ":" + newDiv.id + ":" + document.getElementById("obj_1000"));

Alert prints "::null". Seems to work fine in Firefox 2.0+
ANSWER:
In addition to what the other answers suggest (that you need to actually insert the element into the DOM for it to be found via getElementById()), you also need to use a lower-case attribute name in order for IE6 to recognize it as the id:

var newDiv = document.createElement("DIV");
newDiv.setAttribute("id", "obj_1000");
document.body.appendChild(newDiv);
alert("Added:" + newDiv.getAttribute("id") + ":" + newDiv.id + ":" + document.getElementById("obj_1000"));

...responds as expected: Added:obj_1000:obj_1000:[object] According to the MSDN documentation for setAttribute(), up to IE8 there is an optional third parameter that controls whether or not it is case sensitive with regard to the attribute name. Guess what the default is... | [
"javascript",
"css",
"dhtml"
]
SCORES: 3 (question), 8 (answer) | VIEWS: 723 | ANSWERS: 5 | FAVORITES: 0 | ASKED: 2008-09-09 20:07 | ANSWERED: 2008-09-09 20:17
TITLE:
Changing the Directory Structure in Subversion
QUESTION:
How do I create a branch in Subversion that is 'deeper' than just the 'branches' directory? I have the standard trunk, tags and branches structure and I want to create a branch that is several directories deeper than the 'branches' directory. Using the standard svn move method, it gives me a folder not found error. I also tried copying it into the branches folder, checking it out, and then 'svn move'-ing it into the tree structure I wanted, but got a 'working copy admin area is missing' error. What do I need to do to create this? For the sake of illustration, let us suppose I want to create a branch to go directly into 'branches/version_1/project/subproject' (which does not exist yet)?
ANSWER:
svn copy --parents http://url/to/subproject http://url/to/repository/branches/version_1/project/subproject

That should create the directory you want to put the subproject in (--parents means "create the intermediate directories for me"). | [
"svn",
"branch",
"trunk"
]
SCORES: 6 (question), 14 (answer) | VIEWS: 2,731 | ANSWERS: 5 | FAVORITES: 0 | ASKED: 2008-09-09 20:11 | ANSWERED: 2008-09-09 20:13
52,797 | 283,917 | How do I get the path of the assembly the code is in? | Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code. Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else. Edit: People seem to be misunderstanding what I'm asking. My test library is located in say C:\projects\myapplication\daotests\bin\Debug\daotests.dll and I would like to get this path: C:\projects\myapplication\daotests\bin\Debug\ The three suggestions so far fail me when I run from the MbUnit Gui: Environment.CurrentDirectory gives c:\Program Files\MbUnit System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location gives C:\Documents and Settings\george\Local Settings\Temp\....\DaoTests.dll System.Reflection.Assembly.GetExecutingAssembly().Location gives the same as the previous. | Note: Assembly.CodeBase is deprecated in.NET Core/.NET 5+: https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.codebase?view=net-5.0 Original answer: I've defined the following property as we use this often in unit testing. public static string AssemblyDirectory { get { string codeBase = Assembly.GetExecutingAssembly().CodeBase; UriBuilder uri = new UriBuilder(codeBase); string path = Uri.UnescapeDataString(uri.Path); return Path.GetDirectoryName(path); } } The Assembly.Location property sometimes gives you some funny results when using NUnit (where assemblies run from a temporary folder), so I prefer to use CodeBase which gives you the path in URI format, then UriBuild.UnescapeDataString removes the File:// at the beginning, and GetDirectoryName changes it to the normal windows format. | How do I get the path of the assembly the code is in? 
Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code. Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else. Edit: People seem to be misunderstanding what I'm asking. My test library is located in say C:\projects\myapplication\daotests\bin\Debug\daotests.dll and I would like to get this path: C:\projects\myapplication\daotests\bin\Debug\ The three suggestions so far fail me when I run from the MbUnit Gui: Environment.CurrentDirectory gives c:\Program Files\MbUnit System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location gives C:\Documents and Settings\george\Local Settings\Temp\....\DaoTests.dll System.Reflection.Assembly.GetExecutingAssembly().Location gives the same as the previous. | TITLE:
How do I get the path of the assembly the code is in?
QUESTION:
Is there a way to get the path for the assembly in which the current code resides? I do not want the path of the calling assembly, just the one containing the code. Basically my unit test needs to read some xml test files which are located relative to the dll. I want the path to always resolve correctly regardless of whether the testing dll is run from TestDriven.NET, the MbUnit GUI or something else. Edit: People seem to be misunderstanding what I'm asking. My test library is located in say C:\projects\myapplication\daotests\bin\Debug\daotests.dll and I would like to get this path: C:\projects\myapplication\daotests\bin\Debug\ The three suggestions so far fail me when I run from the MbUnit Gui: Environment.CurrentDirectory gives c:\Program Files\MbUnit System.Reflection.Assembly.GetAssembly(typeof(DaoTests)).Location gives C:\Documents and Settings\george\Local Settings\Temp\....\DaoTests.dll System.Reflection.Assembly.GetExecutingAssembly().Location gives the same as the previous.
ANSWER:
Note: Assembly.CodeBase is deprecated in .NET Core / .NET 5+: https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly.codebase?view=net-5.0 Original answer: I've defined the following property as we use this often in unit testing.

public static string AssemblyDirectory
{
    get
    {
        string codeBase = Assembly.GetExecutingAssembly().CodeBase;
        UriBuilder uri = new UriBuilder(codeBase);
        string path = Uri.UnescapeDataString(uri.Path);
        return Path.GetDirectoryName(path);
    }
}

The Assembly.Location property sometimes gives you some funny results when using NUnit (where assemblies run from a temporary folder), so I prefer to use CodeBase, which gives you the path in URI format; then Uri.UnescapeDataString removes the file:// at the beginning, and GetDirectoryName changes it to the normal Windows format. | [
"c#",
".net",
"reflection",
"mbunit"
] | 939 | 1,176 | 797,738 | 31 | 0 | 2008-09-09T20:12:28.340000 | 2008-11-12T13:24:56.133000 |
52,806 | 52,848 | construct a complex SQL query (or queries) | As part of a larger web-app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments. An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments. Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus ( Model->find(), etc.), but I'm not sanguine about this. Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work... | Heh, I was just about to come back with essentially the same answer (using Cake's Model::find): $this->loadModel('Comment');
$this->Comment->find( 'all', array( 'fields' => array('COUNT(Comment.id) AS popularCount'), 'conditions' => array( 'Comment.created >' => strtotime('-1 month') ), 'group' => 'Comment.blog_post_id', 'order' => 'popularCount DESC',
'contain' => array( 'Entry' => array( 'fields' => array( 'Entry.title' ) ) ) )); It's not perfect, but it works and can be improved on. I made an additional improvement, using the Containable behaviour to extract the Entry data instead of the Comment data. | construct a complex SQL query (or queries) As part of a larger web-app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments. An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments. Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus ( Model->find(), etc.), but I'm not sanguine about this. Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work... | TITLE:
construct a complex SQL query (or queries)
QUESTION:
As part of a larger web-app (using CakePHP), I'm putting together a simple blog system. The relationships are exceedingly simple: each User has a Blog, which has many Entries, which have many Comments. An element I'd like to incorporate is a list of "Popular Entries." Popular Entries have been defined as those with the most Comments in the last month, and ultimately they need to be ordered by the number of recent Comments. Ideally, I'd like the solution to stay within Cake's Model data-retrieval apparatus ( Model->find(), etc.), but I'm not sanguine about this. Anyone have a clever/elegant solution? I'm steeling myself for some wild SQL hacking to make this work...
ANSWER:
Heh, I was just about to come back with essentially the same answer (using Cake's Model::find):

$this->loadModel('Comment');
$this->Comment->find('all', array(
    'fields' => array('COUNT(Comment.id) AS popularCount'),
    'conditions' => array(
        'Comment.created >' => strtotime('-1 month')
    ),
    'group' => 'Comment.blog_post_id',
    'order' => 'popularCount DESC',
    'contain' => array(
        'Entry' => array(
            'fields' => array('Entry.title')
        )
    )
));

It's not perfect, but it works and can be improved on. I made an additional improvement, using the Containable behaviour to extract the Entry data instead of the Comment data. | [
"php",
"cakephp"
] | 0 | 4 | 495 | 4 | 0 | 2008-09-09T20:18:07.563000 | 2008-09-09T20:35:30.157000 |
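The answer above leans on Cake to generate the aggregate SQL; the underlying "most comments in the last month, ordered by count" query can be sketched directly. A minimal, self-contained illustration using Python's sqlite3 (the entries/comments table and column names are hypothetical stand-ins, not CakePHP's actual schema):

```python
import sqlite3
import time

# Hypothetical minimal schema mirroring the Entry -> Comment relationship
# from the question (names are assumptions for the demo).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY,
                           entry_id INTEGER,
                           created INTEGER);
""")

now = int(time.time())
month_ago = now - 30 * 24 * 3600

conn.executemany("INSERT INTO entries VALUES (?, ?)",
                 [(1, 'First post'), (2, 'Second post')])
# Entry 2 gets two recent comments; entry 1 gets one recent and one old.
conn.executemany("INSERT INTO comments (entry_id, created) VALUES (?, ?)", [
    (1, now - 100), (1, month_ago - 100), (2, now - 50), (2, now - 60),
])

# "Popular entries": most comments in the last month, ordered by that count.
rows = conn.execute("""
    SELECT e.title, COUNT(c.id) AS popular_count
    FROM comments c JOIN entries e ON e.id = c.entry_id
    WHERE c.created > ?
    GROUP BY c.entry_id
    ORDER BY popular_count DESC
""", (month_ago,)).fetchall()

print(rows)  # [('Second post', 2), ('First post', 1)]
```

The GROUP BY / ORDER BY shape here is exactly what the Cake 'group' and 'order' keys in the answer produce.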
52,808 | 52,827 | .NET Production Debugging | I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block). On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines. I don't have physical access to these client machines, what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives? Thank you. Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown. | One option is to generate a (mini-)dump file as close to the point where the exception is thrown as possible. This article talks about how to do this from managed code. You could then load the dump file into Visual Studio or WinDbg and examine it with the aid of SOS | .NET Production Debugging I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block). On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines. I don't have physical access to these client machines, what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives? Thank you. Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown. | TITLE:
.NET Production Debugging
QUESTION:
I've had a Windows app in production for a while now, and have it set up to send us error reports when it throws exceptions. Most of these are fairly descriptive and help me find the problem very quickly (I use the MS Application Exception Block). On a few occasions I have reports that are issues that I can't reproduce, and seem to only happen on a few client machines. I don't have physical access to these client machines, what are some strategies I can use for debugging? Would it be better to build some tracing into the code, or are there some other alternatives? Thank you. Edit: I should have been more clear: The error reports that I get do have the stack trace, but since it's production code, it doesn't indicate the exact line that caused the exception, just the method in which it was thrown.
ANSWER:
One option is to generate a (mini-)dump file as close to the point where the exception is thrown as possible. This article talks about how to do this from managed code. You could then load the dump file into Visual Studio or WinDbg and examine it with the aid of SOS | [
".net",
"debugging"
] | 7 | 2 | 1,083 | 5 | 0 | 2008-09-09T20:19:37.090000 | 2008-09-09T20:28:27.617000 |
52,822 | 52,828 | How to import a DBF file in SQL Server | How can you import a foxpro DBF file in SQL Server? | Use a linked server or use openrowset, example SELECT * into SomeTable FROM OPENROWSET('MSDASQL', 'Driver=Microsoft Visual FoxPro Driver; SourceDB=\\SomeServer\SomePath\; SourceType=DBF', 'SELECT * FROM SomeDBF') | How to import a DBF file in SQL Server How can you import a foxpro DBF file in SQL Server? | TITLE:
How to import a DBF file in SQL Server
QUESTION:
How can you import a foxpro DBF file in SQL Server?
ANSWER:
Use a linked server or use OPENROWSET. Example:

SELECT *
INTO SomeTable
FROM OPENROWSET('MSDASQL',
    'Driver=Microsoft Visual FoxPro Driver; SourceDB=\\SomeServer\SomePath\; SourceType=DBF',
    'SELECT * FROM SomeDBF')

| [
"sql-server",
"foxpro",
"dbf"
] | 18 | 19 | 104,985 | 5 | 0 | 2008-09-09T20:26:12.550000 | 2008-09-09T20:28:35.680000 |
52,824 | 52,909 | Cross-Branch merging in TFS? | Is it possible to merge to a branch that is not a direct parent or child in TFS? I suspect that the answer is no as this is what I've experienced while using it. However, it seems that at certain times it would be really useful when there are different features being worked on that may have different approval cycles (ie. feature one might be approved before feature two). This becomes exceedingly difficult when we have production branches where we have to merge some feature into a previous branch so we can release before the next full version. Our current branching strategy is to develop in the trunk (or mainline as we call it), and create a branch to stabilize and release to production. This branch can then be used to create hotfixes and other things while mainline can diverge for upcoming features. What techniques can be used otherwise to mitigate a scenario such as the one(s) described above? | I agree with Harpreet that you may want to revisit how you you have setup you branching structure. However you if you really want to perform this type of merge you can through something called a baseless merge. It runs from the tfs command prompt, Tf merge /baseless < > < > /recursive Additional info about baseless merges can be found here Also I found this document to be invaluable when constructing our tfs branching structure Microsoft Team Foundation Server Branching Guidance | Cross-Branch merging in TFS? Is it possible to merge to a branch that is not a direct parent or child in TFS? I suspect that the answer is no as this is what I've experienced while using it. However, it seems that at certain times it would be really useful when there are different features being worked on that may have different approval cycles (ie. feature one might be approved before feature two). 
This becomes exceedingly difficult when we have production branches where we have to merge some feature into a previous branch so we can release before the next full version. Our current branching strategy is to develop in the trunk (or mainline as we call it), and create a branch to stabilize and release to production. This branch can then be used to create hotfixes and other things while mainline can diverge for upcoming features. What techniques can be used otherwise to mitigate a scenario such as the one(s) described above? | TITLE:
Cross-Branch merging in TFS?
QUESTION:
Is it possible to merge to a branch that is not a direct parent or child in TFS? I suspect that the answer is no as this is what I've experienced while using it. However, it seems that at certain times it would be really useful when there are different features being worked on that may have different approval cycles (ie. feature one might be approved before feature two). This becomes exceedingly difficult when we have production branches where we have to merge some feature into a previous branch so we can release before the next full version. Our current branching strategy is to develop in the trunk (or mainline as we call it), and create a branch to stabilize and release to production. This branch can then be used to create hotfixes and other things while mainline can diverge for upcoming features. What techniques can be used otherwise to mitigate a scenario such as the one(s) described above?
ANSWER:
I agree with Harpreet that you may want to revisit how you have set up your branching structure. However, if you really want to perform this type of merge, you can, through something called a baseless merge. It runs from the tfs command prompt:

tf merge /baseless < > < > /recursive

Additional info about baseless merges can be found here. Also, I found this document to be invaluable when constructing our tfs branching structure: Microsoft Team Foundation Server Branching Guidance | [
"version-control",
"tfs",
"merge",
"branch"
] | 19 | 19 | 23,247 | 7 | 0 | 2008-09-09T20:26:47.667000 | 2008-09-09T21:06:57.570000 |
52,830 | 52,849 | Inform potential clients about security vulnerabilities? | We have a lot of open discussions with potential clients, and they ask frequently about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise on staff they have now or have previously used is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder, would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them." Non-technical folks get scared pretty easily by this stuff, so I'm wondering is this a show of good faith, or a poor business practice? | I would say that surprising people by suddenly penetration-testing their software may bother people if simply for the fact that they didn't know ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time that you're going to do this. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most securely minded make mistakes: the Debian PRNG vulnerability is a good example of this. | Inform potential clients about security vulnerabilities? We have a lot of open discussions with potential clients, and they ask frequently about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise on staff they have now or have previously used is to check for security vulnerabilities like XSS and SQL injection. 
I have yet to find a potential client who is vulnerable, but I started to wonder, would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them." Non-technical folks get scared pretty easily by this stuff, so I'm wondering is this a show of good faith, or a poor business practice? | TITLE:
Inform potential clients about security vulnerabilities?
QUESTION:
We have a lot of open discussions with potential clients, and they ask frequently about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise on staff they have now or have previously used is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder, would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them." Non-technical folks get scared pretty easily by this stuff, so I'm wondering is this a show of good faith, or a poor business practice?
ANSWER:
I would say that surprising people by suddenly penetration-testing their software may bother them, simply for the fact that they didn't know ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time that you're going to do it. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most securely minded make mistakes: the Debian PRNG vulnerability is a good example of this. | [
"xss",
"sql-injection"
] | 1 | 2 | 277 | 4 | 0 | 2008-09-09T20:29:15.133000 | 2008-09-09T20:36:32.823000 |
52,842 | 52,867 | Sorting Directory.GetFiles() | System.IO.Directory.GetFiles() returns a string[]. What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture effect it? Can you change it to something like creation date? Update: MSDN points out that the sort order is not guaranteed for.Net 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the.GetFiles() method already does this. Seems very inefficient. | If you're interested in properties of the files such as CreationTime, then it would make more sense to use System.IO.DirectoryInfo.GetFileSystemInfos(). You can then sort these using one of the extension methods in System.Linq, e.g.: DirectoryInfo di = new DirectoryInfo("C:\\"); FileSystemInfo[] files = di.GetFileSystemInfos(); var orderedFiles = files.OrderBy(f => f.CreationTime); Edit - sorry, I didn't notice the.NET2.0 tag so ignore the LINQ sorting. The suggestion to use System.IO.DirectoryInfo.GetFileSystemInfos() still holds though. | Sorting Directory.GetFiles() System.IO.Directory.GetFiles() returns a string[]. What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture effect it? Can you change it to something like creation date? Update: MSDN points out that the sort order is not guaranteed for.Net 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). 
I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the .GetFiles() method already does this. Seems very inefficient. | TITLE:
Sorting Directory.GetFiles()
QUESTION:
System.IO.Directory.GetFiles() returns a string[]. What is the default sort order for the returned values? I'm assuming by name, but if so how much does the current culture affect it? Can you change it to something like creation date? Update: MSDN points out that the sort order is not guaranteed for .NET 3.5, but the 2.0 version of the page doesn't say anything at all and neither page will help you sort by things like creation or modification time. That information is lost once you have the array (it contains only strings). I could build a comparer that would check for each file it gets, but that means accessing the file system repeatedly when presumably the .GetFiles() method already does this. Seems very inefficient.
ANSWER:
If you're interested in properties of the files such as CreationTime, then it would make more sense to use System.IO.DirectoryInfo.GetFileSystemInfos(). You can then sort these using one of the extension methods in System.Linq, e.g.:

DirectoryInfo di = new DirectoryInfo("C:\\");
FileSystemInfo[] files = di.GetFileSystemInfos();
var orderedFiles = files.OrderBy(f => f.CreationTime);

Edit - sorry, I didn't notice the .NET 2.0 tag, so ignore the LINQ sorting. The suggestion to use System.IO.DirectoryInfo.GetFileSystemInfos() still holds though. | [
".net",
".net-2.0"
] | 69 | 114 | 163,954 | 13 | 0 | 2008-09-09T20:33:41.080000 | 2008-09-09T20:47:03.873000 |
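The answer's approach — fetch each entry's metadata once, then sort on the cached values instead of hitting the filesystem repeatedly — is language-neutral. A runnable sketch of the same idea in Python (file names and timestamps below are made up for the demo, set explicitly so the ordering is deterministic):

```python
import os
import tempfile

# Same pattern as DirectoryInfo.GetFileSystemInfos() + OrderBy(f => f.CreationTime):
# one directory read, then an in-memory sort on the stat data.
with tempfile.TemporaryDirectory() as d:
    for i, name in enumerate(("b.txt", "a.txt", "c.txt")):
        path = os.path.join(d, name)
        open(path, "w").close()
        os.utime(path, (1000 + i, 1000 + i))  # force distinct, known mtimes

    entries = list(os.scandir(d))                   # one directory read
    entries.sort(key=lambda e: e.stat().st_mtime)   # sort on cached metadata
    ordered = [e.name for e in entries]

print(ordered)  # ['b.txt', 'a.txt', 'c.txt'] — timestamp order, not name order
```

This avoids exactly the inefficiency the questioner worries about: a comparer that re-stats files on every comparison.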
52,844 | 64,273 | Hide directories in wxGenericDirCtrl | I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories, I'd especially like to hide siblings of parent nodes. For example if my directory structure looks like this: +-a | +-b | | | +-whatever | +-c | | | +-d | | | +-e | | | +-f | +-g | +-whatever If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl: +-a | +-c | +-[d] | +-e | +-f I'm currently working with a directory structure that has lots and lots directories that are irrelevant to most users, so it would be nice to be able to clean it up. Edit: If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and using it on Windows and Mac using the native backends. | Listing/walking directories in Python is very easy, so I would recommend trying to "roll your own" using one of the simple tree controls (such as TreeCtrl or CustomTreeCtrl). It should really be quite easy to call the directory listing code when some directory is expanded and return the result. | Hide directories in wxGenericDirCtrl I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories, I'd especially like to hide siblings of parent nodes. For example if my directory structure looks like this: +-a | +-b | | | +-whatever | +-c | | | +-d | | | +-e | | | +-f | +-g | +-whatever If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl: +-a | +-c | +-[d] | +-e | +-f I'm currently working with a directory structure that has lots and lots directories that are irrelevant to most users, so it would be nice to be able to clean it up. 
Edit: If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and using it on Windows and Mac using the native backends. | TITLE:
Hide directories in wxGenericDirCtrl
QUESTION:
I am using a wxGenericDirCtrl, and I would like to know if there is a way to hide directories, I'd especially like to hide siblings of parent nodes. For example if my directory structure looks like this:

+-a
  +-b
  |  +-whatever
  +-c
  |  +-d
  |  +-e
  |  +-f
  +-g
     +-whatever

If my currently selected directory is /a/c/d is there any way to hide b and g, so that the tree looks like this in my ctrl:

+-a
  +-c
     +-[d]
     +-e
     +-f

I'm currently working with a directory structure that has lots and lots of directories that are irrelevant to most users, so it would be nice to be able to clean it up. Edit: If it makes a difference, I am using wxPython, and so far, I have only tested my code on linux using the GTK backend, but I do plan to make it multi-platform and use it on Windows and Mac using the native backends.
ANSWER:
Listing/walking directories in Python is very easy, so I would recommend trying to "roll your own" using one of the simple tree controls (such as TreeCtrl or CustomTreeCtrl). It should really be quite easy to call the directory listing code when some directory is expanded and return the result. | [
"wxpython",
"wxwidgets"
] | 1 | 1 | 703 | 2 | 0 | 2008-09-09T20:34:02.627000 | 2008-09-15T16:08:01.200000 |
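Since the answer says listing/walking directories in Python is easy, here is a minimal sketch of the per-node listing function a hand-rolled TreeCtrl could call when a node is expanded. The hidden-name filter is an illustrative assumption about how one might hide siblings; wxGenericDirCtrl itself exposes no such hook:

```python
import os
import tempfile

def list_subdirs(path, hidden=()):
    """Return the subdirectory names of `path`, skipping any listed in
    `hidden` -- the kind of callback a custom tree control could invoke
    on node expansion (the filter itself is a hypothetical design)."""
    return sorted(
        entry for entry in os.listdir(path)
        if os.path.isdir(os.path.join(path, entry)) and entry not in hidden
    )

# Demo on a throwaway tree mirroring the question: a/{b, c/{d,e,f}, g},
# hiding b and g once /a/c/d is the selection.
with tempfile.TemporaryDirectory() as root:
    for sub in ("a/b", "a/c/d", "a/c/e", "a/c/f", "a/g"):
        os.makedirs(os.path.join(root, *sub.split("/")))
    visible = list_subdirs(os.path.join(root, "a"), hidden={"b", "g"})

print(visible)  # ['c']
```

Calling this lazily on each expanded node keeps the control responsive even for large trees, since only the visible level is ever listed.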
52,873 | 52,892 | CSS overflow table row positioning | I have table inside a div tab. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there. How can I, with JavaScript cause the table to programatically position to a given row (i.e., programmatically scroll the table up or down by row)? | Where superHappyFunDiv is the ID of the container DIV and rows is a 0-based row index: function scrollTo(row) { var container = document.getElementById("superHappyFunDiv"); var rows = container.getElementsByTagName("tr");
row = Math.min(Math.max(row, 0), rows.length-1); container.scrollTop = rows[row].offsetTop; } Will attempt to scroll the requested row to the top of the container. Tested in IE6 and FF3. | CSS overflow table row positioning I have table inside a div tab. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there. How can I, with JavaScript cause the table to programatically position to a given row (i.e., programmatically scroll the table up or down by row)? | TITLE:
CSS overflow table row positioning
QUESTION:
I have table inside a div tab. The table has 40 rows in it and the div's height is set to show 10 rows of that table. CSS's overflow:auto lets me scroll through the 40 rows. All is well there. How can I, with JavaScript cause the table to programatically position to a given row (i.e., programmatically scroll the table up or down by row)?
ANSWER:
Where superHappyFunDiv is the ID of the container DIV and row is a 0-based row index:

function scrollTo(row) {
    var container = document.getElementById("superHappyFunDiv");
    var rows = container.getElementsByTagName("tr");
    row = Math.min(Math.max(row, 0), rows.length - 1);
    container.scrollTop = rows[row].offsetTop;
}

Will attempt to scroll the requested row to the top of the container. Tested in IE6 and FF3. | [
"javascript",
"html",
"css",
"css-tables"
] | 8 | 16 | 3,220 | 1 | 0 | 2008-09-09T20:50:03.680000 | 2008-09-09T20:59:54.437000 |
52,874 | 52,966 | How do you keep the machine awake? | I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active. Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake. Any ideas? | I use this code to keep my workstation from locking. It's currently only set to move the mouse once every minute, you could easily adjust it though. It's a hack, not an elegant solution. import java.awt.*; import java.util.*; public class Hal{
public static void main(String[] args) throws Exception{ Robot hal = new Robot(); Random random = new Random(); while(true){ hal.delay(1000 * 60); int x = random.nextInt() % 640; int y = random.nextInt() % 480; hal.mouseMove(x,y); } } } | How do you keep the machine awake? I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active. Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake. Any ideas? | TITLE:
How do you keep the machine awake?
QUESTION:
I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active. Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake. Any ideas?
ANSWER:
I use this code to keep my workstation from locking. It's currently only set to move the mouse once every minute, you could easily adjust it though. It's a hack, not an elegant solution.

import java.awt.*;
import java.util.*;

public class Hal {
    public static void main(String[] args) throws Exception {
        Robot hal = new Robot();
        Random random = new Random();
        while (true) {
            hal.delay(1000 * 60);
            int x = random.nextInt() % 640;
            int y = random.nextInt() % 480;
            hal.mouseMove(x, y);
        }
    }
}

| [
"java",
"windows",
"macos",
"cross-platform",
"operating-system"
] | 33 | 43 | 55,175 | 18 | 0 | 2008-09-09T20:50:12.430000 | 2008-09-09T21:31:14.920000 |
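Beyond the mouse-jiggling hack, the question's actual ask — "signal to the OS to keep the machine awake" — has a direct Windows API: SetThreadExecutionState. A hedged sketch of the Windows half from Python via ctypes (on other platforms this sketch is deliberately a no-op; the OS X side would go through IOKit power assertions and is not shown):

```python
import ctypes
import sys

# Documented Win32 execution-state flags for SetThreadExecutionState.
ES_CONTINUOUS = 0x80000000
ES_SYSTEM_REQUIRED = 0x00000001

def keep_awake():
    """Ask Windows not to sleep while this process runs.

    Returns the previous execution state on Windows, or None on other
    platforms (this sketch only covers the Windows half)."""
    if sys.platform == "win32":
        return ctypes.windll.kernel32.SetThreadExecutionState(
            ES_CONTINUOUS | ES_SYSTEM_REQUIRED)
    return None

result = keep_awake()
print(result)
```

Unlike the Robot hack, this tells the power manager directly what the app needs and is undone by calling SetThreadExecutionState(ES_CONTINUOUS) on shutdown.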
52,880 | 582,108 | Google Reader API Unread Count | Does Google Reader have an API and if so, how can I get the count of the number of unread posts for a specific user knowing their username and password? | This URL will give you a count of unread posts per feed. You can then iterate over the feeds and sum up the counts. http://www.google.com/reader/api/0/unread-count?all=true Here is a minimalist example in Python...parsing the xml/json and summing the counts is left as an exercise for the reader: import urllib import urllib2
username = 'username@gmail.com' password = '******'
# Authenticate to obtain an Auth token auth_url = 'https://www.google.com/accounts/ClientLogin' auth_req_data = urllib.urlencode({'Email': username, 'Passwd': password, 'service': 'reader'}) auth_req = urllib2.Request(auth_url, data=auth_req_data) auth_resp = urllib2.urlopen(auth_req) auth_resp_content = auth_resp.read() auth_resp_dict = dict(x.split('=', 1) for x in auth_resp_content.split('\n') if x) auth_token = auth_resp_dict["Auth"]
# Pass the Auth token in the Authorization header header = {} header['Authorization'] = 'GoogleLogin auth=%s' % auth_token
reader_base_url = 'http://www.google.com/reader/api/0/unread-count?%s' reader_req_data = urllib.urlencode({'all': 'true', 'output': 'xml'}) reader_url = reader_base_url % (reader_req_data) reader_req = urllib2.Request(reader_url, None, header) reader_resp = urllib2.urlopen(reader_req) reader_resp_content = reader_resp.read()
print reader_resp_content And some additional links on the topic: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI How do you access an authenticated Google App Engine service from a (non-web) python client? http://blog.gpowered.net/2007/08/google-reader-api-functions.html | Google Reader API Unread Count Does Google Reader have an API and if so, how can I get the count of the number of unread posts for a specific user knowing their username and password? | TITLE:
Google Reader API Unread Count
QUESTION:
Does Google Reader have an API and if so, how can I get the count of the number of unread posts for a specific user knowing their username and password?
ANSWER:
This URL will give you a count of unread posts per feed. You can then iterate over the feeds and sum up the counts. http://www.google.com/reader/api/0/unread-count?all=true Here is a minimalist example in Python...parsing the xml/json and summing the counts is left as an exercise for the reader: import urllib import urllib2
username = 'username@gmail.com' password = '******'
# Authenticate to obtain an Auth token auth_url = 'https://www.google.com/accounts/ClientLogin' auth_req_data = urllib.urlencode({'Email': username, 'Passwd': password, 'service': 'reader'}) auth_req = urllib2.Request(auth_url, data=auth_req_data) auth_resp = urllib2.urlopen(auth_req) auth_resp_content = auth_resp.read() auth_resp_dict = dict(x.split('=', 1) for x in auth_resp_content.split('\n') if x) auth_token = auth_resp_dict["Auth"]
# Pass the Auth token in the Authorization header header = {} header['Authorization'] = 'GoogleLogin auth=%s' % auth_token
reader_base_url = 'http://www.google.com/reader/api/0/unread-count?%s' reader_req_data = urllib.urlencode({'all': 'true', 'output': 'xml'}) reader_url = reader_base_url % (reader_req_data) reader_req = urllib2.Request(reader_url, None, header) reader_resp = urllib2.urlopen(reader_req) reader_resp_content = reader_resp.read()
print reader_resp_content And some additional links on the topic: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI How do you access an authenticated Google App Engine service from a (non-web) python client? http://blog.gpowered.net/2007/08/google-reader-api-functions.html | [
"api",
"google-reader"
] | 26 | 45 | 16,485 | 4 | 0 | 2008-09-09T20:53:57.337000 | 2009-02-24T15:06:23.323000 |
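The answer above leaves "parsing the xml and summing the counts" as an exercise. A minimal sketch of that step in Java, against a hand-written sample response: the class name, and the `object`/`list`/`number name="count"` element shapes, are assumptions inferred from the answer's description (the Reader API has long since been shut down), so treat the schema as illustrative rather than verified.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class UnreadCounts {
    /** Sums every <number name="count"> element in an unread-count response. */
    static int totalUnread(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            NodeList numbers = doc.getElementsByTagName("number");
            int total = 0;
            for (int i = 0; i < numbers.getLength(); i++) {
                Element e = (Element) numbers.item(i);
                if ("count".equals(e.getAttribute("name"))) {
                    total += Integer.parseInt(e.getTextContent().trim());
                }
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hand-written sample in roughly the shape the endpoint returned.
        String sample = "<object><list name=\"unreadcounts\">"
                + "<object><string name=\"id\">feed/http://a.example/rss</string>"
                + "<number name=\"count\">4</number></object>"
                + "<object><string name=\"id\">feed/http://b.example/rss</string>"
                + "<number name=\"count\">7</number></object>"
                + "</list></object>";
        System.out.println(totalUnread(sample)); // prints 11
    }
}
```

The same filter-and-sum loop works unchanged on the JSON variant if you swap the DOM parser for a JSON one.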
52,883 | 52,920 | Graph searching algorithm | I'm looking for a graph algorithm with some unusual properties. Each edge in the graph is either an "up" edge or a "down" edge. A valid path can go an indefinite number of "up"'s followed by an indefinite number of "down"'s, or vice versa. However it cannot change direction more than once. E.g., a valid path might be A "up" B "up" C "down" E "down" F an invalid path might be A "up" B "down" C "up" D What is a good algorithm for finding the shortest valid path between two nodes? What about finding all of the equal length shortest paths? | Assuming you don't have any heuristics, a variation of dijkstra's algorithm should suffice pretty well. Every time you consider a new edge, store information about its "ancestors". Then, check for the invariant (only one direction change), and backtrack if it is violated. The ancestors here are all the edges that were traversed to get to the current node, along the shortest path. One good way to store the ancestor information would be as a pair of numbers. If U is up, and D is down, a particular edge's ancestors could be UUUDDDD, which would be the pair 3, 4. You will not need a third number, because of the invariant. Since we have used dijkstra's algorithm, finding multiple shortest paths is already taken care of. | Graph searching algorithm I'm looking for a graph algorithm with some unusual properties. Each edge in the graph is either an "up" edge or a "down" edge. A valid path can go an indefinite number of "up"'s followed by an indefinite number of "down"'s, or vice versa. However it cannot change direction more than once. E.g., a valid path might be A "up" B "up" C "down" E "down" F an invalid path might be A "up" B "down" C "up" D What is a good algorithm for finding the shortest valid path between two nodes? What about finding all of the equal length shortest paths? | TITLE:
Graph searching algorithm
QUESTION:
I'm looking for a graph algorithm with some unusual properties. Each edge in the graph is either an "up" edge or a "down" edge. A valid path can go an indefinite number of "up"'s followed by an indefinite number of "down"'s, or vice versa. However it cannot change direction more than once. E.g., a valid path might be A "up" B "up" C "down" E "down" F an invalid path might be A "up" B "down" C "up" D What is a good algorithm for finding the shortest valid path between two nodes? What about finding all of the equal length shortest paths?
ANSWER:
Assuming you don't have any heuristics, a variation of dijkstra's algorithm should suffice pretty well. Every time you consider a new edge, store information about its "ancestors". Then, check for the invariant (only one direction change), and backtrack if it is violated. The ancestors here are all the edges that were traversed to get to the current node, along the shortest path. One good way to store the ancestor information would be as a pair of numbers. If U is up, and D is down, a particular edge's ancestors could be UUUDDDD, which would be the pair 3, 4. You will not need a third number, because of the invariant. Since we have used dijkstra's algorithm, finding multiple shortest paths is already taken care of. | [
"graph-theory",
"graph-algorithm"
] | 7 | 11 | 3,614 | 5 | 0 | 2008-09-09T20:54:55.617000 | 2008-09-09T21:14:27.100000 |
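One concrete way to realize the answer's state-tracking idea is to search over (node, phase) pairs rather than bare nodes, where the phase records how many direction changes the partial path has used. The sketch below uses BFS, which is Dijkstra's algorithm specialized to unit edge weights; the class, the phase encoding, and the helper names are my own, not from the answer.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

public class UpDownPaths {
    static class Edge {
        final String to;
        final boolean up;
        Edge(String to, boolean up) { this.to = to; this.up = up; }
    }

    // Phase of a partial path: 0 = no edges yet, 1 = ups only, 2 = downs only,
    // 3 = ups then downs, 4 = downs then ups; -1 means a second direction change.
    static int next(int phase, boolean up) {
        if (phase == 0) return up ? 1 : 2;
        if (phase == 1) return up ? 1 : 3;
        if (phase == 2) return up ? 4 : 2;
        if (phase == 3) return up ? -1 : 3;
        return up ? 4 : -1; // phase == 4
    }

    /** Fewest edges on a valid path, via BFS over (node, phase) states; -1 if none. */
    static int shortest(Map<String, List<Edge>> g, String src, String dst) {
        Map<String, int[]> dist = new HashMap<>(); // node -> distance per phase
        Queue<String[]> queue = new ArrayDeque<>(); // entries: {node, phase}
        dist.put(src, unreached());
        dist.get(src)[0] = 0;
        queue.add(new String[] {src, "0"});
        while (!queue.isEmpty()) {
            String[] cur = queue.poll();
            int phase = Integer.parseInt(cur[1]);
            int d = dist.get(cur[0])[phase];
            for (Edge e : g.getOrDefault(cur[0], Arrays.<Edge>asList())) {
                int np = next(phase, e.up);
                if (np < 0) continue; // the path would change direction twice: prune
                int[] dv = dist.computeIfAbsent(e.to, k -> unreached());
                if (d + 1 < dv[np]) { // unit weights: first visit of a state is shortest
                    dv[np] = d + 1;
                    queue.add(new String[] {e.to, Integer.toString(np)});
                }
            }
        }
        int best = Integer.MAX_VALUE;
        if (dist.containsKey(dst)) for (int x : dist.get(dst)) best = Math.min(best, x);
        return best == Integer.MAX_VALUE ? -1 : best;
    }

    static int[] unreached() {
        int[] a = new int[5];
        Arrays.fill(a, Integer.MAX_VALUE);
        return a;
    }

    /** The question's valid example: A up B up C down E down F. */
    static int validExample() {
        Map<String, List<Edge>> g = new HashMap<>();
        g.put("A", Arrays.asList(new Edge("B", true)));
        g.put("B", Arrays.asList(new Edge("C", true)));
        g.put("C", Arrays.asList(new Edge("E", false)));
        g.put("E", Arrays.asList(new Edge("F", false)));
        return shortest(g, "A", "F");
    }

    /** The question's invalid example: A up B down C up D is pruned. */
    static int invalidExample() {
        Map<String, List<Edge>> g = new HashMap<>();
        g.put("A", Arrays.asList(new Edge("B", true)));
        g.put("B", Arrays.asList(new Edge("C", false)));
        g.put("C", Arrays.asList(new Edge("D", true)));
        return shortest(g, "A", "D");
    }

    public static void main(String[] args) {
        System.out.println(validExample());   // prints 4
        System.out.println(invalidExample()); // prints -1
    }
}
```

For weighted edges, replace the queue with a priority queue keyed on distance and this becomes exactly the Dijkstra variant the answer describes; enumerating all equal-length shortest paths then amounts to walking back from every `(dst, phase)` state that attains the minimum.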
52,898 | 52,901 | What is the use of the square brackets [] in sql statements? | I've noticed that Visual Studio 2008 is placing square brackets around column names in SQL. Do the brackets offer any advantage? When I hand code T-SQL I've never bothered with them. Example: Visual Studio: SELECT [column1], [column2] etc... My own way: SELECT column1, column2 etc... | The brackets are required if you use keywords or special chars in the column names or identifiers. You could name a column [First Name] (with a space) – but then you'd need to use brackets every time you referred to that column. The newer tools add them everywhere just in case or for consistency. | What is the use of the square brackets [] in sql statements? I've noticed that Visual Studio 2008 is placing square brackets around column names in SQL. Do the brackets offer any advantage? When I hand code T-SQL I've never bothered with them. Example: Visual Studio: SELECT [column1], [column2] etc... My own way: SELECT column1, column2 etc... | TITLE:
What is the use of the square brackets [] in sql statements?
QUESTION:
I've noticed that Visual Studio 2008 is placing square brackets around column names in SQL. Do the brackets offer any advantage? When I hand code T-SQL I've never bothered with them. Example: Visual Studio: SELECT [column1], [column2] etc... My own way: SELECT column1, column2 etc...
ANSWER:
The brackets are required if you use keywords or special chars in the column names or identifiers. You could name a column [First Name] (with a space) – but then you'd need to use brackets every time you referred to that column. The newer tools add them everywhere just in case or for consistency. | [
"sql-server",
"t-sql"
] | 232 | 250 | 235,003 | 10 | 0 | 2008-09-09T21:02:24.337000 | 2008-09-09T21:03:57.483000 |
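When generating such SQL from code, the one escaping rule to remember is that a literal `]` inside a bracketed identifier is doubled (T-SQL's built-in QUOTENAME() applies the same rule server-side). A tiny hypothetical helper, not part of any tool mentioned above:

```java
public class SqlIdent {
    /** Quotes a T-SQL identifier in brackets, doubling any ']' it contains. */
    static String bracket(String name) {
        return "[" + name.replace("]", "]]") + "]";
    }

    public static void main(String[] args) {
        System.out.println("SELECT " + bracket("First Name") + ", " + bracket("Order")
                + " FROM " + bracket("dbo") + "." + bracket("My]Table"));
        // prints: SELECT [First Name], [Order] FROM [dbo].[My]]Table]
    }
}
```

Quoting unconditionally, as the Visual Studio designers do, means names like `Order` or `First Name` never break the generated statement.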
52,917 | 53,040 | How do you delete wild card cookies in Rails? | How do you delete a cookie in rails that was set with a wild card domain: cookies[:foo] = {:value => 'bar',:domain => '.acme.com'} When, following the docs, you do: cookies.delete:foo the logs say Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT Notice that the domain is missing (it seems to use the default params for everything). Respecting the RFC, of course the cookie's still there, Browser -> ctrl / cmd - L -> javascript:alert(document.cookie); Voilà! Q: What's the "correct" way to delete such a cookie? | Pass the:domain on delete as well. Here's the source of that method: # Removes the cookie on the client machine by setting the value to an empty string # and setting its expiration date into the past. Like []=, you can pass in an options # hash to delete cookies with extra data such as a +path+. def delete(name, options = {}) options.stringify_keys! set_cookie(options.merge("name" => name.to_s, "value" => "", "expires" => Time.at(0))) end As you can see, it just sets an empty cookie with the name you gave, set to expire in 1969, and with no contents. But it does merge in any other options you give, so you can do: cookies.delete:foo,:domain => '.acme.com' And you're set. | How do you delete wild card cookies in Rails? How do you delete a cookie in rails that was set with a wild card domain: cookies[:foo] = {:value => 'bar',:domain => '.acme.com'} When, following the docs, you do: cookies.delete:foo the logs say Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT Notice that the domain is missing (it seems to use the default params for everything). Respecting the RFC, of course the cookie's still there, Browser -> ctrl / cmd - L -> javascript:alert(document.cookie); Voilà! Q: What's the "correct" way to delete such a cookie? | TITLE:
How do you delete wild card cookies in Rails?
QUESTION:
How do you delete a cookie in rails that was set with a wild card domain: cookies[:foo] = { :value => 'bar', :domain => '.acme.com' } When, following the docs, you do: cookies.delete :foo the logs say Cookie set: foo=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT Notice that the domain is missing (it seems to use the default params for everything). Respecting the RFC, of course the cookie's still there, Browser -> ctrl / cmd - L -> javascript:alert(document.cookie); Voilà! Q: What's the "correct" way to delete such a cookie?
ANSWER:
Pass the :domain on delete as well. Here's the source of that method: # Removes the cookie on the client machine by setting the value to an empty string # and setting its expiration date into the past. Like []=, you can pass in an options # hash to delete cookies with extra data such as a +path+. def delete(name, options = {}) options.stringify_keys! set_cookie(options.merge("name" => name.to_s, "value" => "", "expires" => Time.at(0))) end As you can see, it just sets an empty cookie with the name you gave, set to expire in 1969, and with no contents. But it does merge in any other options you give, so you can do: cookies.delete :foo, :domain => '.acme.com' And you're set.
"ruby-on-rails"
] | 13 | 20 | 8,903 | 1 | 0 | 2008-09-09T21:12:45.620000 | 2008-09-09T22:23:35.297000 |
52,924 | 52,936 | Separating CSS deployment from rest of site | Where I work, the design and development departments are totally separated, however we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. Because of this, we are deploying the CSS separately, and it lives outside source control. However, lately, we've run into a few problems with new CSS not being synched for up site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the exisiting site. I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases. | I'll advocate the use of source control here. Especially if the development team uses branching to deal with structured releases. That way, whatever CSS is checked into the production branch is what should be deployed... and if it is updated mid-stream, it's the responsibility of the person (designer?) that updates it to promote that code using whatever system your company uses to promote changes to production. | Separating CSS deployment from rest of site Where I work, the design and development departments are totally separated, however we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. 
Because of this, we are deploying the CSS separately, and it lives outside source control. However, lately, we've run into a few problems with new CSS not being synched for up site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the exisiting site. I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases. | TITLE:
Separating CSS deployment from rest of site
QUESTION:
Where I work, the design and development departments are totally separated, however we (the design department) are responsible for managing the CSS for our sites. Typically, new CSS needs to be released to the production server far more often than new website code. Because of this, we are deploying the CSS separately, and it lives outside source control. However, lately, we've run into a few problems with new CSS not being synched up for site releases, and in general the process is a huge headache. I've been pushing to get the CSS under some kind of source control, but having trouble finding a good deployment method that makes everyone happy. Our biggest problem is managing changes that affect current portions of the site, where the CSS changes need to go live before the site changes, but not break anything on the existing site. I won't go into the finer details of the weird culture between designers and devs here, but I was wondering what experience others have had in managing large amounts of CSS (50+ files, thousands and thousands of lines) that needs to be constantly updated and released independent of site releases.
ANSWER:
I'll advocate the use of source control here. Especially if the development team uses branching to deal with structured releases. That way, whatever CSS is checked into the production branch is what should be deployed... and if it is updated mid-stream, it's the responsibility of the person (designer?) that updates it to promote that code using whatever system your company uses to promote changes to production. | [
"css",
"version-control",
"deployment",
"project-management"
] | 5 | 4 | 398 | 2 | 0 | 2008-09-09T21:17:55.063000 | 2008-09-09T21:21:30.390000 |
52,927 | 52,940 | Console.WriteLine and generic List | I frequently find myself writing code like this: List list = new List { 1, 3, 5 }; foreach (int i in list) { Console.Write("{0}\t", i.ToString()); } Console.WriteLine(); Better would be something like this: List list = new List { 1, 3, 5 }; Console.WriteLine("{0}\t", list); I suspect there's some clever way of doing this, but I don't see it. Does anybody have a better solution than the first block? | Do this: list.ForEach(i => Console.Write("{0}\t", i)); EDIT: To others that have responded - he wants them all on the same line, with tabs between them.:) | Console.WriteLine and generic List I frequently find myself writing code like this: List list = new List { 1, 3, 5 }; foreach (int i in list) { Console.Write("{0}\t", i.ToString()); } Console.WriteLine(); Better would be something like this: List list = new List { 1, 3, 5 }; Console.WriteLine("{0}\t", list); I suspect there's some clever way of doing this, but I don't see it. Does anybody have a better solution than the first block? | TITLE:
Console.WriteLine and generic List
QUESTION:
I frequently find myself writing code like this: List<int> list = new List<int> { 1, 3, 5 }; foreach (int i in list) { Console.Write("{0}\t", i.ToString()); } Console.WriteLine(); Better would be something like this: List<int> list = new List<int> { 1, 3, 5 }; Console.WriteLine("{0}\t", list); I suspect there's some clever way of doing this, but I don't see it. Does anybody have a better solution than the first block?
ANSWER:
Do this: list.ForEach(i => Console.Write("{0}\t", i)); EDIT: To others that have responded - he wants them all on the same line, with tabs between them.:) | [
"c#",
"generics",
"console",
"list"
] | 55 | 111 | 167,289 | 9 | 0 | 2008-09-09T21:19:44.060000 | 2008-09-09T21:22:46.253000 |
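The accepted one-liner prints as it iterates; an alternative is to build the whole tab-separated line first and write it once. Sketched here in Java (the language used for the examples in this set) — in C# the analogous move is joining the elements with string.Join before a single Console.WriteLine:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TabJoin {
    /** Builds the tab-separated line instead of printing element-by-element. */
    static String tabJoin(List<Integer> list) {
        return list.stream().map(String::valueOf).collect(Collectors.joining("\t"));
    }

    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 3, 5);
        System.out.println(tabJoin(list)); // one write, elements separated by tabs
    }
}
```

Building the string first also avoids the trailing tab that the per-element version emits after the last item.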
52,931 | 52,948 | Open Source Actionscript 3 or Javascript date utility classes? | I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Another examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built. Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there. | as3corelib has the DateUtil class and it should be pretty reliable since it's written by some Adobe employees. I haven't encountered any problems with it. | Open Source Actionscript 3 or Javascript date utility classes? I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Another examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built. Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there. | TITLE:
Open Source Actionscript 3 or Javascript date utility classes?
QUESTION:
I was wondering if anyone could point to an Open Source date utility class that is fairly robust. I find myself rolling my own when I want to do a lot of things I take for granted in C# and Java. For instance I did find a decent example of a DateDiff() function that I tore apart and another DatePart() function. Other examples would be parsing different date/time formats. I'm trying to avoid reinventing something if it's already built. Another possibility may be a nice set of Javascript files that I can convert to ActionScript 3. So far I've found DateJS but I want to get a good idea of what is out there.
ANSWER:
as3corelib has the DateUtil class and it should be pretty reliable since it's written by some Adobe employees. I haven't encountered any problems with it. | [
"javascript",
"apache-flex",
"actionscript-3"
] | 2 | 3 | 1,428 | 2 | 0 | 2008-09-09T21:19:53.857000 | 2008-09-09T21:24:23.517000 |
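For reference, the kind of DateDiff() helper the question describes is only a few lines once dates are reduced to milliseconds since the epoch. A sketch in Java (as3corelib's DateUtil covers similar ground for ActionScript, though its exact API may differ from this); the class and helper names are mine:

```java
import java.util.Calendar;
import java.util.TimeZone;
import java.util.concurrent.TimeUnit;

public class DateDiffDemo {
    /** Whole days between two instants, truncating toward zero. */
    static long daysBetween(Calendar from, Calendar to) {
        return TimeUnit.MILLISECONDS.toDays(to.getTimeInMillis() - from.getTimeInMillis());
    }

    /** Midnight UTC on the given date (UTC sidesteps daylight-saving surprises). */
    static Calendar utcDate(int year, int month0, int day) {
        Calendar c = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        c.clear();
        c.set(year, month0, day);
        return c;
    }

    public static void main(String[] args) {
        // Month is 0-based: 8 = September.
        System.out.println(daysBetween(utcDate(2008, 8, 1), utcDate(2008, 8, 9))); // prints 8
    }
}
```

Pinning both endpoints to the same zone, as utcDate does, is what makes a day-difference well defined — the part hand-rolled DateDiff implementations most often get wrong.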
52,933 | 52,955 | How can I modify a Work Item type to include additional information in TFS? | TFS2008. I'd like to track task points on a Task work item, but there isn't anywhere (other than the description) to record this. I'd like to add a dropdown with 0, 1, 2, 3, 5, 8, etc, so these task points can be exported in reports. | Use the process template editor, available as part of the Visual Studio Team System 2008 Team Foundation Server Power Tools. | How can I modify a Work Item type to include additional information in TFS? TFS2008. I'd like to track task points on a Task work item, but there isn't anywhere (other than the description) to record this. I'd like to add a dropdown with 0, 1, 2, 3, 5, 8, etc, so these task points can be exported in reports. | TITLE:
How can I modify a Work Item type to include additional information in TFS?
QUESTION:
TFS2008. I'd like to track task points on a Task work item, but there isn't anywhere (other than the description) to record this. I'd like to add a dropdown with 0, 1, 2, 3, 5, 8, etc, so these task points can be exported in reports.
ANSWER:
Use the process template editor, available as part of the Visual Studio Team System 2008 Team Foundation Server Power Tools. | [
"tfs",
"workitem"
] | 4 | 4 | 1,459 | 2 | 0 | 2008-09-09T21:20:38.800000 | 2008-09-09T21:26:39.987000 |
52,950 | 53,116 | How to make git ignore changes in case? | I'm not too sure what is going on here, but sometimes a particular file in my repository will change the case of its name. e.g.,: before: File.h after: file.h I don't really care why this is happening, but this causes git to think it is a new file, and then I have to go and change the file name back. Can you just make git ignore case changes? [edit] I suspect it is Visual Studio doing something weird with that particular file, because it seems to happen most often when I open and save it after changes. I don't have any way to fix bugs in VS however, but git should be a bit more capable I hope. | Since version 1.5.6 there is an ignorecase option available in the [core] section of.git/config e.g. add ignorecase = true To change it for just one repo, from that folder run: git config core.ignorecase true To change it globally: git config --global core.ignorecase true | How to make git ignore changes in case? I'm not too sure what is going on here, but sometimes a particular file in my repository will change the case of its name. e.g.,: before: File.h after: file.h I don't really care why this is happening, but this causes git to think it is a new file, and then I have to go and change the file name back. Can you just make git ignore case changes? [edit] I suspect it is Visual Studio doing something weird with that particular file, because it seems to happen most often when I open and save it after changes. I don't have any way to fix bugs in VS however, but git should be a bit more capable I hope. | TITLE:
How to make git ignore changes in case?
QUESTION:
I'm not too sure what is going on here, but sometimes a particular file in my repository will change the case of its name. e.g.,: before: File.h after: file.h I don't really care why this is happening, but this causes git to think it is a new file, and then I have to go and change the file name back. Can you just make git ignore case changes? [edit] I suspect it is Visual Studio doing something weird with that particular file, because it seems to happen most often when I open and save it after changes. I don't have any way to fix bugs in VS however, but git should be a bit more capable I hope.
ANSWER:
Since version 1.5.6 there is an ignorecase option available in the [core] section of.git/config e.g. add ignorecase = true To change it for just one repo, from that folder run: git config core.ignorecase true To change it globally: git config --global core.ignorecase true | [
"windows",
"git"
] | 150 | 242 | 142,929 | 7 | 0 | 2008-09-09T21:24:53.300000 | 2008-09-09T23:14:28.667000 |
52,954 | 52,973 | How do I get javadoc to link to the Java API using an Ant task? | Right now my ant task looks like. And I'm getting this warning: javadoc: warning - Error fetching URL: http://java.sun.com/j2se/1.5.0/docs/api/package-list How do I get the javadoc to properly link to the API? I am behind a proxy. | You probably need the http.proxyHost and http.proxyPort system properties set. For example, ANT_OPTS="-Dhttp.proxyHost=proxy.y.com" ant doc Alternatively, you could set the "offline" flag and provide a package list, but that could be a pain for the Java core. | How do I get javadoc to link to the Java API using an Ant task? Right now my ant task looks like. And I'm getting this warning: javadoc: warning - Error fetching URL: http://java.sun.com/j2se/1.5.0/docs/api/package-list How do I get the javadoc to properly link to the API? I am behind a proxy. | TITLE:
How do I get javadoc to link to the Java API using an Ant task?
QUESTION:
Right now my ant task looks like. And I'm getting this warning: javadoc: warning - Error fetching URL: http://java.sun.com/j2se/1.5.0/docs/api/package-list How do I get the javadoc to properly link to the API? I am behind a proxy.
ANSWER:
You probably need the http.proxyHost and http.proxyPort system properties set. For example, ANT_OPTS="-Dhttp.proxyHost=proxy.y.com" ant doc Alternatively, you could set the "offline" flag and provide a package list, but that could be a pain for the Java core. | [
"ant",
"javadoc"
] | 4 | 6 | 6,635 | 3 | 0 | 2008-09-09T21:26:33.920000 | 2008-09-09T21:34:40.483000 |
52,964 | 52,988 | SQL Server Random Sort | What is the best way to sort the results of a sql query into a random order within a stored procedure? | This is a duplicate of SO# 19412. Here's the answer I gave there: select top 1 * from mytable order by newid() In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable: SELECT FirstName, LastName FROM Contact TABLESAMPLE (1 ROWS); | SQL Server Random Sort What is the best way to sort the results of a sql query into a random order within a stored procedure? | TITLE:
SQL Server Random Sort
QUESTION:
What is the best way to sort the results of a sql query into a random order within a stored procedure?
ANSWER:
This is a duplicate of SO# 19412. Here's the answer I gave there: select top 1 * from mytable order by newid() In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable: SELECT FirstName, LastName FROM Contact TABLESAMPLE (1 ROWS); | [
"sql-server"
] | 59 | 87 | 63,374 | 4 | 0 | 2008-09-09T21:31:07.933000 | 2008-09-09T21:40:24.727000 |
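If shuffling inside the stored procedure proves awkward, the same effect is achievable client-side after fetching the rows. A sketch in Java using Collections.shuffle (a Fisher–Yates shuffle under the hood); the class name and seeded-Random usage are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleRows {
    /** Returns a shuffled copy; pass a seeded Random only when you need repeatability. */
    static List<String> shuffled(List<String> rows, Random rng) {
        List<String> copy = new ArrayList<>(rows); // leave the fetched result list intact
        Collections.shuffle(copy, rng);
        return copy;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        Collections.addAll(rows, "Alice", "Bob", "Carol", "Dave");
        System.out.println(shuffled(rows, new Random()));
    }
}
```

This trades the ORDER BY NEWID() sort cost in the database for a linear-time shuffle in the application, at the price of pulling the full result set across the wire.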
52,965 | 52,978 | Is business logic subjective? | I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID — he would call this “business logic” What approach should I take to define “business logic” in a very objective way without offending my team lead? | I really think you just need to agree on a clear definition of what you mean when you say "business logic". If you need to be "politically sensitive", you could even craft the definition around your team lead's understanding, then come up with another term ("domain rules"?) that defines what you want to talk about. Words and terms are relatively subjective -- of course, once you leave that company you will need to 're-learn' industry standards, so it's always better to stick with them if you can, but the main goal is to communicate clearly and get work done. | Is business logic subjective? I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID — he would call this “business logic” What approach should I take to define “business logic” in a very objective way without offending my team lead? | TITLE:
Is business logic subjective?
QUESTION:
I have a team lead who seems to think that business logic is very subjective, to the point that if my stored procedure has a WHERE ID = @ID — he would call this “business logic” What approach should I take to define “business logic” in a very objective way without offending my team lead?
ANSWER:
I really think you just need to agree on a clear definition of what you mean when you say "business logic". If you need to be "politically sensitive", you could even craft the definition around your team lead's understanding, then come up with another term ("domain rules"?) that defines what you want to talk about. Words and terms are relatively subjective -- of course, once you leave that company you will need to 're-learn' industry standards, so it's always better to stick with them if you can, but the main goal is to communicate clearly and get work done. | [
"business-logic"
] | 6 | 9 | 763 | 3 | 0 | 2008-09-09T21:31:12.643000 | 2008-09-09T21:35:54.427000 |
52,981 | 53,231 | Cannot delete from the database...? | So, I have 2 database instances, one is for development in general, another was copied from development for unit tests. Something changed in the development database that I can't figure out, and I don't know how to see what is different. When I try to delete from a particular table, with for example: delete from myschema.mytable where id = 555 I get the following normal response from the unit test DB indicating no row was deleted: SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000 However, the development database fails to delete at all with the following error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884 My best guess is there is some trigger or view that was added or changed that is causing the problem, but I have no idea how to go about finding the problem... has anyone had this problem or know how to figure out what the root of the problem is? (note that this is a DB2 database) | Hmm, applying the great oracle to this question, I came up with: http://bytes.com/forum/thread830774.html It seems to suggest that another table has a foreign key pointing at the problematic one, when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key as well) Does that help any? | Cannot delete from the database...? So, I have 2 database instances, one is for development in general, another was copied from development for unit tests. Something changed in the development database that I can't figure out, and I don't know how to see what is different. 
When I try to delete from a particular table, with for example: delete from myschema.mytable where id = 555 I get the following normal response from the unit test DB indicating no row was deleted: SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000 However, the development database fails to delete at all with the following error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884 My best guess is there is some trigger or view that was added or changed that is causing the problem, but I have no idea how to go about finding the problem... has anyone had this problem or know how to figure out what the root of the problem is? (note that this is a DB2 database) | TITLE:
Cannot delete from the database...?
QUESTION:
So, I have 2 database instances, one is for development in general, another was copied from development for unit tests. Something changed in the development database that I can't figure out, and I don't know how to see what is different. When I try to delete from a particular table, with for example: delete from myschema.mytable where id = 555 I get the following normal response from the unit test DB indicating no row was deleted: SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000 However, the development database fails to delete at all with the following error: DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884 My best guess is there is some trigger or view that was added or changed that is causing the problem, but I have no idea how to go about finding the problem... has anyone had this problem or know how to figure out what the root of the problem is? (note that this is a DB2 database)
ANSWER:
Hmm, applying the great oracle to this question, I came up with: http://bytes.com/forum/thread830774.html It seems to suggest that another table has a foreign key pointing at the problematic one, when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key as well) Does that help any? | [
"database",
"db2"
] | 1 | 1 | 3,172 | 7 | 0 | 2008-09-09T21:36:35.267000 | 2008-09-10T00:37:44.630000 |
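The foreign-key hypothesis in the DB2 answer above can be checked directly against the system catalog. This is a hedged sketch using DB2's SYSCAT.REFERENCES view; the schema and table names are the placeholders from the question, not real objects:

```sql
-- Find every foreign-key constraint that references the table we cannot
-- delete from (MYSCHEMA.MYTABLE is the example name from the question).
SELECT constname,     -- name of the FK constraint
       tabschema,     -- schema of the referencing (child) table
       tabname,       -- referencing (child) table
       deleterule     -- A = NO ACTION, C = CASCADE, N = SET NULL, R = RESTRICT
FROM   syscat.references
WHERE  reftabschema = 'MYSCHEMA'
  AND  reftabname   = 'MYTABLE';
```

Any row returned names a child table whose foreign key would have to be dropped (and later recreated) for the delete to proceed.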
52,984 | 52,994 | How do I generate Emma code coverage reports using Ant? | How do I setup an Ant task to generate Emma code coverage reports? | To answer questions about where the source and instrumented directories are (these can be switched to whatever your standard directory structure is): Classpaths: First you need to setup where Ant can find the Emma libraries: Then import the task: Then instrument the code: Then run a target with the proper VM arguments like: Finally generate your report: | How do I generate Emma code coverage reports using Ant? How do I setup an Ant task to generate Emma code coverage reports? | TITLE:
How do I generate Emma code coverage reports using Ant?
QUESTION:
How do I set up an Ant task to generate Emma code coverage reports?
ANSWER:
To answer questions about where the source and instrumented directories are (these can be switched to whatever your standard directory structure is): Classpaths: First you need to set up where Ant can find the Emma libraries: Then import the task: Then instrument the code: Then run a target with the proper VM arguments like: Finally generate your report: | [
"ant",
"code-analysis",
"emma"
] | 26 | 14 | 26,538 | 3 | 0 | 2008-09-09T21:37:43.813000 | 2008-09-09T21:48:46.703000 |
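The build-file snippets this Ant/Emma answer refers to were lost in extraction. What follows is a hedged reconstruction of a minimal Emma setup built from Emma's documented Ant tasks (<taskdef>, <emma>/<instr>, <report>); the directory layout, file names, and target names are illustrative assumptions, not the original author's snippets:

```xml
<!-- Illustrative sketch: paths, class names, and target names are assumptions. -->
<path id="emma.lib">
  <pathelement location="lib/emma.jar"/>
  <pathelement location="lib/emma_ant.jar"/>
</path>

<taskdef resource="emma_ant.properties" classpathref="emma.lib"/>

<target name="instrument" depends="compile">
  <emma enabled="true">
    <instr destdir="build/instr-classes"
           metadatafile="coverage/metadata.emma"
           merge="true">
      <instrpath>
        <pathelement location="build/classes"/>
      </instrpath>
    </instr>
  </emma>
</target>

<target name="test" depends="instrument">
  <!-- Instrumented classes must come first on the classpath. -->
  <java classname="com.example.TestRunner" fork="true">
    <classpath>
      <pathelement location="build/instr-classes"/>
      <pathelement location="build/classes"/>
      <path refid="emma.lib"/>
    </classpath>
    <jvmarg value="-Demma.coverage.out.file=coverage/coverage.emma"/>
    <jvmarg value="-Demma.coverage.out.merge=true"/>
  </java>
</target>

<target name="report" depends="test">
  <emma enabled="true">
    <report sourcepath="src">
      <fileset dir="coverage">
        <include name="*.emma"/>
      </fileset>
      <txt outfile="coverage/coverage.txt"/>
      <html outfile="coverage/coverage.html"/>
    </report>
  </emma>
</target>
```

The ordering mirrors the answer: define the classpath, import the task, instrument, run with the Emma JVM arguments, then report.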
52,989 | 53,106 | Using generic classes with ObjectDataSource | I have a generic Repository class I want to use with an ObjectDataSource. Repository lives in a separate project called DataAccess. According to this post from the MS newsgroups (relevant part copied below): Internally, the ObjectDataSource is calling Type.GetType(string) to get the type, so we need to follow the guideline documented in Type.GetType on how to get type using generics. You can refer to MSDN Library on Type.GetType: http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx From the document, you will learn that you need to use backtick (`) to denotes the type name which is using generics. Also, here we must specify the assembly name in the type name string. So, for your question, the answer is to use type name like follows: TypeName="TestObjectDataSourceAssembly.MyDataHandler`1[System.String],TestObjectDataSourceAssembly" Okay, makes sense. When I try it, however, the page throws an exception: [InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'MyDataSource' could not be found.] The curious thing is that this only happens when I'm viewing the page. When I open the "Configure Data Source" dialog from the VS2008 designer, it properly shows me the methods on my generic Repository class. Passing the TypeName string to Type.GetType() while debugging also returns a valid type. So what gives? | Do something like this. Type type = typeof(Repository get the value of assemblyQualifiedName and paste it into the TypeName field. Note that Type.GetType(string), the value passed in must be The assembly-qualified name of the type to get. See AssemblyQualifiedName. If the type is in the currently executing assembly or in Mscorlib.dll, it is sufficient to supply the type name qualified by its namespace. 
So, it may work by passing in that string in your code, because that class is in the currently executing assembly (where you are calling it), whereas the ObjectDataSource is not. Most likely the type you are looking for is MyProject.Repository`1[[MyProject.MessageCategory, DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null | Using generic classes with ObjectDataSource I have a generic Repository class I want to use with an ObjectDataSource. Repository lives in a separate project called DataAccess. According to this post from the MS newsgroups (relevant part copied below): Internally, the ObjectDataSource is calling Type.GetType(string) to get the type, so we need to follow the guideline documented in Type.GetType on how to get type using generics. You can refer to MSDN Library on Type.GetType: http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx From the document, you will learn that you need to use backtick (`) to denotes the type name which is using generics. Also, here we must specify the assembly name in the type name string. So, for your question, the answer is to use type name like follows: TypeName="TestObjectDataSourceAssembly.MyDataHandler`1[System.String],TestObjectDataSourceAssembly" Okay, makes sense. When I try it, however, the page throws an exception: [InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'MyDataSource' could not be found.] The curious thing is that this only happens when I'm viewing the page. When I open the "Configure Data Source" dialog from the VS2008 designer, it properly shows me the methods on my generic Repository class. Passing the TypeName string to Type.GetType() while debugging also returns a valid type. So what gives? | TITLE:
Using generic classes with ObjectDataSource
QUESTION:
I have a generic Repository class I want to use with an ObjectDataSource. Repository lives in a separate project called DataAccess. According to this post from the MS newsgroups (relevant part copied below): Internally, the ObjectDataSource is calling Type.GetType(string) to get the type, so we need to follow the guideline documented in Type.GetType on how to get type using generics. You can refer to MSDN Library on Type.GetType: http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx From the document, you will learn that you need to use backtick (`) to denotes the type name which is using generics. Also, here we must specify the assembly name in the type name string. So, for your question, the answer is to use type name like follows: TypeName="TestObjectDataSourceAssembly.MyDataHandler`1[System.String],TestObjectDataSourceAssembly" Okay, makes sense. When I try it, however, the page throws an exception: [InvalidOperationException: The type specified in the TypeName property of ObjectDataSource 'MyDataSource' could not be found.] The curious thing is that this only happens when I'm viewing the page. When I open the "Configure Data Source" dialog from the VS2008 designer, it properly shows me the methods on my generic Repository class. Passing the TypeName string to Type.GetType() while debugging also returns a valid type. So what gives?
ANSWER:
Do something like this: Type type = typeof(Repository&lt;MessageCategory&gt;); then get the value of type.AssemblyQualifiedName and paste it into the TypeName field. Note that for Type.GetType(string), the value passed in must be the assembly-qualified name of the type to get. See AssemblyQualifiedName. If the type is in the currently executing assembly or in Mscorlib.dll, it is sufficient to supply the type name qualified by its namespace. So, it may work by passing in that string in your code, because that class is in the currently executing assembly (where you are calling it), whereas the ObjectDataSource is not. Most likely the type you are looking for is MyProject.Repository`1[[MyProject.MessageCategory, DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], DataAccess, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null | [
"asp.net",
"generics",
"objectdatasource"
] | 9 | 13 | 5,763 | 3 | 0 | 2008-09-09T21:43:55.220000 | 2008-09-09T23:08:12.020000 |
53,002 | 53,016 | How do I create tri-state checkboxes with a TreeView control in .NET? | I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked). | If you are talking about Windows Forms, this article should help you build your tri-state TreeView: http://www.codeproject.com/KB/tree/treeviewex2003.aspx?display=Print If you need tri-state checkboxes on a treeview in ASP.NET, I think you need to use a third-party component. Take a look at this one, and click "tri-state checkboxes" on the left side: http://www.aspnetexpert.com/demos/tree/default.aspx | How do I create tri-state checkboxes with a TreeView control in .NET? I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked). | TITLE:
How do I create tri-state checkboxes with a TreeView control in .NET?
QUESTION:
I have a treeview control in a Windows Forms project that has checkboxes turned on. Because the treeview control has nested nodes, I need the checkboxes to be able to have some sort of tri-mode selection. I can't find a way to do this (I can only have the checkboxes fully checked or unchecked).
ANSWER:
If you are talking about Windows Forms, this article should help you build your tri-state TreeView: http://www.codeproject.com/KB/tree/treeviewex2003.aspx?display=Print If you need tri-state checkboxes on a treeview in ASP.NET, I think you need to use a third-party component. Take a look at this one, and click "tri-state checkboxes" on the left side: http://www.aspnetexpert.com/demos/tree/default.aspx | [
".net",
"asp.net",
"winforms"
] | 6 | 4 | 9,971 | 1 | 0 | 2008-09-09T21:56:27.170000 | 2008-09-09T22:07:21.277000 |
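Neither link survives as code here, but the tri-state behaviour itself is a small algorithm: when a box is toggled, push the new state down to every descendant, then walk upward and mark each ancestor checked, unchecked, or indeterminate according to its children. A hedged, language-neutral sketch in Python (the Node class and state names are invented for illustration; a WinForms version would hang the same logic off TreeNode events):

```python
CHECKED, UNCHECKED, INDETERMINATE = "checked", "unchecked", "indeterminate"

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.state = UNCHECKED
        self.parent = None
        self.children = children or []
        for child in self.children:
            child.parent = self

def set_state(node, state):
    """User toggled a checkbox: push the state down, then repair ancestors."""
    _propagate_down(node, state)
    _update_ancestors(node.parent)

def _propagate_down(node, state):
    node.state = state
    for child in node.children:
        _propagate_down(child, state)   # children inherit the new state

def _update_ancestors(node):
    # An ancestor whose children all agree takes that state;
    # mixed children make it indeterminate.
    while node is not None:
        states = {child.state for child in node.children}
        node.state = states.pop() if len(states) == 1 else INDETERMINATE
        node = node.parent
```

Checking one grandchild leaves its parent and the root indeterminate; checking the other sibling promotes the parent to fully checked.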
53,012 | 176,110 | Algorithm to decide if digital audio data is clipping? | Is there an algorithm or some heuristic to decide whether digital audio data is clipping? | The simple answer is that if any sample has the maximum or minimum value (-32768 and +32767 respectively for 16-bit samples), you can consider it clipping. This isn't strictly true, since that value may actually be the correct value, but there is no way to tell whether +32767 really should have been +33000. For a more complicated answer: There is such a thing as sample-counting clipping detectors that require x consecutive samples to be at the max/min value for them to be considered clipping (where x may be as high as 7). The theory here is that clipping in just a few samples is not audible. That said, there is audio equipment that clips quite audibly even at values below the maximum (and above the minimum). Typical advice is to master music to peak at -0.3 dB instead of 0.0 dB for this reason. You might want to consider any sample above that level to be clipping. It all depends on what you need it for. | Algorithm to decide if digital audio data is clipping? Is there an algorithm or some heuristic to decide whether digital audio data is clipping? | TITLE:
Algorithm to decide if digital audio data is clipping?
QUESTION:
Is there an algorithm or some heuristic to decide whether digital audio data is clipping?
ANSWER:
The simple answer is that if any sample has the maximum or minimum value (-32768 and +32767 respectively for 16-bit samples), you can consider it clipping. This isn't strictly true, since that value may actually be the correct value, but there is no way to tell whether +32767 really should have been +33000. For a more complicated answer: There is such a thing as sample-counting clipping detectors that require x consecutive samples to be at the max/min value for them to be considered clipping (where x may be as high as 7). The theory here is that clipping in just a few samples is not audible. That said, there is audio equipment that clips quite audibly even at values below the maximum (and above the minimum). Typical advice is to master music to peak at -0.3 dB instead of 0.0 dB for this reason. You might want to consider any sample above that level to be clipping. It all depends on what you need it for. | [
"algorithm",
"audio",
"heuristics"
] | 4 | 7 | 2,315 | 5 | 0 | 2008-09-09T22:04:02.690000 | 2008-10-06T20:41:41.050000 |
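Both heuristics from this answer are easy to state in code: flag samples at full scale, but only count a run of at least x consecutive full-scale samples as clipping. A hedged sketch for 16-bit signed samples (the function names and the default run length are illustrative, not from the answer):

```python
def clipped_runs(samples, threshold=32767, min_run=3):
    """Yield (start, length) for each run of at least min_run consecutive
    samples whose absolute value reaches the clipping threshold.
    samples: a sequence of signed integer sample values."""
    run_start = None
    for i, s in enumerate(list(samples) + [0]):  # sentinel flushes a final run
        if abs(s) >= threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                yield (run_start, i - run_start)
            run_start = None

def is_clipping(samples, threshold=32767, min_run=3):
    """True if any sufficiently long full-scale run exists."""
    return next(clipped_runs(samples, threshold, min_run), None) is not None

print(is_clipping([0, 32767, 32767, 32767, 5]))  # -> True
```

To approximate the -0.3 dB advice instead, lower the threshold to roughly 32767 * 10 ** (-0.3 / 20), i.e. treat anything above about 96.6% of full scale as suspect.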
53,019 | 53,067 | What kind of technologies are available for sending text messages? | I'm looking into sending regular automated text-messages to a list of subscribed users. Having played with Windows Mobile devices, I could easily implement this using the compact.Net framework + a device hooked up to usb and send the messages through this. I would like to explore other solutions like having a server or something similar to do this. I just have no idea what is involved in such a system. | It really all depends on how many text messages you intend to send and how critical it is that the message arrives on time (and, actually arrives). SMS Aggregators For larger volume and good reliability, you will want to go with an SMS aggregator. These aggregators have web service API's (or SMPP) that you can use to send your message and find out whether your message was delivered over time. Some examples of aggregators with whom I have experience are Air2Web, mBlox, etc. The nice thing about working with an aggregator is that they can guide you through what it takes to send effective messages. For example, if you want your own, distinct, shortcode they can navigate the process with the carriers to secure that shortcode. They can also make sure that you are in compliance with any rules regarding using SMS. Carriers will flat shut you off if you don't respect the use of SMS and only use SMS within the bounds of what you agreed to when you started to use the aggregator. If you overstep your bounds, they have the aggregator relationships to prevent any service interruptions. You'll pay per message and may have a baseline service fee. All if this is determined by your volume. SMTP to SMS If you want an unreliable, low-rent solution to a low number of known addresses, you can use an SMTP to SMS solution. In this case you simply find out the mobile provider for the recipient and use their mobile provider's e-mail scheme to send the message. 
An example of this is 7705551212@cellcompany.com. In this scenario, you send the message and it is gone and you hope that it gets there. You really don't know if it is making it. Also, some providers limit how messages come in via their SMTP to SMS gateway to limit SMS spam. But, that scenario is the very easiest to use from virtually any programming language. There are a million C# examples of how to send e-mail and this way would be no different. This is the most cost-effective solution (i.e. free) until you get a large volume of messages. When you start doing too much of this, the carriers might step in when they find that you are sending a ton of messages through their SMTP to SMS gateway. Effective Texting In many cases you have to make sure that recipients have properly opted-in to your service. This is only a big deal if your texts are going to a really large population. You'll want to remember that text messages are short (keep it to less than 140 to 160 characters). When you program things you'll want to bake that in or you might accidentally send multipart messages. Don't forget that you will want to make sure that your recipients realize they might have to pay for the incoming text messages. In a world of unlimited text plans this is less and less of a concern. | What kind of technologies are available for sending text messages? I'm looking into sending regular automated text-messages to a list of subscribed users. Having played with Windows Mobile devices, I could easily implement this using the compact.Net framework + a device hooked up to usb and send the messages through this. I would like to explore other solutions like having a server or something similar to do this. I just have no idea what is involved in such a system. | TITLE:
What kind of technologies are available for sending text messages?
QUESTION:
I'm looking into sending regular automated text messages to a list of subscribed users. Having played with Windows Mobile devices, I could easily implement this using the .NET Compact Framework plus a device hooked up to USB and send the messages through this. I would like to explore other solutions, like having a server or something similar do this. I just have no idea what is involved in such a system.
ANSWER:
It really all depends on how many text messages you intend to send and how critical it is that the message arrives on time (and, actually arrives). SMS Aggregators For larger volume and good reliability, you will want to go with an SMS aggregator. These aggregators have web service API's (or SMPP) that you can use to send your message and find out whether your message was delivered over time. Some examples of aggregators with whom I have experience are Air2Web, mBlox, etc. The nice thing about working with an aggregator is that they can guide you through what it takes to send effective messages. For example, if you want your own, distinct, shortcode they can navigate the process with the carriers to secure that shortcode. They can also make sure that you are in compliance with any rules regarding using SMS. Carriers will flat shut you off if you don't respect the use of SMS and only use SMS within the bounds of what you agreed to when you started to use the aggregator. If you overstep your bounds, they have the aggregator relationships to prevent any service interruptions. You'll pay per message and may have a baseline service fee. All of this is determined by your volume. SMTP to SMS If you want an unreliable, low-rent solution to a low number of known addresses, you can use an SMTP to SMS solution. In this case you simply find out the mobile provider for the recipient and use their mobile provider's e-mail scheme to send the message. An example of this is 7705551212@cellcompany.com. In this scenario, you send the message and it is gone and you hope that it gets there. You really don't know if it is making it. Also, some providers limit how messages come in via their SMTP to SMS gateway to limit SMS spam. But, that scenario is the very easiest to use from virtually any programming language. There are a million C# examples of how to send e-mail and this way would be no different. This is the most cost-effective solution (i.e.
free) until you get a large volume of messages. When you start doing too much of this, the carriers might step in when they find that you are sending a ton of messages through their SMTP to SMS gateway. Effective Texting In many cases you have to make sure that recipients have properly opted-in to your service. This is only a big deal if your texts are going to a really large population. You'll want to remember that text messages are short (keep it to less than 140 to 160 characters). When you program things you'll want to bake that in or you might accidentally send multipart messages. Don't forget that you will want to make sure that your recipients realize they might have to pay for the incoming text messages. In a world of unlimited text plans this is less and less of a concern. | [
"c#",
"mobile",
"messaging"
] | 7 | 14 | 3,898 | 5 | 0 | 2008-09-09T22:10:43.250000 | 2008-09-09T22:40:29.793000 |
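The SMTP-to-SMS route described above is plain e-mail sent to the carrier's gateway address. A minimal Python sketch (the gateway domain is the made-up example from the answer; each carrier publishes its own, and the actual send is commented out because it needs a live SMTP relay):

```python
import smtplib
from email.mime.text import MIMEText

def sms_via_email(number, gateway_domain, body):
    """Build an e-mail whose recipient is number@carrier-gateway."""
    msg = MIMEText(body[:160])            # keep within one SMS segment
    msg["To"] = f"{number}@{gateway_domain}"
    msg["From"] = "alerts@example.com"    # placeholder sender
    msg["Subject"] = ""                   # many gateways prepend the subject
    return msg

msg = sms_via_email("7705551212", "cellcompany.com", "Server backup finished OK")
print(msg["To"])   # -> 7705551212@cellcompany.com

# with smtplib.SMTP("localhost") as smtp:   # needs a real SMTP relay
#     smtp.send_message(msg)
```

As the answer notes, delivery is best-effort: nothing in this path reports whether the carrier actually forwarded the message to the handset.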
53,025 | 53,843 | Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR? | I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between: whoever instantiated a given command object and the command object, the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls. Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships. Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way? Here's an example to illustrate my current method: I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor. Then the command object would repeat this pattern with its connection to the "data access" object. When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called. 
| I'll try one more idea: Have your Data Access Object return their own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code): public function deleteThing( id: String ): DeferredResponse { var deferredResponse: DeferredResponse = new DeferredResponse();
var asyncToken: AsyncToken = theRemoteObject.deleteThing(id);
var result: Function = function( o: Object ): void { deferredResponse.notifyResultListeners(o); }
var fault: Function = function( o: Object ): void { deferredResponse.notifyFaultListeners(o); }
asyncToken.addResponder(new ClosureResponder(result, fault));
return deferredResponse; } The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure. Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, returns that object, and then when the RPC returns, one of the closures result or fault gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response. In the command it would look something like this: public function execute( ): void { var deferredResponse: DeferredResponse = dao.deleteThing("3");
deferredResponse.addEventListener(ResultEvent.RESULT, onResult); deferredResponse.addEventListener(FaultEvent.FAULT, onFault); } or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered. But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate. | Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR? I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between: whoever instantiated a given command object and the command object, the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls. Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships. Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way? 
Here's an example to illustrate my current method: I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor. Then the command object would repeat this pattern with its connection to the "data access" object. When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called. | TITLE:
Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR?
QUESTION:
I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between: whoever instantiated a given command object and the command object, the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls. Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships. Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way? Here's an example to illustrate my current method: I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor. Then the command object would repeat this pattern with its connection to the "data access" object. When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
ANSWER:
I'll try one more idea: Have your Data Access Object return their own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code): public function deleteThing( id: String ): DeferredResponse { var deferredResponse: DeferredResponse = new DeferredResponse();
var asyncToken: AsyncToken = theRemoteObject.deleteThing(id);
var result: Function = function( o: Object ): void { deferredResponse.notifyResultListeners(o); }
var fault: Function = function( o: Object ): void { deferredResponse.notifyFaultListeners(o); }
asyncToken.addResponder(new ClosureResponder(result, fault));
return deferredResponse; } The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure. Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, returns that object, and then when the RPC returns, one of the closures result or fault gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response. In the command it would look something like this: public function execute( ): void { var deferredResponse: DeferredResponse = dao.deleteThing("3");
deferredResponse.addEventListener(ResultEvent.RESULT, onResult); deferredResponse.addEventListener(FaultEvent.FAULT, onFault); } or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered. But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate. | [
"apache-flex",
"actionscript-3",
"asynchronous",
"air"
] | 1 | 3 | 6,696 | 3 | 0 | 2008-09-09T22:13:54.100000 | 2008-09-10T12:06:00.653000 |
53,027 | 53,037 | Large Python Includes | I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split them into several files but then I have to use several include statements. Is there some way to group together several files and include them all at once? | Put files in one folder. Add __init__.py file to the folder. Do necessary imports in __init__.py Replace multiple imports by one: import folder_name See Python Package Management | Large Python Includes I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split them into several files but then I have to use several include statements. Is there some way to group together several files and include them all at once? | TITLE:
Large Python Includes
QUESTION:
I have a file that I want to include in Python but the included file is fairly long and it'd be much neater to be able to split them into several files but then I have to use several include statements. Is there some way to group together several files and include them all at once?
ANSWER:
Put files in one folder. Add __init__.py file to the folder. Do necessary imports in __init__.py Replace multiple imports by one: import folder_name See Python Package Management | [
"python"
] | 3 | 8 | 588 | 2 | 0 | 2008-09-09T22:15:09.450000 | 2008-09-09T22:21:04.490000 |
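The `__init__.py` technique in the answer above can be demonstrated end to end. The package and module names below (`helpers`, `math_bits`, `text_bits`) are made up for the demo:

```python
import os
import sys
import tempfile

# Build a tiny package on disk to show the __init__.py re-export trick.
pkg_root = tempfile.mkdtemp()
folder = os.path.join(pkg_root, "helpers")
os.makedirs(folder)

with open(os.path.join(folder, "math_bits.py"), "w") as f:
    f.write("def double(x):\n    return 2 * x\n")
with open(os.path.join(folder, "text_bits.py"), "w") as f:
    f.write("GREETING = 'hello'\n")

# __init__.py pulls the submodules' names into the package namespace,
# so callers need only a single import for the whole folder.
with open(os.path.join(folder, "__init__.py"), "w") as f:
    f.write("from .math_bits import double\n"
            "from .text_bits import GREETING\n")

sys.path.insert(0, pkg_root)
import helpers

print(helpers.double(21))   # 42
print(helpers.GREETING)     # hello
```

Splitting a long module across several files then costs the caller nothing: `import helpers` replaces the pile of individual imports.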
53,041 | 53,050 | GUIDs in a SLN file | Visual Studio Solution files contain two GUID's per project entry. I figure one of them is from the AssemblyInfo.cs Does anyone know for sure where these come from, and what they are used for? | Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build). So, for a typical line in the sln file (open the.sln in notepad or editor-of-choice if you wish to see this): Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" The second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project: GlobalSection(ProjectConfigurationPlatforms) = postSolution {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPU EndGlobalSection The first GUID is actually a GUID that is the unique GUID for the solution itself (I believe). If you have a solution with more than one project, you'll actually see something like the following: Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" EndProject Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Composite", "..\CompositeWPF\Source\CAL\Composite\Composite.csproj", "{77138947-1D13-4E22-AEE0-5D0DD046CA34}" EndProject | GUIDs in a SLN file Visual Studio Solution files contain two GUID's per project entry. I figure one of them is from the AssemblyInfo.cs Does anyone know for sure where these come from, and what they are used for? | TITLE:
GUIDs in a SLN file
QUESTION:
Visual Studio Solution files contain two GUID's per project entry. I figure one of them is from the AssemblyInfo.cs Does anyone know for sure where these come from, and what they are used for?
ANSWER:
Neither GUID is the same GUID as from AssemblyInfo.cs (that is the GUID for the assembly itself, not tied to Visual Studio but the end product of the build). So, for a typical line in the sln file (open the .sln in notepad or editor-of-choice if you wish to see this): Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" The second GUID is a unique GUID for the project itself. The solution file uses this to map other settings to that project: GlobalSection(ProjectConfigurationPlatforms) = postSolution {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Debug|Any CPU.Build.0 = Debug|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.ActiveCfg = Release|Any CPU {55A1FD06-FB00-4F8A-9153-C432357F5CAC}.Release|Any CPU.Build.0 = Release|Any CPU EndGlobalSection The first GUID is actually a GUID that is the unique GUID for the solution itself (I believe). If you have a solution with more than one project, you'll actually see something like the following: Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ConsoleSandbox", "ConsoleSandbox\ConsoleSandbox.csproj", "{55A1FD06-FB00-4F8A-9153-C432357F5CAC}" EndProject Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Composite", "..\CompositeWPF\Source\CAL\Composite\Composite.csproj", "{77138947-1D13-4E22-AEE0-5D0DD046CA34}" EndProject
".net",
"visual-studio",
"solution"
] | 18 | 17 | 5,235 | 2 | 0 | 2008-09-09T22:23:36.573000 | 2008-09-09T22:29:22.287000 |
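To make the two GUIDs easy to see, here is a small Python snippet that pulls them out of the Project line quoted in the answer above; the parsing approach is illustrative only, not a full .sln parser:

```python
import re

# The Project line quoted in the answer (backslash doubled for Python).
sln_line = ('Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = '
            '"ConsoleSandbox", "ConsoleSandbox\\ConsoleSandbox.csproj", '
            '"{55A1FD06-FB00-4F8A-9153-C432357F5CAC}"')

# A GUID is 32 hex digits plus 4 dashes: 36 characters between braces.
guids = re.findall(r"\{([0-9A-Fa-f-]{36})\}", sln_line)

print(guids[0])  # repeated across every project entry of this kind
print(guids[1])  # unique per project; reused by GlobalSection entries
```

Running this against the multi-project example in the answer would show the first GUID repeating on every `Project(...)` line while the second differs per project, which is the distinction the answer is drawing.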
53,046 | 53,052 | How do you get double-underscores to display in markdown? | In python, there are some special variables and filenames that are surrounded by double-underscores. For example, there is the __file__ variable. I am only able to get them to show up correctly inside of a code block. What do I need to enter to get double underscores in regular text without having them interpreted as an emphasis? | __file__ Put a backslash before the first underscore. Like this: \__file__ | How do you get double-underscores to display in markdown? In python, there are some special variables and filenames that are surrounded by double-underscores. For example, there is the __file__ variable. I am only able to get them to show up correctly inside of a code block. What do I need to enter to get double underscores in regular text without having them interpreted as an emphasis? | TITLE:
How do you get double-underscores to display in markdown?
QUESTION:
In python, there are some special variables and filenames that are surrounded by double-underscores. For example, there is the __file__ variable. I am only able to get them to show up correctly inside of a code block. What do I need to enter to get double underscores in regular text without having them interpreted as an emphasis?
ANSWER:
__file__ Put a backslash before the first underscore. Like this: \__file__ | [
"markdown"
] | 35 | 37 | 17,946 | 8 | 0 | 2008-09-09T22:27:11.683000 | 2008-09-09T22:31:19.347000 |
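The backslash-escape trick can be sanity-checked even without a Markdown engine. The converter below is a deliberately naive stand-in for real emphasis handling, written only to illustrate why escaping the underscores keeps `__file__` literal:

```python
import re

def naive_markdown_bold(text):
    # Toy rule: unescaped __x__ becomes <strong>x</strong>;
    # backslash-escaped underscores are protected and stay literal.
    # This is NOT a real Markdown implementation.
    text = text.replace(r"\_", "\x00")          # hide escaped underscores
    text = re.sub(r"__(.+?)__",
                  lambda m: "<strong>%s</strong>" % m.group(1),
                  text)
    return text.replace("\x00", "_")            # restore them

print(naive_markdown_bold("use __file__ here"))
# use <strong>file</strong> here
print(naive_markdown_bold(r"use \_\_file\_\_ here"))
# use __file__ here
```

The second call shows the effect the answer relies on: once each underscore is escaped, the emphasis rule never fires and the double underscores survive into the output.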
53,057 | 53,061 | How to add a dll to gac in vista | When I drag & drop a dll to the assembly folder on vista, I get the error "Access is denied: mydll.dll". How can I bypass the error message and add my dll to gac? | My guess would be that you have to do it as an administrator...try either disabling UAC, or using gacutil.exe to add your assembly. | How to add a dll to gac in vista When I drag & drop a dll to the assembly folder on vista, I get the error "Access is denied: mydll.dll". How can I bypass the error message and add my dll to gac? | TITLE:
How to add a dll to gac in vista
QUESTION:
When I drag & drop a dll to the assembly folder on vista, I get the error "Access is denied: mydll.dll". How can I bypass the error message and add my dll to gac?
ANSWER:
My guess would be that you have to do it as an administrator...try either disabling UAC, or using gacutil.exe to add your assembly. | [
"dll",
"windows-vista",
"gac"
] | 6 | 5 | 15,844 | 6 | 0 | 2008-09-09T22:36:35.030000 | 2008-09-09T22:38:48.363000 |
53,064 | 53,131 | Embed asp page without iframe | I want to embed an.asp page on an html page. I cannot use an iframe. I tried: alt: url " works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed.asp pages for IE or does it only work in ff? | I've solved it in the past using Javascript and XMLHttp. It can get a bit hacky depending on the circumstances. In particular, you have to watch out for the inner page failing and how it affects/downgrades the outer one (hopefully you can keep it downgrading elegantly). Search for XMLHttp (or check this great tutorial ) and request the "child" page from the outer one, rendering the HTML you need. Preferably you can get just the specific data you need and process it in Javascript. | Embed asp page without iframe I want to embed an.asp page on an html page. I cannot use an iframe. I tried: alt: url " works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed.asp pages for IE or does it only work in ff? | TITLE:
Embed asp page without iframe
QUESTION:
I want to embed an .asp page on an html page. I cannot use an iframe. I tried: alt: url " works great in ff but not ie7. Any ideas? Is it possible to use the object tag to embed .asp pages for IE or does it only work in ff?
ANSWER:
I've solved it in the past using Javascript and XMLHttp. It can get a bit hacky depending on the circumstances. In particular, you have to watch out for the inner page failing and how it affects/downgrades the outer one (hopefully you can keep it downgrading elegantly). Search for XMLHttp (or check this great tutorial ) and request the "child" page from the outer one, rendering the HTML you need. Preferably you can get just the specific data you need and process it in Javascript. | [
"asp.net",
"iframe"
] | 2 | 1 | 4,453 | 3 | 0 | 2008-09-09T22:39:58.890000 | 2008-09-09T23:24:32.247000 |
53,065 | 53,087 | Enterprise Library Application Blocks OR Home Grown Framework? | We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown? | Like all good answers to architecture and programming questions, the answer is "it depends". It depends on how unique your data access and object design needs are. It may also depend on how you plan on supporting your application in the long term. Finally, it greatly depends on the skill level of your developers. There isn't a one-size-fits-all answer to this question, but generally, if your main focus is on cranking out software that provides some business value, pick out an existing framework and run with it. Don't spend your cycles building something that won't immediately drive business profits (i.e. increases revenues and/or decreases costs). For example, one of my organization's projects is core to the operations of the company, needs to be developed and deployed as soon as possible, and will have a long life. For these reasons, we picked CSLA with some help from Enterprise Library. We could have picked other frameworks, but the important thing is that we picked a framework that seemed like it would fit well with our application and our developer skillset and we ran with it. It gave us a good headstart and a community from which we can get support. We immediately started with functionality that provided business value and were not banging our heads against the wall trying to build a framework. We are also in the position where we can hire people in the future who have most likely had exposure to our framework, giving them a really good headstart. This should reduce long-term support costs. Are there things we don't use and overhead that we may not need? Perhaps. 
But, I'll trade that all day long for delivering business value in code early and often. | Enterprise Library Application Blocks OR Home Grown Framework? We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown? | TITLE:
Enterprise Library Application Blocks OR Home Grown Framework?
QUESTION:
We are currently looking to adopt some type of "standard" developer framework and have looked into using the Enterprise Library. Would you recommend using these blocks as the foundation for software development, or should we do something home grown?
ANSWER:
Like all good answers to architecture and programming questions, the answer is "it depends". It depends on how unique your data access and object design needs are. It may also depend on how you plan on supporting your application in the long term. Finally, it greatly depends on the skill level of your developers. There isn't a one-size-fits-all answer to this question, but generally, if your main focus is on cranking out software that provides some business value, pick out an existing framework and run with it. Don't spend your cycles building something that won't immediately drive business profits (i.e. increases revenues and/or decreases costs). For example, one of my organization's projects is core to the operations of the company, needs to be developed and deployed as soon as possible, and will have a long life. For these reasons, we picked CSLA with some help from Enterprise Library. We could have picked other frameworks, but the important thing is that we picked a framework that seemed like it would fit well with our application and our developer skillset and we ran with it. It gave us a good headstart and a community from which we can get support. We immediately started with functionality that provided business value and were not banging our heads against the wall trying to build a framework. We are also in the position where we can hire people in the future who have most likely had exposure to our framework, giving them a really good headstart. This should reduce long-term support costs. Are there things we don't use and overhead that we may not need? Perhaps. But, I'll trade that all day long for delivering business value in code early and often. | [
".net",
"frameworks",
"enterprise-library"
] | 2 | 3 | 898 | 2 | 0 | 2008-09-09T22:40:00.170000 | 2008-09-09T22:54:22.087000 |
53,070 | 53,183 | Examples for coding against the PayPal API in .NET 2.0+? | Can anyone point me to a good introduction to coding against the paypal API? | Found this article by Rick Strahl recently http://www.west-wind.com/presentations/PayPalIntegration/PayPalIntegration.asp. Have not implemeted anything from it yet, Rick has quite a few articles around the web on ecommerce in aspnet, and he seems to show up everytime I'm searching for it. | Examples for coding against the PayPal API in .NET 2.0+? Can anyone point me to a good introduction to coding against the paypal API? | TITLE:
Examples for coding against the PayPal API in .NET 2.0+?
QUESTION:
Can anyone point me to a good introduction to coding against the paypal API?
ANSWER:
Found this article by Rick Strahl recently http://www.west-wind.com/presentations/PayPalIntegration/PayPalIntegration.asp. Have not implemented anything from it yet; Rick has quite a few articles around the web on ecommerce in aspnet, and he seems to show up every time I'm searching for it.
".net",
"asp.net",
"paypal"
] | 8 | 5 | 6,138 | 3 | 0 | 2008-09-09T22:44:05.573000 | 2008-09-09T23:53:58.387000 |
53,081 | 53,103 | HTML meta keyword/description element, useful or not? | Does filling out HTML meta description/keyword tags matter for SEO? | This article has some info on it. A quick summary for keywords is: Google and Microsoft: No Yahoo and Ask: Yes Edit: As noted below, the meta description is used by Google to describe your site to potential visitors (although may not be used for ranking). | HTML meta keyword/description element, useful or not? Does filling out HTML meta description/keyword tags matter for SEO? | TITLE:
HTML meta keyword/description element, useful or not?
QUESTION:
Does filling out HTML meta description/keyword tags matter for SEO?
ANSWER:
This article has some info on it. A quick summary for keywords is: Google and Microsoft: No Yahoo and Ask: Yes Edit: As noted below, the meta description is used by Google to describe your site to potential visitors (although may not be used for ranking). | [
"html",
"seo"
] | 21 | 9 | 2,922 | 8 | 0 | 2008-09-09T22:51:25.947000 | 2008-09-09T23:05:13.010000 |
53,086 | 53,088 | Can I depend on the values of GetHashCode() to be consistent? | Is the return value of GetHashCode() guaranteed to be consistent assuming the same string value is being used? (C#/ASP.NET) I uploaded my code to a server today and to my surprise I had to reindex some data because my server (win2008 64-bit) was returning different values compared to my desktop computer. | If I'm not mistaken, GetHashCode is consistent given the same value, but it is NOT guaranteed to be consistent across different versions of the framework. From the MSDN docs on String.GetHashCode(): The behavior of GetHashCode is dependent on its implementation, which might change from one version of the common language runtime to another. A reason why this might happen is to improve the performance of GetHashCode. | Can I depend on the values of GetHashCode() to be consistent? Is the return value of GetHashCode() guaranteed to be consistent assuming the same string value is being used? (C#/ASP.NET) I uploaded my code to a server today and to my surprise I had to reindex some data because my server (win2008 64-bit) was returning different values compared to my desktop computer. | TITLE:
Can I depend on the values of GetHashCode() to be consistent?
QUESTION:
Is the return value of GetHashCode() guaranteed to be consistent assuming the same string value is being used? (C#/ASP.NET) I uploaded my code to a server today and to my surprise I had to reindex some data because my server (win2008 64-bit) was returning different values compared to my desktop computer.
ANSWER:
If I'm not mistaken, GetHashCode is consistent given the same value, but it is NOT guaranteed to be consistent across different versions of the framework. From the MSDN docs on String.GetHashCode(): The behavior of GetHashCode is dependent on its implementation, which might change from one version of the common language runtime to another. A reason why this might happen is to improve the performance of GetHashCode. | [
"c#",
"hash"
] | 21 | 35 | 7,749 | 9 | 0 | 2008-09-09T22:52:32.853000 | 2008-09-09T22:54:51.303000 |
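The same trap exists outside .NET: Python's built-in `hash()` for strings is salted per process (`PYTHONHASHSEED`), so it is equally unsafe for persisted indexes. A sketch of the usual fix, deriving the value from a stable algorithm instead (the `stable_hash` helper is invented for this example):

```python
import hashlib

def stable_hash(s):
    # SHA-256 is fully specified, so this value is identical on every
    # machine, process, and runtime version -- unlike hash() / GetHashCode.
    return int.from_bytes(hashlib.sha256(s.encode("utf-8")).digest()[:8],
                          "big")

print(stable_hash("hello") == stable_hash("hello"))  # True, in every process
```

The moral matches the answer: use the runtime's hash code for in-memory structures only, and compute your own well-defined hash for anything stored in a database or shared between machines.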
53,102 | 53,118 | Why does Path.Combine not properly concatenate filenames that start with Path.DirectorySeparatorChar? | From the Immediate Window in Visual Studio: > Path.Combine(@"C:\x", "y") "C:\\x\\y" > Path.Combine(@"C:\x", @"\y") "\\y" It seems that they should both be the same. The old FileSystemObject.BuildPath() didn't work this way... | This is kind of a philosophical question (which perhaps only Microsoft can truly answer), since it's doing exactly what the documentation says. System.IO.Path.Combine "If path2 contains an absolute path, this method returns path2." Here's the actual Combine method from the.NET source. You can see that it calls CombineNoChecks, which then calls IsPathRooted on path2 and returns that path if so: public static String Combine(String path1, String path2) { if (path1==null || path2==null) throw new ArgumentNullException((path1==null)? "path1": "path2"); Contract.EndContractBlock(); CheckInvalidPathChars(path1); CheckInvalidPathChars(path2);
return CombineNoChecks(path1, path2); }
internal static string CombineNoChecks(string path1, string path2) { if (path2.Length == 0) return path1;
if (path1.Length == 0) return path2;
if (IsPathRooted(path2)) return path2;
char ch = path1[path1.Length - 1]; if (ch != DirectorySeparatorChar && ch != AltDirectorySeparatorChar && ch != VolumeSeparatorChar) return path1 + DirectorySeparatorCharAsString + path2; return path1 + path2; } I don't know what the rationale is. I guess the solution is to strip off (or Trim) DirectorySeparatorChar from the beginning of the second path; maybe write your own Combine method that does that and then calls Path.Combine().
Why does Path.Combine not properly concatenate filenames that start with Path.DirectorySeparatorChar?
QUESTION:
From the Immediate Window in Visual Studio: > Path.Combine(@"C:\x", "y") "C:\\x\\y" > Path.Combine(@"C:\x", @"\y") "\\y" It seems that they should both be the same. The old FileSystemObject.BuildPath() didn't work this way...
ANSWER:
This is kind of a philosophical question (which perhaps only Microsoft can truly answer), since it's doing exactly what the documentation says. System.IO.Path.Combine "If path2 contains an absolute path, this method returns path2." Here's the actual Combine method from the .NET source. You can see that it calls CombineNoChecks, which then calls IsPathRooted on path2 and returns that path if so: public static String Combine(String path1, String path2) { if (path1 == null || path2 == null) throw new ArgumentNullException((path1 == null) ? "path1" : "path2"); Contract.EndContractBlock(); CheckInvalidPathChars(path1); CheckInvalidPathChars(path2);
return CombineNoChecks(path1, path2); }
internal static string CombineNoChecks(string path1, string path2) { if (path2.Length == 0) return path1;
if (path1.Length == 0) return path2;
if (IsPathRooted(path2)) return path2;
char ch = path1[path1.Length - 1]; if (ch != DirectorySeparatorChar && ch != AltDirectorySeparatorChar && ch != VolumeSeparatorChar) return path1 + DirectorySeparatorCharAsString + path2; return path1 + path2; } I don't know what the rationale is. I guess the solution is to strip off (or Trim) DirectorySeparatorChar from the beginning of the second path; maybe write your own Combine method that does that and then calls Path.Combine().
"c#",
".net",
"file"
] | 228 | 242 | 132,877 | 16 | 0 | 2008-09-09T23:05:06.553000 | 2008-09-09T23:16:02.253000 |
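This "rooted second argument wins" rule is not unique to .NET: Python's `os.path.join` behaves the same way, which makes for a quick way to experiment with it. The snippet uses `posixpath` (the POSIX flavor of `os.path`) so the demo runs identically on any OS; the `combine` helper mirrors the workaround suggested in the answer:

```python
import posixpath  # POSIX variant of os.path, importable everywhere

print(posixpath.join("/x", "y"))   # /x/y
print(posixpath.join("/x", "/y"))  # /y -- an absolute second argument
                                   #       discards everything before it,
                                   #       just like Path.Combine

def combine(p1, p2):
    # Workaround from the answer: trim leading separators off the second
    # path before joining, so it is always treated as relative.
    return posixpath.join(p1, p2.lstrip("/"))

print(combine("/x", "/y"))         # /x/y
```

If you really do want `C:\x` + `\y` to yield `C:\x\y`, stripping the leading separator as `combine` does (and as the answer suggests for .NET) is the simplest fix.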
53,108 | 54,362 | Is it possible to make a recursive SQL query? | I have a table similar to this: CREATE TABLE example ( id integer primary key, name char(200), parentid integer, value integer); I can use the parentid field to arrange data into a tree structure. Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree? UPDATE: I'm using posgreSQL so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question. | There are a few ways to do what you need in PostgreSQL. If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. http://www.postgresql.org/docs/8.3/interactive/tablefunc.html Also check out the ltree contrib, which you could adapt your table to use: http://www.postgresql.org/docs/8.3/interactive/ltree.html Or you can traverse the tree yourself with a PL/PGSQL function. Something like this: create or replace function example_subtree (integer) returns setof example as 'declare results record; child record; begin select into results * from example where parent_id = $1; if found then return next results; for child in select id from example where parent_id = $1 loop for temp in select * from example_subtree(child.id) loop return next temp; end loop; end loop; end if; return null; end;' language 'plpgsql';
select sum(value) as value_sum from example_subtree(1234); | Is it possible to make a recursive SQL query? I have a table similar to this: CREATE TABLE example ( id integer primary key, name char(200), parentid integer, value integer); I can use the parentid field to arrange data into a tree structure. Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree? UPDATE: I'm using posgreSQL so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question. | TITLE:
Is it possible to make a recursive SQL query?
QUESTION:
I have a table similar to this: CREATE TABLE example ( id integer primary key, name char(200), parentid integer, value integer); I can use the parentid field to arrange data into a tree structure. Now here's the bit I can't work out. Given a parentid, is it possible to write an SQL statement to add up all the value fields under that parentid and recurse down the branch of the tree? UPDATE: I'm using PostgreSQL so the fancy MS-SQL features are not available to me. In any case, I'd like this to be treated as a generic SQL question.
ANSWER:
There are a few ways to do what you need in PostgreSQL. If you can install modules, look at the tablefunc contrib. It has a connectby() function that handles traversing trees. http://www.postgresql.org/docs/8.3/interactive/tablefunc.html Also check out the ltree contrib, which you could adapt your table to use: http://www.postgresql.org/docs/8.3/interactive/ltree.html Or you can traverse the tree yourself with a PL/PGSQL function. Something like this: create or replace function example_subtree (integer) returns setof example as 'declare results record; child record; temp record; begin select into results * from example where parent_id = $1; if found then return next results; for child in select id from example where parent_id = $1 loop for temp in select * from example_subtree(child.id) loop return next temp; end loop; end loop; end if; return null; end;' language 'plpgsql';
select sum(value) as value_sum from example_subtree(1234); | [
"sql",
"postgresql",
"recursive-query"
] | 63 | 12 | 57,348 | 11 | 0 | 2008-09-09T23:08:24.983000 | 2008-09-10T15:16:56.747000 |
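Since this answer was written, standard-SQL recursive CTEs (`WITH RECURSIVE`) have landed in PostgreSQL (8.4+). The same query shape also runs in SQLite, so the subtree-sum can be demoed directly from Python; the table contents below are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE example (id INTEGER PRIMARY KEY, name TEXT,
                          parentid INTEGER, value INTEGER);
    INSERT INTO example VALUES (1, 'root', NULL, 10),
                               (2, 'a',    1,    20),
                               (3, 'b',    1,    30),
                               (4, 'a.1',  2,    40);
""")

# Walk the whole subtree under a given parentid and sum the values.
total = conn.execute("""
    WITH RECURSIVE subtree(id, value) AS (
        SELECT id, value FROM example WHERE parentid = ?
        UNION ALL
        SELECT e.id, e.value
        FROM example e JOIN subtree s ON e.parentid = s.id
    )
    SELECT SUM(value) FROM subtree
""", (1,)).fetchone()[0]

print(total)  # 90  (20 + 30 + 40 live under parent id 1)
```

On a modern PostgreSQL the same `WITH RECURSIVE` statement replaces the hand-rolled PL/pgSQL recursion entirely; the contrib-module approaches in the answer remain relevant only for the 8.3-era setup the question describes.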
53,112 | 53,126 | Good Ways to Use Source Control and an IDE for Plugin Code? | What are good ways of dealing with the issues surrounding plugin code that interacts with outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. I could see how you could simply checkout a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory? | Short answer - I do have my development and production servers check out the appropriate directories directly from SVN. For your example: Develop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test. Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver. | Good Ways to Use Source Control and an IDE for Plugin Code? What are good ways of dealing with the issues surrounding plugin code that interacts with outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. 
I could see how you could simply check out a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
Good Ways to Use Source Control and an IDE for Plugin Code?
QUESTION:
What are good ways of dealing with the issues surrounding plugin code that interacts with an outside system? To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server. I could see how you could simply check out a copy of your code directly under the web directory on a development machine, but how would you also then integrate this with the IDE? I am making the assumption here that all the code for the plugin is located under a single directory. Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
ANSWER:
Short answer - I do have my development and production servers check out the appropriate directories directly from SVN. For your example: Develop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test. Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver. | [
"plugins",
"ide"
] | 0 | 1 | 132 | 3 | 0 | 2008-09-09T23:09:10.367000 | 2008-09-09T23:20:47.827000 |
53,128 | 4,121,987 | Java ConnectionPool connection not closing, stuck in 'sleep' | I have a webapp that uses JNDI lookups to get a connection to the database. The connection works fine and returns the query no problems. The issue is that the connection does not close properly and is stuck in the 'sleep' mode (according to mysql administrator). This means that they become unusable and then I run out of connections. Can someone give me a few pointers as to what I can do to make the connection return to the pool successfully. public class DatabaseBean {
private static final Logger logger = Logger.getLogger(DatabaseBean.class);
private Connection conn; private PreparedStatement prepStmt;
/** * Zero argument constructor * Setup generic database connection in here to avoid redundancy * The connection details are in /META-INF/context.xml */ public DatabaseBean() { try { InitialContext initContext = new InitialContext(); DataSource ds = (DataSource) initContext.lookup("java:/comp/env/jdbc/mysite"); conn = ds.getConnection(); } catch (SQLException SQLEx) { logger.fatal("There was a problem with the database connection."); logger.fatal(SQLEx); logger.fatal(SQLEx.getCause()); } catch (NamingException nameEx) { logger.fatal("There was a naming exception"); logger.fatal(nameEx); logger.fatal(nameEx.getCause()); } }
/** * Execute a query. Do not use for statements (update delete insert etc). * * @return A ResultSet of the execute query. A set of size zero if no results were returned. It is never null. * @see #executeUpdate() for running update, insert delete etc. */
public ResultSet executeQuery() { ResultSet result = null; try { result = prepStmt.executeQuery(); logger.debug(prepStmt.toString()); } catch (SQLException SQLEx) { logger.fatal("There was an error running a query"); logger.fatal(SQLEx); } return result; } SNIP public void close() { try { prepStmt.close(); prepStmt = null;
conn.close(); conn = null; } catch (SQLException SQLEx) { logger.warn("There was an error closing the database connection."); } } } This is inside a javabean that uses the database connection. public LinkedList getImportantNotices() {
DatabaseBean noticesDBBean = new DatabaseBean(); LinkedList listOfNotices = new LinkedList ();
try { PreparedStatement preStmt = noticesDBBean.getConn().prepareStatement("SELECT pseudonym, message, date_to, date_from " + "FROM importantnotices, users " + "WHERE importantnotices.username = users.username " + "AND NOW() >= date_from AND NOW() <= date_to;");
noticesDBBean.setPrepStmt(preStmt); ResultSet result = noticesDBBean.executeQuery();
while (result.next()) { ImportantNoticeBean noticeBean = new ImportantNoticeBean();
noticeBean.setAuthor(result.getString("pseudonym")); noticeBean.setMessage(result.getString("message")); noticeBean.setDateTo(result.getDate("date_to")); noticeBean.setDateFrom(result.getDate("date_from"));
listOfNotices.add(noticeBean); }
result.close();
} catch (SQLException SQLEx) { logger.error("There was an error in ImportantNoticesBean.getImportantNotices()"); logger.error(SQLEx); } finally { noticesDBBean.close(); } return listOfNotices; } | The issue is that the connection does not close properly and is stuck in the 'sleep' mode. This was actually only half right. The problem I ran into was actually that each app was defining a new connection to the database server. So each time I closed all the connections App A would make a bunch of new connections as per its WEB.xml config file and run happily. App B would do the same. The problem is that they are independent pools which try to grab up to the server defined limit. It is a kind of race condition I guess. So when App A has finished with the connections it sits waiting to use them again until the timeout has passed while App B who needs the connection now is denied the resources even though App A has finished with them and they should be back in the pool. Once the timeout has passed, the connection is freed up and B (or C etc) can get at it again. e.g. if the limit is 10 (mySQL profile limit) and each app has been configured to use a max of 10 then there will be 20 attempts at connections. Obviously this is a bad situation. The solution is to RTFM and put the connection details in the right place. This does make shared posting a pain but there are ways around it (such as linking to other xml files from the context). Just to be explicit: I put the connection details in the WEB.xml for each app and they had a fight about it. | Java ConnectionPool connection not closing, stuck in 'sleep' I have a webapp that uses JNDI lookups to get a connection to the database. The connection works fine and returns the query no problems. The issue is that the connection does not close properly and is stuck in the 'sleep' mode (according to mysql administrator). This means that they become unusable and then I run out of connections.
Can someone give me a few pointers as to what I can do to make the connection return to the pool successfully? public class DatabaseBean {
private static final Logger logger = Logger.getLogger(DatabaseBean.class);
private Connection conn; private PreparedStatement prepStmt;
/** * Zero argument constructor * Setup generic database connection in here to avoid redundancy * The connection details are in /META-INF/context.xml */ public DatabaseBean() { try { InitialContext initContext = new InitialContext(); DataSource ds = (DataSource) initContext.lookup("java:/comp/env/jdbc/mysite"); conn = ds.getConnection(); } catch (SQLException SQLEx) { logger.fatal("There was a problem with the database connection."); logger.fatal(SQLEx); logger.fatal(SQLEx.getCause()); } catch (NamingException nameEx) { logger.fatal("There was a naming exception"); logger.fatal(nameEx); logger.fatal(nameEx.getCause()); } }
/** * Execute a query. Do not use for statements (update, delete, insert, etc.). * * @return A ResultSet of the executed query. A set of size zero if no results were returned. It is never null. * @see #executeUpdate() for running update, insert, delete, etc. */
public ResultSet executeQuery() { ResultSet result = null; try { result = prepStmt.executeQuery(); logger.debug(prepStmt.toString()); } catch (SQLException SQLEx) { logger.fatal("There was an error running a query"); logger.fatal(SQLEx); } return result; } SNIP public void close() { try { prepStmt.close(); prepStmt = null;
conn.close(); conn = null; } catch (SQLException SQLEx) { logger.warn("There was an error closing the database connection."); } } } This is inside a javabean that uses the database connection. public LinkedList getImportantNotices() {
DatabaseBean noticesDBBean = new DatabaseBean(); LinkedList<ImportantNoticeBean> listOfNotices = new LinkedList<ImportantNoticeBean>();
try { PreparedStatement preStmt = noticesDBBean.getConn().prepareStatement("SELECT pseudonym, message, date_to, date_from " + "FROM importantnotices, users " + "WHERE importantnotices.username = users.username " + "AND NOW() >= date_from AND NOW() <= date_to;");
noticesDBBean.setPrepStmt(preStmt); ResultSet result = noticesDBBean.executeQuery();
while (result.next()) { ImportantNoticeBean noticeBean = new ImportantNoticeBean();
noticeBean.setAuthor(result.getString("pseudonym")); noticeBean.setMessage(result.getString("message")); noticeBean.setDateTo(result.getDate("date_to")); noticeBean.setDateFrom(result.getDate("date_from"));
listOfNotices.add(noticeBean); }
result.close();
} catch (SQLException SQLEx) { logger.error("There was an error in ImportantNoticesBean.getImportantNotices()"); logger.error(SQLEx); } finally { noticesDBBean.close(); } return listOfNotices; } | TITLE:
Java ConnectionPool connection not closing, stuck in 'sleep'
QUESTION:
I have a webapp that uses JNDI lookups to get a connection to the database. The connection works fine and returns the query with no problems. The issue is that the connection does not close properly and is stuck in the 'sleep' mode (according to mysql administrator). This means that they become unusable and then I run out of connections. Can someone give me a few pointers as to what I can do to make the connection return to the pool successfully? public class DatabaseBean {
private static final Logger logger = Logger.getLogger(DatabaseBean.class);
private Connection conn; private PreparedStatement prepStmt;
/** * Zero argument constructor * Setup generic database connection in here to avoid redundancy * The connection details are in /META-INF/context.xml */ public DatabaseBean() { try { InitialContext initContext = new InitialContext(); DataSource ds = (DataSource) initContext.lookup("java:/comp/env/jdbc/mysite"); conn = ds.getConnection(); } catch (SQLException SQLEx) { logger.fatal("There was a problem with the database connection."); logger.fatal(SQLEx); logger.fatal(SQLEx.getCause()); } catch (NamingException nameEx) { logger.fatal("There was a naming exception"); logger.fatal(nameEx); logger.fatal(nameEx.getCause()); } }
/** * Execute a query. Do not use for statements (update, delete, insert, etc.). * * @return A ResultSet of the executed query. A set of size zero if no results were returned. It is never null. * @see #executeUpdate() for running update, insert, delete, etc. */
public ResultSet executeQuery() { ResultSet result = null; try { result = prepStmt.executeQuery(); logger.debug(prepStmt.toString()); } catch (SQLException SQLEx) { logger.fatal("There was an error running a query"); logger.fatal(SQLEx); } return result; } SNIP public void close() { try { prepStmt.close(); prepStmt = null;
conn.close(); conn = null; } catch (SQLException SQLEx) { logger.warn("There was an error closing the database connection."); } } } This is inside a javabean that uses the database connection. public LinkedList getImportantNotices() {
DatabaseBean noticesDBBean = new DatabaseBean(); LinkedList<ImportantNoticeBean> listOfNotices = new LinkedList<ImportantNoticeBean>();
try { PreparedStatement preStmt = noticesDBBean.getConn().prepareStatement("SELECT pseudonym, message, date_to, date_from " + "FROM importantnotices, users " + "WHERE importantnotices.username = users.username " + "AND NOW() >= date_from AND NOW() <= date_to;");
noticesDBBean.setPrepStmt(preStmt); ResultSet result = noticesDBBean.executeQuery();
while (result.next()) { ImportantNoticeBean noticeBean = new ImportantNoticeBean();
noticeBean.setAuthor(result.getString("pseudonym")); noticeBean.setMessage(result.getString("message")); noticeBean.setDateTo(result.getDate("date_to")); noticeBean.setDateFrom(result.getDate("date_from"));
listOfNotices.add(noticeBean); }
result.close();
} catch (SQLException SQLEx) { logger.error("There was an error in ImportantNoticesBean.getImportantNotices()"); logger.error(SQLEx); } finally { noticesDBBean.close(); } return listOfNotices; }
ANSWER:
The issue is that the connection does not close properly and is stuck in the 'sleep' mode This was actually only half right. The problem I ran into was actually that each app was defining a new connection to the database server. So each time I closed all the connections App A would make a bunch of new connections as per its WEB.xml config file and run happily. App B would do the same. The problem is that they are independent pools which try to grab up to the server-defined limit. It is a kind of race condition I guess. So when App A has finished with the connections it sits waiting to use them again until the timeout has passed, while App B, who needs the connections now, is denied the resources even though App A has finished with them and they should be back in the pool. Once the timeout has passed, the connection is freed up and B (or C etc.) can get at it again. e.g. if the limit is 10 (mySQL profile limit) and each app has been configured to use a max of 10 then there will be 20 attempts at connections. Obviously this is a bad situation. The solution is to RTFM and put the connection details in the right place. This does make shared hosting a pain but there are ways around it (such as linking to other xml files from the context). Just to be explicit: I put the connection details in the WEB.xml for each app and they had a fight about it. | [
"java",
"database",
"tomcat"
] | 4 | 1 | 24,463 | 6 | 0 | 2008-09-09T23:22:23.483000 | 2010-11-08T07:31:07.880000 |
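The fix the answer describes — defining the pool once in the webapp's own META-INF/context.xml instead of duplicating it per app in WEB.xml — would look roughly like the Tomcat <Resource> entry below. The JNDI name matches the lookup in the question's code; the driver, URL, credentials and pool sizes are illustrative placeholders, not values from the post:

```xml
<!-- META-INF/context.xml for this one webapp; all values are examples -->
<Context>
  <Resource name="jdbc/mysite"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mysite"
            username="appuser"
            password="secret"
            maxActive="10"
            maxIdle="5"
            maxWait="10000"/>
</Context>
```

With each app's pool bounded this way, the sum of the apps' maxActive values can be kept at or below the MySQL server's connection limit, avoiding the "two pools fighting over one limit" race the answer describes.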
53,135 | 53,181 | What registry access can you get without Administrator privileges? | I know that we shouldn't being using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently access HKEY_LOCAL_MACHINE ) without Administrator privileges? | In general, a non-administrator user has this access to the registry: Read/Write to: HKEY_CURRENT_USER Read Only: HKEY_LOCAL_MACHINE HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes ) It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that. For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\ You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can set them. Other common source of trouble: your application should not write to anything in the Program files or the Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to a subfolder or another under %USERPROFILE% for the user in question. Finally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere. 
| What registry access can you get without Administrator privileges? I know that we shouldn't being using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently access HKEY_LOCAL_MACHINE ) without Administrator privileges? | TITLE:
What registry access can you get without Administrator privileges?
QUESTION:
I know that we shouldn't be using the registry to store Application Data anymore, but in updating a Legacy application (and wanting to do the fewest changes), what Registry Hives are non-administrators allowed to use? Can I access all of HKEY_CURRENT_USER (the application currently accesses HKEY_LOCAL_MACHINE) without Administrator privileges?
ANSWER:
In general, a non-administrator user has this access to the registry: Read/Write to: HKEY_CURRENT_USER Read Only: HKEY_LOCAL_MACHINE HKEY_CLASSES_ROOT (which is just a link to HKEY_LOCAL_MACHINE\Software\Classes) It is possible to change some of these permissions on a key-by-key basis, but it's extremely rare. You should not have to worry about that. For your purposes, your application should be writing settings and configuration to HKEY_CURRENT_USER. The canonical place is anywhere within HKEY_CURRENT_USER\Software\YourCompany\YourProduct\ You could potentially hold settings that are global (for all users) in HKEY_LOCAL_MACHINE. It is very rare to need to do this, and you should avoid it. The problem is that any user can "read" those, but only an administrator (or by extension, your setup/install program) can set them. Another common source of trouble: your application should not write to anything in the Program files or the Windows directories. If you need to write to files, there are several options at hand; describing all of them would be a longer discussion. All of the options end up writing to one subfolder or another under %USERPROFILE% for the user in question. Finally, your application should stay out of HKEY_CURRENT_CONFIG. This hive holds hardware configuration, services configurations and other items that 99.9999% of applications should not need to look at (for example, it holds the current plug-and-play device list). If you need anything from there, most of the information is available through supported APIs elsewhere. | [
"registry",
"privileges",
"administrator"
] | 71 | 111 | 57,859 | 2 | 0 | 2008-09-09T23:29:25.167000 | 2008-09-09T23:53:12.023000 |
53,136 | 53,287 | How to find all database references | In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing. Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2). | Wow, I wouldn't have thought it, but there seems to be.. Good ole DB2. I find the publib db2 docs view very very handy by the way: http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp I just found the "SYSCAT.TABDEP" catalog view in it, which seems to contain more or less what you asked for. I suspect for anything not covered there you'll have to trawl through the rest of the syscat tables which are vast. (Unfortunately I can't seem to link you to the exact page on SYSCAT.TABDEP itself, the search facility should lead you to it fairly easily though). Most databases these days have a set of tables which contain data about the layout of your actual schema tables, quite handy for this sort of thing. | How to find all database references In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing. Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2). | TITLE:
How to find all database references
QUESTION:
In trying to figure out this problem (which is still unsolved and I still have no clue what is going on), I am wondering if maybe an external reference to the table in question is causing the problem. For example, a trigger or view or some other such thing. Is there an easy way to find all references to a given database table? Including all views, triggers, constraints, or anything at all, preferably from the command line, and also preferably without a 3rd party tool (we are using db2).
ANSWER:
Wow, I wouldn't have thought it, but there seems to be.. Good ole DB2. I find the publib db2 docs view very very handy by the way: http://publib.boulder.ibm.com/infocenter/db2luw/v8//index.jsp I just found the "SYSCAT.TABDEP" catalog view in it, which seems to contain more or less what you asked for. I suspect for anything not covered there you'll have to trawl through the rest of the syscat tables which are vast. (Unfortunately I can't seem to link you to the exact page on SYSCAT.TABDEP itself, the search facility should lead you to it fairly easily though). Most databases these days have a set of tables which contain data about the layout of your actual schema tables, quite handy for this sort of thing. | [
"database",
"db2"
] | 1 | 3 | 2,018 | 2 | 0 | 2008-09-09T23:30:09.923000 | 2008-09-10T01:34:34.010000 |
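For reference, a dependency lookup against the SYSCAT.TABDEP view mentioned in the answer might look like the sketch below when run from the DB2 command line. The schema and table names are placeholders, and the exact catalog columns should be checked against the documentation for your DB2 version:

```sql
-- Objects (views etc.) that depend on the base table MYSCHEMA.MYTABLE
SELECT tabschema, tabname, dtype
FROM syscat.tabdep
WHERE bschema = 'MYSCHEMA'
  AND bname = 'MYTABLE'
```

Triggers and constraints have their own catalog views (e.g. SYSCAT.TRIGGERS, SYSCAT.REFERENCES), so a complete audit means querying several SYSCAT views, as the answer suggests.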
53,161 | 53,175 | Find the highest order bit in C | what I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input) 1 -> 1 2 -> 2 3 -> 2 4 -> 4 5 -> 4 6 -> 4 7 -> 4 8 -> 8 9 -> 8... 63 -> 32 | This should do the trick. int hob (int num) { if (!num) return 0;
int ret = 1;
while (num >>= 1) ret <<= 1;
return ret; } hob(1234) returns 1024 hob(1024) returns 1024 hob(1023) returns 512 | Find the highest order bit in C what I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input) 1 -> 1 2 -> 2 3 -> 2 4 -> 4 5 -> 4 6 -> 4 7 -> 4 8 -> 8 9 -> 8... 63 -> 32 | TITLE:
Find the highest order bit in C
QUESTION:
What I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input): 1 -> 1 2 -> 2 3 -> 2 4 -> 4 5 -> 4 6 -> 4 7 -> 4 8 -> 8 9 -> 8... 63 -> 32
ANSWER:
This should do the trick. int hob (int num) { if (!num) return 0;
int ret = 1;
while (num >>= 1) ret <<= 1;
return ret; } hob(1234) returns 1024 hob(1024) returns 1024 hob(1023) returns 512 | [
"c"
] | 47 | 35 | 62,888 | 13 | 0 | 2008-09-09T23:44:33.917000 | 2008-09-09T23:49:02.267000 |
53,162 | 53,180 | How can I do a line break (line continuation) in Python (split up a long line of source code)? | Given: e = 'a' + 'b' + 'c' + 'd' How do I write the above in two lines? e = 'a' + 'b' + 'c' + 'd' | What is the line? You can just have arguments on the next line without any problems: a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5, blahblah6, blahblah7) Otherwise you can do something like this: if (a == True and b == False): or with explicit line break: if a == True and \ b == False: Check the style guide for more information. Using parentheses, your example can be written over multiple lines: a = ('1' + '2' + '3' + '4' + '5') The same effect can be obtained using explicit line break: a = '1' + '2' + '3' + \ '4' + '5' Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go. | How can I do a line break (line continuation) in Python (split up a long line of source code)? Given: e = 'a' + 'b' + 'c' + 'd' How do I write the above in two lines? e = 'a' + 'b' + 'c' + 'd' | TITLE:
How can I do a line break (line continuation) in Python (split up a long line of source code)?
QUESTION:
Given: e = 'a' + 'b' + 'c' + 'd' How do I write the above in two lines? e = 'a' + 'b' + 'c' + 'd'
ANSWER:
What is the line? You can just have arguments on the next line without any problems: a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5, blahblah6, blahblah7) Otherwise you can do something like this: if (a == True and b == False): or with explicit line break: if a == True and \ b == False: Check the style guide for more information. Using parentheses, your example can be written over multiple lines: a = ('1' + '2' + '3' + '4' + '5') The same effect can be obtained using explicit line break: a = '1' + '2' + '3' + \ '4' + '5' Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go. | [
"python",
"syntax",
"line-breaks",
"long-lines"
] | 1,372 | 1,547 | 2,499,320 | 10 | 0 | 2008-09-09T23:45:30.293000 | 2008-09-09T23:52:05.643000 |
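The two continuation styles from the answer, applied to the question's exact expression (both produce the same string):

```python
# Implicit continuation: wrapping the expression in parentheses lets
# Python join the lines with no special marker (preferred by PEP 8).
e = ('a' + 'b' +
     'c' + 'd')

# Explicit continuation: a trailing backslash joins the lines.
f = 'a' + 'b' + \
    'c' + 'd'

assert e == 'abcd'
assert f == 'abcd'
```

With the backslash form, nothing — not even a space or comment — may follow the backslash on the line, which is one reason the parenthesized form is usually recommended.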
53,164 | 53,210 | Fixed vs. variable frame rates in games: what is best, and when? | After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes). Which method works best for specific situations? Please consider: Catering to different system specifications; Ease of development/maintenance; Ease of porting; Final performance. | It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines both scale up and down based on system performance. At the very least you have to compensate for too fast frame rates (unlike 80's games running in the 90's, way too fast) Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to avoid players with faster machines from shooting too many bullets for example, but this kind of throttling will need to be done for latency compensation anyhow (the animation parameterization would help hide network lag too) The timing code will need to be modified for each platform, but it's a small localized change (though some systems make extremely accurate timing difficult, Windows, Mac, Linux seem ok) Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach max on all systems (that's seems to be a show stopper for any serious game) If you are writing a networked 3D game where performance matters I'd have to say, bite the bullet and implement variable frame rates. 
If it's a 2D puzzle game you probably can get away with a fixed frame rate, maybe slightly parameterized for super slow computers and next years models. | Fixed vs. variable frame rates in games: what is best, and when? After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes). Which method works best for specific situations? Please consider: Catering to different system specifications; Ease of development/maintenance; Ease of porting; Final performance. | TITLE:
Fixed vs. variable frame rates in games: what is best, and when?
QUESTION:
After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes). Which method works best for specific situations? Please consider: Catering to different system specifications; Ease of development/maintenance; Ease of porting; Final performance.
ANSWER:
It seems that most 3D developers prefer variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance. At the very least you have to compensate for too-fast frame rates (unlike 80's games running in the 90's, way too fast). Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to stop players with faster machines from shooting too many bullets for example, but this kind of throttling will need to be done for latency compensation anyhow (the animation parameterization would help hide network lag too). The timing code will need to be modified for each platform, but it's a small localized change (though some systems make extremely accurate timing difficult; Windows, Mac and Linux seem ok). Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach max on all systems (that seems to be a show-stopper for any serious game). If you are writing a networked 3D game where performance matters I'd have to say, bite the bullet and implement variable frame rates. If it's a 2D puzzle game you probably can get away with a fixed frame rate, maybe slightly parameterized for super-slow computers and next year's models. | [
"frame-rate"
] | 8 | 3 | 15,411 | 5 | 0 | 2008-09-09T23:45:56.330000 | 2008-09-10T00:17:01.577000 |
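To make the "main loop parameterized by the timestep" point concrete, here is a minimal sketch (not from the answer) of the common fixed-timestep accumulator pattern: real frame durations vary, but the simulation always advances in fixed-size steps. The update function, the actor speed, and the millisecond units are illustrative assumptions:

```python
STEP_MS = 10      # fixed simulation timestep, in milliseconds
SPEED = 10.0      # hypothetical actor speed, units per second

position = 0.0    # state advanced only in fixed-size steps

def update(dt_ms):
    """Advance the simulation by one fixed step of dt_ms milliseconds."""
    global position
    position += SPEED * dt_ms / 1000.0

def run(frame_times_ms):
    """Consume variable real frame durations; tick the sim in fixed steps."""
    accumulator = 0
    steps = 0
    for frame in frame_times_ms:
        accumulator += frame
        # Drain whole fixed steps; any remainder carries to the next frame.
        while accumulator >= STEP_MS:
            update(STEP_MS)
            accumulator -= STEP_MS
            steps += 1
    return steps

# Three uneven frames totalling 100 ms still produce exactly 10 fixed steps.
steps = run([33, 17, 50])
```

A variable-rate loop would instead call update(frame) directly with each measured frame time; the accumulator version trades a little latency for deterministic physics, which is the consistency/performance trade-off the answer describes.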
53,178 | 53,189 | Prompt for Database Connection String | I would like to offer a database connection prompt to the user. I can build my own, but it would be nice if I can use something that somebody else has already built (maybe something built into Windows or a free library available on the Internet). Anybody know how to do this in.Net? EDIT: I found this and thought it was interesting: Showing a Connection String prompt in a WinForm application. This only works for SQL Server connections though. | You might want to try using SQL Server Management Objects. This MSDN article has a good sample for prompting and connecting to a SQL server. | Prompt for Database Connection String I would like to offer a database connection prompt to the user. I can build my own, but it would be nice if I can use something that somebody else has already built (maybe something built into Windows or a free library available on the Internet). Anybody know how to do this in.Net? EDIT: I found this and thought it was interesting: Showing a Connection String prompt in a WinForm application. This only works for SQL Server connections though. | TITLE:
Prompt for Database Connection String
QUESTION:
I would like to offer a database connection prompt to the user. I can build my own, but it would be nice if I can use something that somebody else has already built (maybe something built into Windows or a free library available on the Internet). Anybody know how to do this in.Net? EDIT: I found this and thought it was interesting: Showing a Connection String prompt in a WinForm application. This only works for SQL Server connections though.
ANSWER:
You might want to try using SQL Server Management Objects. This MSDN article has a good sample for prompting and connecting to a SQL server. | [
".net",
"sql-server",
"database",
"connection-string"
] | 8 | 15 | 2,545 | 5 | 0 | 2008-09-09T23:51:03.027000 | 2008-09-09T23:56:59.797000 |
53,198 | 55,500 | HelpInsight documentation in Delphi 2007 | I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. From various Web-surfing and experimentation I have found the following: Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment } The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.) The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works) Despite following these "rules", sometimes the Help-insight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly. Does anyone have any other pointers / tricks for getting HelpInsight to work? | I have discovered another caveat (which in my case was what was "wrong") It appears that the unit with the HelpInsight comments must be explicitly added to the project. It is not sufficient to simply have the unit in a path that is searched when compiling the project. In other words, the unit must be included in the Project's.dpr /.dproj file. (Using the Project | "Add to Project" menu option) | HelpInsight documentation in Delphi 2007 I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. 
From various Web-surfing and experimentation I have found the following: Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment } The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.) The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works) Despite following these "rules", sometimes the Help-insight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly. Does anyone have any other pointers / tricks for getting HelpInsight to work? | TITLE:
HelpInsight documentation in Delphi 2007
QUESTION:
I am using D2007 and am trying to document my source code, using the HelpInsight feature (provided since D2005). I am mainly interested in getting the HelpInsight tool-tips working. From various Web-surfing and experimentation I have found the following: Using the triple slash (///) comment style works more often than the other documented comment styles. i.e.: {*! comment *} and {! comment } The comments must precede the declaration that they are for. For most cases this will mean placing them in the interface section of the code. (The obvious exception is for types and functions that are not accessible from outside the current unit and are therefore declared in the implementation block.) The first comment cannot be for a function. (i.e. it must be for a type - or at least it appears the parser must have seen the "type" keyword before the HelpInsight feature works) Despite following these "rules", sometimes the Help-insight just doesn't find the comments I've written. One file does not produce the correct HelpInsight tool-tips, but if I include this file in a different dummy project, it works properly. Does anyone have any other pointers / tricks for getting HelpInsight to work?
ANSWER:
I have discovered another caveat (which in my case was what was "wrong"). It appears that the unit with the HelpInsight comments must be explicitly added to the project. It is not sufficient to simply have the unit in a path that is searched when compiling the project. In other words, the unit must be included in the project's .dpr/.dproj file. (Using the Project | "Add to Project" menu option) | [
"delphi",
"documentation",
"ndoc"
] | 9 | 4 | 922 | 1 | 0 | 2008-09-10T00:04:15.943000 | 2008-09-10T23:38:48.943000 |
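For illustration, the comment-placement rules from the question, applied to a tiny made-up Delphi unit (the unit, class and method names are hypothetical; only the placement of the /// comments matters):

```pascal
unit NoticeTypes;

interface

type
  /// The comment sits in the interface section, after the "type"
  /// keyword, immediately before the declaration it documents.
  TNotice = class
  public
    /// HelpInsight shows this text when hovering over GetAuthor.
    function GetAuthor: string;
  end;

implementation

function TNotice.GetAuthor: string;
begin
  Result := '';
end;

end.
```

Per the accepted answer, this unit would also need to be listed in the project's .dpr/.dproj (via Project | "Add to Project") before the tool-tips appear.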
53,208 | 53,214 | How do I automatically destroy child processes in Windows? | In C++ Windows app, I launch several long running child processes (currently I use CreateProcess(...) to do this. I want the child processes to be automatically closed if my main processes crashes or is closed. Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up. How do I do this? | The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all processes when the main application ends (when its handles are cleaned up). This code should only be run once.: HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBAL if( ghJob == NULL) {::MessageBox( 0, "Could not create job object", "TEST", MB_OK); } else { JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 };
// Configure all child processes associated with the job to terminate when the job handle is closed jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE; if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli))) {::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK); } } Then when each child process is created, execute the following code to launch each child process and add it to the job object: STARTUPINFO info={sizeof(info)}; PROCESS_INFORMATION processInfo;
// Launch child process - example is notepad.exe if (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo)) {::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK); if(ghJob) { if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess)) {::MessageBox( 0, "Could not AssignProcessToObject", "TEST", MB_OK); } }
// Can we free handles now? Not sure about this. //CloseHandle(processInfo.hProcess); CloseHandle(processInfo.hThread); } VISTA NOTE: See AssignProcessToJobObject always return "access denied" on Vista if you encounter access-denied issues with AssignProcessToObject() on vista. | How do I automatically destroy child processes in Windows? In C++ Windows app, I launch several long running child processes (currently I use CreateProcess(...) to do this. I want the child processes to be automatically closed if my main processes crashes or is closed. Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up. How do I do this? | TITLE:
How do I automatically destroy child processes in Windows?
QUESTION:
In C++ Windows app, I launch several long running child processes (currently I use CreateProcess(...) to do this. I want the child processes to be automatically closed if my main processes crashes or is closed. Because of the requirement that this needs to work for a crash of the "parent", I believe this would need to be done using some API/feature of the operating system. So that all the "child" processes are cleaned up. How do I do this?
ANSWER:
The Windows API supports objects called "Job Objects". The following code will create a "job" that is configured to shut down all processes when the main application ends (when its handles are cleaned up). This code should only be run once: HANDLE ghJob = CreateJobObject( NULL, NULL); // GLOBAL if( ghJob == NULL) {::MessageBox( 0, "Could not create job object", "TEST", MB_OK); } else { JOBOBJECT_EXTENDED_LIMIT_INFORMATION jeli = { 0 };
// Configure all child processes associated with the job to terminate when the job is closed jeli.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE; if( 0 == SetInformationJobObject( ghJob, JobObjectExtendedLimitInformation, &jeli, sizeof(jeli))) {::MessageBox( 0, "Could not SetInformationJobObject", "TEST", MB_OK); } } Then when each child process is created, execute the following code to launch each child process and add it to the job object: STARTUPINFO info={sizeof(info)}; PROCESS_INFORMATION processInfo;
// Launch child process - example is notepad.exe if (::CreateProcess( NULL, "notepad.exe", NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo)) {::MessageBox( 0, "CreateProcess succeeded.", "TEST", MB_OK); if(ghJob) { if(0 == AssignProcessToJobObject( ghJob, processInfo.hProcess)) {::MessageBox( 0, "Could not AssignProcessToObject", "TEST", MB_OK); } }
// Can we free handles now? Not sure about this. //CloseHandle(processInfo.hProcess); CloseHandle(processInfo.hThread); } VISTA NOTE: See AssignProcessToJobObject always return "access denied" on Vista if you encounter access-denied issues with AssignProcessToObject() on vista. | [
"windows",
"process"
] | 71 | 87 | 30,901 | 7 | 0 | 2008-09-10T00:13:35.560000 | 2008-09-10T00:22:49.550000 |
53,220 | 53,239 | Is there an ASP.NET pagination control (Not MVC)? | I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem, what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size | Repeaters don't do this by default. However, GridViews do. Personally, I hate GridViews, so I wrote a Paging/Sorting Repeater control. Basic Steps: Subclass the Repeater Control Add a private PagedDataSource to it Add a public PageSize property Override Control.DataBind Store the Control.DataSource in the PagedDataSource. Bind the Control.DataSource to PagedDataSource Override Control.Render Call Base.Render() Render your paging links. For a walkthrough, you could try this link: https://web.archive.org/web/20210925054103/http://aspnet.4guysfromrolla.com/articles/081804-1.aspx | Is there an ASP.NET pagination control (Not MVC)? I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem, what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size | TITLE:
Is there an ASP.NET pagination control (Not MVC)?
QUESTION:
I've got a search results page that basically consists of a repeater with content in it. What I need is a way to paginate the results. Getting paginated results isn't the problem, what I'm after is a web control that will display a list of the available paged data, preferably by providing the number of results and a page size
ANSWER:
Repeaters don't do this by default. However, GridViews do. Personally, I hate GridViews, so I wrote a Paging/Sorting Repeater control. Basic Steps: Subclass the Repeater Control Add a private PagedDataSource to it Add a public PageSize property Override Control.DataBind Store the Control.DataSource in the PagedDataSource. Bind the Control.DataSource to PagedDataSource Override Control.Render Call Base.Render() Render your paging links. For a walkthrough, you could try this link: https://web.archive.org/web/20210925054103/http://aspnet.4guysfromrolla.com/articles/081804-1.aspx | [
"asp.net",
"pagination"
] | 5 | 7 | 2,512 | 1 | 0 | 2008-09-10T00:29:07.990000 | 2008-09-10T00:42:34.773000 |
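The control described above is ASP.NET-specific, but the slicing arithmetic a PagedDataSource performs is language-agnostic. A minimal sketch of that paging logic (function name hypothetical), in Python for brevity:

```python
import math

def paginate(items, page, page_size):
    """Return one page of items plus the total page count.

    page is 1-based, mirroring what a pager control displays.
    """
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    total_pages = max(1, math.ceil(len(items) / page_size))
    page = min(max(page, 1), total_pages)  # clamp out-of-range requests
    start = (page - 1) * page_size
    return items[start:start + page_size], total_pages

results = list(range(1, 26))          # 25 fake search results
page_items, total = paginate(results, page=3, page_size=10)
print(page_items)  # [21, 22, 23, 24, 25] -- the last, short page
print(total)       # 3
```

A pager control then just renders one link per page number from `total_pages`.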
53,224 | 53,246 | Getting international characters from a web page? | I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, comes out as ÄÄRITALO! That is, html uses escaped markup for the special characters, such as Ä Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it. | I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so: >>> from BeautifulSoup import BeautifulSoup >>> html = " ÄÄRITALO! " >>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES) >>> print soup.contents[0].string ÄÄRITALO! (It would be nice if the standard codecs module included a codec for this, such that you could do "some_string".decode('html_entities') but unfortunately it doesn't!) EDIT: Another solution: Python developer Fredrik Lundh (author of elementtree, among other things) has a function to unsecape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones). | Getting international characters from a web page? I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, comes out as ÄÄRITALO! That is, html uses escaped markup for the special characters, such as Ä Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it. | TITLE:
Getting international characters from a web page?
QUESTION:
I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, comes out as &#196;&#196;RITALO! That is, html uses escaped markup for the special characters, such as &#196;. Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it.
ANSWER:
I would recommend BeautifulSoup for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so: >>> from BeautifulSoup import BeautifulSoup >>> html = " &#196;&#196;RITALO! " >>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES) >>> print soup.contents[0].string ÄÄRITALO! (It would be nice if the standard codecs module included a codec for this, such that you could do "some_string".decode('html_entities') but unfortunately it doesn't!) EDIT: Another solution: Python developer Fredrik Lundh (author of elementtree, among other things) has a function to unescape HTML entities on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).
"python",
"html",
"parsing",
"unicode"
] | 7 | 7 | 1,877 | 3 | 0 | 2008-09-10T00:30:42.697000 | 2008-09-10T00:50:19.747000 |
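The convertEntities flag shown above belongs to the old BeautifulSoup 3 API. If all that is needed is entity decoding, the modern Python standard library does it directly — `html.unescape` handles decimal, hex, and named entities, covering the hex case the answer says BeautifulSoup misses:

```python
from html import unescape

raw = "&#196;&#196;RITALO!"              # decimal entities, as scraped
print(unescape(raw))                     # ÄÄRITALO!

# Named and hex forms are handled by the same call:
print(unescape("&Auml;&#xC4;RITALO!"))   # ÄÄRITALO!
```

This only decodes entities; actual HTML parsing is still a job for a parser such as BeautifulSoup.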
53,225 | 53,322 | How do you check whether a python method is bound or not? | Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to? | def isbound(method): return method.im_self is not None
def instance(bounded_method): return bounded_method.im_self User-defined methods: When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound. In either case, the new method's im_class attribute is the class from which the retrieval takes place, and its im_func attribute is the original function object. In Python 2.6 and 3.0: Instance method objects have new attributes for the object and function comprising the method; the new synonym for im_self is __self__, and im_func is also available as __func__. The old names are still supported in Python 2.6, but are gone in 3.0. | How do you check whether a python method is bound or not? Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to? | TITLE:
How do you check whether a python method is bound or not?
QUESTION:
Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to?
ANSWER:
def isbound(method): return method.im_self is not None
def instance(bounded_method): return bounded_method.im_self User-defined methods: When a user-defined method object is created by retrieving a user-defined function object from a class, its im_self attribute is None and the method object is said to be unbound. When one is created by retrieving a user-defined function object from a class via one of its instances, its im_self attribute is the instance, and the method object is said to be bound. In either case, the new method's im_class attribute is the class from which the retrieval takes place, and its im_func attribute is the original function object. In Python 2.6 and 3.0: Instance method objects have new attributes for the object and function comprising the method; the new synonym for im_self is __self__, and im_func is also available as __func__. The old names are still supported in Python 2.6, but are gone in 3.0. | [
"python",
"python-datamodel"
] | 48 | 49 | 17,930 | 5 | 0 | 2008-09-10T00:31:11.463000 | 2008-09-10T02:19:09.050000 |
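As the answer notes, im_self/im_func are the Python 2 spellings and are gone in 3.0. A sketch of the same checks using the `__self__` synonym, which works on Python 3 — where a method retrieved from the class is just a plain function with no `__self__` attribute at all:

```python
def is_bound(method):
    # A bound method carries its instance in __self__; a plain function
    # (Python 3's version of the "unbound" case) has no such attribute.
    return getattr(method, "__self__", None) is not None

def instance_of(bound_method):
    """Return the instance a bound method is attached to."""
    return bound_method.__self__

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
print(is_bound(Greeter.hello))    # False -- just a function in Python 3
print(is_bound(g.hello))          # True
print(instance_of(g.hello) is g)  # True
```

The helper names are hypothetical; only the `__self__` attribute itself comes from the answer.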
53,253 | 53,352 | Should we stop using Zend WinEnabler? | (This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler. Do you use it? Is it obsolete? Should we stop using it? Is it known to cause handle/memory leaks? Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments | Since Zend appears to not be selling it anymore and all its functionality is available for free (through FastCGI), I would say so. Look at the Zend Core ( installing Zend Core ) if you really want to run PHP on Windows. But really, you should be asking yourself why you are running PHP on Windows at all (we used to do it, and the headaches were enormous, especially since nobody else was doing it). | Should we stop using Zend WinEnabler? (This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler. Do you use it? Is it obsolete? Should we stop using it? Is it known to cause handle/memory leaks? Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments | TITLE:
Should we stop using Zend WinEnabler?
QUESTION:
(This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler. Do you use it? Is it obsolete? Should we stop using it? Is it known to cause handle/memory leaks? Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments
ANSWER:
Since Zend appears to not be selling it anymore and all its functionality is available for free (through FastCGI), I would say so. Look at the Zend Core ( installing Zend Core ) if you really want to run PHP on Windows. But really, you should be asking yourself why you are running PHP on Windows at all (we used to do it, and the headaches were enormous, especially since nobody else was doing it).
"php",
"zend-server"
] | 1 | 0 | 526 | 1 | 0 | 2008-09-10T00:56:49.840000 | 2008-09-10T03:00:01.183000 |
53,256 | 53,261 | Getting closest element by id | I have two elements: When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with s. Is there any simple way to do this? | If a and b are next to each other and have the same parent, you can use the previousSibling property of b to find a. | Getting closest element by id I have two elements: When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with s. Is there any simple way to do this? | TITLE:
Getting closest element by id
QUESTION:
I have two elements: When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with s. Is there any simple way to do this?
ANSWER:
If a and b are next to each other and have the same parent, you can use the previousSibling property of b to find a.
"javascript",
"html",
"dom"
] | 0 | 5 | 2,317 | 4 | 0 | 2008-09-10T00:59:46.417000 | 2008-09-10T01:08:44.353000 |
53,260 | 53,315 | Retaining HTTP POST data when a request is interrupted by a login page | Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works find for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment. | 2 choices: Write out the messy form from the login page, and JavaScript form.submit() it to the page. Have the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting the not logged in user (frameworks vary on how they do this). In pseudo-MVC: CommentController { void AddComment() { if (!Request.User.IsAuthenticated &&!AuthenticateUser()) { return; } // add comment to database }
bool AuthenticateUser() { if (Request.Form["username"] == "") { // show login page foreach (Key key in Request.Form) { // copy form values ViewData.Form.Add("hidden", key, Request.Form[key]); } ViewData.Form.Action = Request.Url;
ShowLoginView(); return false; } else { // validate login return TryLogin(Request.Form["username"], Request.Form["password"]); } } } | Retaining HTTP POST data when a request is interrupted by a login page Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works find for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment. | TITLE:
Retaining HTTP POST data when a request is interrupted by a login page
QUESTION:
Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works fine for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment.
ANSWER:
2 choices: Write out the messy form from the login page, and JavaScript form.submit() it to the page. Have the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting the not logged in user (frameworks vary on how they do this). In pseudo-MVC: CommentController { void AddComment() { if (!Request.User.IsAuthenticated && !AuthenticateUser()) { return; } // add comment to database }
bool AuthenticateUser() { if (Request.Form["username"] == "") { // show login page foreach (Key key in Request.Form) { // copy form values ViewData.Form.Add("hidden", key, Request.Form[key]); } ViewData.Form.Action = Request.Url;
ShowLoginView(); return false; } else { // validate login return TryLogin(Request.Form["username"], Request.Form["password"]); } } } | [
"language-agnostic"
] | 4 | 2 | 977 | 6 | 0 | 2008-09-10T01:08:20.297000 | 2008-09-10T02:08:17.610000 |
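Option 1's "messy form" simply echoes the saved POST fields back as hidden inputs so the browser can resubmit them after login. The generation step is trivial in any language; here is a hypothetical sketch in Python (the helper name is not from the answer), with attribute escaping so user input cannot break out of the markup:

```python
from html import escape

def hidden_fields(form_data):
    """Render saved POST parameters as hidden inputs for a replay form."""
    return "\n".join(
        '<input type="hidden" name="{}" value="{}">'.format(
            escape(name, quote=True), escape(value, quote=True)
        )
        for name, value in form_data.items()
    )

saved_post = {"comment": 'Nice "article" <3', "thread_id": "42"}
print(hidden_fields(saved_post))
# <input type="hidden" name="comment" value="Nice &quot;article&quot; &lt;3">
# <input type="hidden" name="thread_id" value="42">
```

The generated inputs would sit inside a form whose action is the originally requested URL, submitted once the login succeeds.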
53,290 | 53,304 | How could I get my SVN-only host to pull from a git repository? | I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway. | This page should provide a workaround for your problem. http://code.google.com/p/support/wiki/ImportingFromGit Basically, you create a read-only clone of your Git repository in the SVN repository format, exporting updates as you go. An SVN hook could be written that fires after each update to copy the new files where you need them. | How could I get my SVN-only host to pull from a git repository? I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway. | TITLE:
How could I get my SVN-only host to pull from a git repository?
QUESTION:
I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway.
ANSWER:
This page should provide a workaround for your problem. http://code.google.com/p/support/wiki/ImportingFromGit Basically, you create a read-only clone of your Git repository in the SVN repository format, exporting updates as you go. An SVN hook could be written that fires after each update to copy the new files where you need them. | [
"svn",
"git",
"build-automation",
"capistrano"
] | 6 | 3 | 155 | 1 | 0 | 2008-09-10T01:36:21.883000 | 2008-09-10T01:47:47.927000 |
53,292 | 53,313 | Ruby Package Include Problems | I'm trying to use the Optiflag package in my Ruby code and whenever I try to do the necessary require optiflag.rb, my program fails with the standard no such file to load -- optiflag message. I added the directory with that library to my $PATH variable, but it's still not working. Any ideas? | is it a gem? Are you doing require 'rubygems' require 'optiflag' or equivalent? | Ruby Package Include Problems I'm trying to use the Optiflag package in my Ruby code and whenever I try to do the necessary require optiflag.rb, my program fails with the standard no such file to load -- optiflag message. I added the directory with that library to my $PATH variable, but it's still not working. Any ideas? | TITLE:
Ruby Package Include Problems
QUESTION:
I'm trying to use the Optiflag package in my Ruby code and whenever I try to do the necessary require optiflag.rb, my program fails with the standard no such file to load -- optiflag message. I added the directory with that library to my $PATH variable, but it's still not working. Any ideas?
ANSWER:
Is it a gem? Are you doing require 'rubygems' require 'optiflag' or equivalent?
"ruby",
"macos"
] | 1 | 4 | 569 | 3 | 0 | 2008-09-10T01:36:47.280000 | 2008-09-10T02:02:45.953000 |
53,295 | 53,314 | Java Web Services API, however I can't run a JVM on my server | I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated. | To follow up with jodonnell's comment, a Web service connection can be made in just about any server-side language. It is just that the API example they provided was in Java probably because PlanPlusOnline is written in Java. If you have a URL for the service, and an access key, then all you really need to do is figure out how to traverse the XML returned. If you can't do Java, then I suggest PHP because it could be already installed, and have the proper modules loaded. This link might be helpful: http://www.onlamp.com/pub/a/php/2007/07/26/php-web-services.html | Java Web Services API, however I can't run a JVM on my server I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? 
EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated. | TITLE:
Java Web Services API, however I can't run a JVM on my server
QUESTION:
I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated.
ANSWER:
To follow up with jodonnell's comment, a Web service connection can be made in just about any server-side language. It is just that the API example they provided was in Java probably because PlanPlusOnline is written in Java. If you have a URL for the service, and an access key, then all you really need to do is figure out how to traverse the XML returned. If you can't do Java, then I suggest PHP because it could be already installed, and have the proper modules loaded. This link might be helpful: http://www.onlamp.com/pub/a/php/2007/07/26/php-web-services.html | [
"java",
"json",
"web-services"
] | 1 | 1 | 678 | 3 | 0 | 2008-09-10T01:39:55.683000 | 2008-09-10T02:07:11.857000 |
53,309 | 53,959 | Best way to draw text with OpenGL and Cocoa? | Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints. The text on screen may change from frame to frame (for example, a framerate display in the corner) I would like to be able to select any font installed on the system at any size | Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering." | Best way to draw text with OpenGL and Cocoa? Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints. The text on screen may change from frame to frame (for example, a framerate display in the corner) I would like to be able to select any font installed on the system at any size | TITLE:
Best way to draw text with OpenGL and Cocoa?
QUESTION:
Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints. The text on screen may change from frame to frame (for example, a framerate display in the corner) I would like to be able to select any font installed on the system at any size
ANSWER:
Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering." | [
"cocoa",
"opengl"
] | 9 | 8 | 4,110 | 1 | 0 | 2008-09-10T01:52:52.110000 | 2008-09-10T13:02:20.300000 |
53,316 | 54,792 | Hibernate crops clob values oddly | I have a one to many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate: @CollectionOfElements(fetch = EAGER) @JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note")) @Column(name = "substitution") @IndexColumn(name = "listIndex", base = 0) @Lob private List substitutions; So basically I may have a Note with some subsitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz" that both have a relationship to the Note. However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code, it's defintely hibernate. | Many JDBC drivers, early versions of Oracle in particular, have problems while inserting LOBs. Did you make sure that the query Hibernate fires, with the same parameters bound works successfully in your JDBC driver? | Hibernate crops clob values oddly I have a one to many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate: @CollectionOfElements(fetch = EAGER) @JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note")) @Column(name = "substitution") @IndexColumn(name = "listIndex", base = 0) @Lob private List substitutions; So basically I may have a Note with some subsitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz" that both have a relationship to the Note. 
However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code, it's definitely hibernate. | TITLE:
Hibernate crops clob values oddly
QUESTION:
I have a one to many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate: @CollectionOfElements(fetch = EAGER) @JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note")) @Column(name = "substitution") @IndexColumn(name = "listIndex", base = 0) @Lob private List substitutions; So basically I may have a Note with some substitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz" that both have a relationship to the Note. However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code, it's definitely hibernate.
ANSWER:
Many JDBC drivers, early versions of Oracle in particular, have problems while inserting LOBs. Did you make sure that the query Hibernate fires, with the same parameters bound works successfully in your JDBC driver? | [
"java",
"oracle",
"hibernate"
] | 1 | 0 | 1,197 | 2 | 0 | 2008-09-10T02:09:02.197000 | 2008-09-10T17:36:46.130000 |
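A minimal way to act on the answer's suggestion is to run the same insert through plain JDBC, bypassing Hibernate, and check what actually lands in the database. This is only a sketch: the class, the table name NOTE_JOIN_TABLE, and the column names are assumptions inferred from the question's mapping, not code from the thread.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: insert the substitution strings directly via JDBC to see whether
// the driver itself crops the CLOB values. Table/column names are assumed
// from the question's @JoinTable/@Column annotations.
public class ClobInsertCheck {
    static final String INSERT_SQL =
        "insert into NOTE_JOIN_TABLE (note, listIndex, substitution) values (?, ?, ?)";

    public static void insertSubstitutions(Connection conn, long noteId,
                                           String[] subs) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(INSERT_SQL);
        try {
            for (int i = 0; i < subs.length; i++) {
                ps.setLong(1, noteId);    // FK back to the Note row
                ps.setInt(2, i);          // list index
                ps.setString(3, subs[i]); // the CLOB value
                ps.executeUpdate();
            }
        } finally {
            ps.close();
        }
    }
}
```

If "fizzbuzz" survives intact here but not through Hibernate, the mapping is at fault; if it is cropped here too, it is the driver, as the answer suggests.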
53,347 | 53,399 | Crystal Report icons/toolbar not working when deployed on web server | I have built a web page which contains a Crystal Report built using the Crystal libraries included in Visual Studio 2008. It ' works on my machine ' but when deployed to the web server the icons (Export, Print etc) on the Crystal Report toolbar do not display or work. Just seeing the 'red X' where the button image should be and clicking does nothing. I have checked that the toolbar images are actually in the location being looked at on the web server: (C:/Inetpub/wwwroot/aspnet_client/system_web/2_0_50727/CrystalReportWebFormViewer4/images/toolbar/) They are all there. I have checked the permissions on the above mentioned folder on the web server. Gave 'full control' to every user just to test it. I have also intalled/run the 'CRRedist2008_x86.msi' on the web server. Some people have mentioned ensuring that 'crystalreportviewers115' folder is added to my '\wwwroot\aspnet_client\system_web\2_0_50727' folder on the web server but I have been unable to find the 'crystalreportviewers115' to copy it. Appreciate any help or ideas you may be able to offer. Update - OK, so obviously I hadn't checked well enough that the images were in the correct location. | Doh! Someone else here at work figured this out. It was really simple and I should have been able to sort it, but hey, that's how it goes sometimes. Here the fix: On the web server, copy the 'aspnet_client' folder from 'C:\Inetpub\wwwroot' to 'C:\Inetpub\wwwroot\your-website-name'. That's all we did and it's now working. Hope this saves someone from all the fuss I just went through. | Crystal Report icons/toolbar not working when deployed on web server I have built a web page which contains a Crystal Report built using the Crystal libraries included in Visual Studio 2008. It ' works on my machine ' but when deployed to the web server the icons (Export, Print etc) on the Crystal Report toolbar do not display or work. 
Just seeing the 'red X' where the button image should be and clicking does nothing. I have checked that the toolbar images are actually in the location being looked at on the web server: (C:/Inetpub/wwwroot/aspnet_client/system_web/2_0_50727/CrystalReportWebFormViewer4/images/toolbar/) They are all there. I have checked the permissions on the above mentioned folder on the web server. Gave 'full control' to every user just to test it. I have also installed/run the 'CRRedist2008_x86.msi' on the web server. Some people have mentioned ensuring that 'crystalreportviewers115' folder is added to my '\wwwroot\aspnet_client\system_web\2_0_50727' folder on the web server but I have been unable to find the 'crystalreportviewers115' to copy it. Appreciate any help or ideas you may be able to offer. Update - OK, so obviously I hadn't checked well enough that the images were in the correct location. | TITLE:
Crystal Report icons/toolbar not working when deployed on web server
QUESTION:
I have built a web page which contains a Crystal Report built using the Crystal libraries included in Visual Studio 2008. It 'works on my machine' but when deployed to the web server the icons (Export, Print etc) on the Crystal Report toolbar do not display or work. Just seeing the 'red X' where the button image should be and clicking does nothing. I have checked that the toolbar images are actually in the location being looked at on the web server: (C:/Inetpub/wwwroot/aspnet_client/system_web/2_0_50727/CrystalReportWebFormViewer4/images/toolbar/) They are all there. I have checked the permissions on the above mentioned folder on the web server. Gave 'full control' to every user just to test it. I have also installed/run the 'CRRedist2008_x86.msi' on the web server. Some people have mentioned ensuring that 'crystalreportviewers115' folder is added to my '\wwwroot\aspnet_client\system_web\2_0_50727' folder on the web server but I have been unable to find the 'crystalreportviewers115' to copy it. Appreciate any help or ideas you may be able to offer. Update - OK, so obviously I hadn't checked well enough that the images were in the correct location.
ANSWER:
Doh! Someone else here at work figured this out. It was really simple and I should have been able to sort it, but hey, that's how it goes sometimes. Here's the fix: On the web server, copy the 'aspnet_client' folder from 'C:\Inetpub\wwwroot' to 'C:\Inetpub\wwwroot\your-website-name'. That's all we did and it's now working. Hope this saves someone from all the fuss I just went through.
"visual-studio-2008",
"deployment",
"crystal-reports"
] | 6 | 6 | 21,329 | 5 | 0 | 2008-09-10T02:55:05.080000 | 2008-09-10T04:07:17.950000 |
53,364 | 53,735 | Printing in Adobe AIR - Standalone PDF Generation | Is it possible to generate PDF Documents in an Adobe AIR application without resorting to a round trip web service for generating the PDF? I've looked at the initial Flex Reports on GoogleCode but it requires a round trip for generating the actual PDF. Given that AIR is supposed to be the Desktop end for RIAs is there a way to accomplish this? I suspect I am overlooking something but my searches through the documentation don't reveal too much and given the target for AIR I can't believe that it's just something they didn't include. | There's AlivePDF, which is a PDF generation library for ActionScript that should work, it was made just for the situation you describe. | Printing in Adobe AIR - Standalone PDF Generation Is it possible to generate PDF Documents in an Adobe AIR application without resorting to a round trip web service for generating the PDF? I've looked at the initial Flex Reports on GoogleCode but it requires a round trip for generating the actual PDF. Given that AIR is supposed to be the Desktop end for RIAs is there a way to accomplish this? I suspect I am overlooking something but my searches through the documentation don't reveal too much and given the target for AIR I can't believe that it's just something they didn't include. | TITLE:
Printing in Adobe AIR - Standalone PDF Generation
QUESTION:
Is it possible to generate PDF Documents in an Adobe AIR application without resorting to a round trip web service for generating the PDF? I've looked at the initial Flex Reports on GoogleCode but it requires a round trip for generating the actual PDF. Given that AIR is supposed to be the Desktop end for RIAs is there a way to accomplish this? I suspect I am overlooking something but my searches through the documentation don't reveal too much and given the target for AIR I can't believe that it's just something they didn't include.
ANSWER:
There's AlivePDF, which is a PDF generation library for ActionScript that should work, it was made just for the situation you describe. | [
"apache-flex",
"printing",
"air",
"ria"
] | 6 | 7 | 8,348 | 4 | 0 | 2008-09-10T03:18:13.947000 | 2008-09-10T10:09:40.823000 |
53,365 | 53,419 | Getting hibernate to log clob parameters | (see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert. It is logging other value types, such as Integer etc. I have the following in my log4j config: log4j.logger.net.sf.hibernate.SQL=DEBUG log4j.logger.org.hibernate.SQL=DEBUG log4j.logger.net.sf.hibernate.type=DEBUG log4j.logger.org.hibernate.type=DEBUG Which produces output such as: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 However you'll note that it never displays parameter: 3 which is our clob. What I would really want is something like: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.type.ClobType) binding 'something' to parameter: 3 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 (org.hibernate.type.ClobType) binding 'something else' to parameter: 3 How do I get it to show this in the log? | Try using: log4j.logger.net.sf.hibernate=DEBUG log4j.logger.org.hibernate=DEBUG That's the finest level you'll get. If it does not show the information you want, then it's not possible. | Getting hibernate to log clob parameters (see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert. 
It is logging other value types, such as Integer etc. I have the following in my log4j config: log4j.logger.net.sf.hibernate.SQL=DEBUG log4j.logger.org.hibernate.SQL=DEBUG log4j.logger.net.sf.hibernate.type=DEBUG log4j.logger.org.hibernate.type=DEBUG Which produces output such as: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 However you'll note that it never displays parameter: 3 which is our clob. What I would really want is something like: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.type.ClobType) binding 'something' to parameter: 3 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 (org.hibernate.type.ClobType) binding 'something else' to parameter: 3 How do I get it to show this in the log? | TITLE:
Getting hibernate to log clob parameters
QUESTION:
(see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert. It is logging other value types, such as Integer etc. I have the following in my log4j config: log4j.logger.net.sf.hibernate.SQL=DEBUG log4j.logger.org.hibernate.SQL=DEBUG log4j.logger.net.sf.hibernate.type=DEBUG log4j.logger.org.hibernate.type=DEBUG Which produces output such as: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 However you'll note that it never displays parameter: 3 which is our clob. What I would really want is something like: (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '0' to parameter: 2 (org.hibernate.type.ClobType) binding 'something' to parameter: 3 (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?,?,?) (org.hibernate.type.LongType) binding '170650' to parameter: 1 (org.hibernate.type.IntegerType) binding '1' to parameter: 2 (org.hibernate.type.ClobType) binding 'something else' to parameter: 3 How do I get it to show this in the log?
ANSWER:
Try using: log4j.logger.net.sf.hibernate=DEBUG log4j.logger.org.hibernate=DEBUG That's the finest level you'll get. If it does not show the information you want, then it's not possible. | [
"java",
"oracle",
"hibernate"
] | 4 | 1 | 3,224 | 3 | 0 | 2008-09-10T03:19:38.313000 | 2008-09-10T04:39:36.500000 |
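Collected into a single log4j.properties fragment, the answer's suggestion looks like this (a sketch; use whichever of the net.sf or org package prefixes matches your Hibernate version):

```properties
# Broaden from the .SQL / .type categories to the whole Hibernate namespace;
# this is the finest-grained logging Hibernate offers.
log4j.logger.net.sf.hibernate=DEBUG
log4j.logger.org.hibernate=DEBUG
```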
53,370 | 53,374 | Where are people getting that rotating loading image? | I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from? | You can get many different AJAX loading animations in any colour you want here: ajaxload.info | Where are people getting that rotating loading image? I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from? | TITLE:
Where are people getting that rotating loading image?
QUESTION:
I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and is on every web site. Not wanting to be left out is there somewhere I can get this logo, perhaps with a transparent background? Also where did it come from?
ANSWER:
You can get many different AJAX loading animations in any colour you want here: ajaxload.info | [
"ajax",
"image",
"animation"
] | 2 | 14 | 697 | 3 | 0 | 2008-09-10T03:28:04.953000 | 2008-09-10T03:31:06.117000 |
53,379 | 53,449 | using DBMS_APPLICATION_INFO with Jboss | Does anyone have examples of how to use DBMS_APPLICATION_INFO package with JBOSS? We have a various applications which run within JBOSS and share db pools. I would like, at the start of each session these applications to identify themselves to the database using DBMS_APPLICATION_INFO so I can more easily track which sections of the application is causing database issues. I'm not too familiar with session life cycles in JBOSS, but at the end of the day, what needs to happen is at the start and end of a transaction, this package needs to be called. Has anyone done this before? | yes, you can write a wrapper class around your connection pool, and a wraper around the connection so lets say you have: OracleConnection conn=connectionPool.getConnection("java:scott@mydb"); Change it to: public class LoggingConnectionPool extends ConnectionPool{ public OracleConnection getConnection(String datasourceName, String module, String action){ OracleConnection conn=getConnection(datasourceName); CallableStatement call=conn.preparedCall("begin dbms_application_info.setModule(module_name =>?, action_name =>?); end;"); try{ call.setString(1,module); call.setString(2,action); call.execute(); finally{ call.close(); } return new WrappedOracleConnection(conn); } Note the use of WrappedOracleConnection above. You need this because you need to trap the close call public class WrappedOracleConnection extends OracleConnection{ public void close(){ CallableStatement call=this.preparedCall("begin dbms_application_info.setModule(module_name =>?, action_name =>?); end;"); try{ call.setNull(1,Types.VARCHAR); call.setNull(2,Types.VARCHAR); call.execute(); finally{ call.close(); } }
// and you need to implement every other method //for example public CallableStatement prepareCall(String command){ return super.prepareCall(command); }... } Hope this helps, I do something similar on a development server to catch connections that are not closed (not returned to the pool). | using DBMS_APPLICATION_INFO with Jboss Does anyone have examples of how to use DBMS_APPLICATION_INFO package with JBOSS? We have a various applications which run within JBOSS and share db pools. I would like, at the start of each session these applications to identify themselves to the database using DBMS_APPLICATION_INFO so I can more easily track which sections of the application is causing database issues. I'm not too familiar with session life cycles in JBOSS, but at the end of the day, what needs to happen is at the start and end of a transaction, this package needs to be called. Has anyone done this before? | TITLE:
using DBMS_APPLICATION_INFO with Jboss
QUESTION:
Does anyone have examples of how to use the DBMS_APPLICATION_INFO package with JBoss? We have various applications which run within JBoss and share db pools. I would like, at the start of each session, these applications to identify themselves to the database using DBMS_APPLICATION_INFO so I can more easily track which sections of the application are causing database issues. I'm not too familiar with session life cycles in JBoss, but at the end of the day, what needs to happen is at the start and end of a transaction, this package needs to be called. Has anyone done this before?
ANSWER:
Yes, you can write a wrapper class around your connection pool, and a wrapper around the connection. So let's say you have: OracleConnection conn=connectionPool.getConnection("java:scott@mydb"); Change it to: public class LoggingConnectionPool extends ConnectionPool{ public OracleConnection getConnection(String datasourceName, String module, String action){ OracleConnection conn=getConnection(datasourceName); CallableStatement call=conn.prepareCall("begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;"); try{ call.setString(1,module); call.setString(2,action); call.execute(); } finally{ call.close(); } return new WrappedOracleConnection(conn); } } Note the use of WrappedOracleConnection above. You need this because you need to trap the close call: public class WrappedOracleConnection extends OracleConnection{ public void close(){ CallableStatement call=this.prepareCall("begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;"); try{ call.setNull(1,Types.VARCHAR); call.setNull(2,Types.VARCHAR); call.execute(); } finally{ call.close(); } }
// and you need to implement every other method //for example public CallableStatement prepareCall(String command){ return super.prepareCall(command); }... } Hope this helps, I do something similar on a development server to catch connections that are not closed (not returned to the pool). | [
"java",
"oracle",
"jboss"
] | 1 | 1 | 5,573 | 3 | 0 | 2008-09-10T03:37:33.183000 | 2008-09-10T05:06:59.677000 |
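The wrapper idea in the answer boils down to issuing one PL/SQL call before handing the connection out (and clearing it again on close). A standalone helper for that call might look like this sketch; the SessionInfo class and its method name are illustrative, not part of the original code, though dbms_application_info.set_module is the actual Oracle procedure.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

// Sketch of the tagging call the pool/connection wrappers make.
// The helper class itself is hypothetical.
public class SessionInfo {
    static final String SET_MODULE_SQL =
        "begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;";

    public static void tag(Connection conn, String module, String action)
            throws SQLException {
        CallableStatement call = conn.prepareCall(SET_MODULE_SQL);
        try {
            call.setString(1, module);
            call.setString(2, action);
            call.execute();
        } finally {
            call.close(); // always release the statement
        }
    }
}
```

Calling tag(conn, null, null) when the connection is returned clears the MODULE/ACTION columns in V$SESSION, mirroring what WrappedOracleConnection.close() does in the answer.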
53,395 | 84,484 | Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes | I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers. Abstract class: public interface IOtherObjects;
public abstract class MyObjects<T> where T : IOtherObjects {...
public List<T> ToList() {... } } Children: public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects) {
}
public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects) {
} Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise) to then utilise to ToList method of the MyObjects base class, as we do not specifically know the type of T at this point. EDIT As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. but as it has come up quite frequently, I thought I would float it. EDIT @Sara, it's not the specific type of the collection I care about, it could be a List, but still the ToList method of each instance is relatively unusable, without an anonymous type) @aku, true, and this question may be relatively hypothetical, however being able to retrieve, and work with a list of T of objects, knowing only their base type would be very useful. Having the ToList returning a List Of BaseType has been one of my workarounds EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input. EDIT @Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects. @Rob Again Thanks. That has usually been my cludgy workaround (no disrespect:) ). Either that or using the ConvertAll function to Downcast through a delegate. Thanks for taking the time to understand the problem. QUALIFYING EDIT in case I have been a little confusing To be more precise, (I may have let my latest implementation of this get it too complex): lets say I have 2 object types, B and C inheriting from object A. Many scenarios have presented themselves where, from a List of B or a List of C, or in other cases a List of either - but I don't know which if I am at a base class, I have needed a less specific List of A. The above example was a watered-down example of the List Of Less Specific problem's latest incarnation. 
Usually it has presented itself, as I think through possible scenarios that limit the amount of code that needs writing and seems a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand @Rob Yet Again and Sara Thanks, however I do feel I understand generics in all their static contexted glory, and did understand the issues at play here. The actual design of our system and usage of generics it (and I can say this without only a touch of bias, as I was only one of the players in the design), has been done well. It is when I have been working with the core API, I have found situations when I have been in the wrong scope for doing something simply, instead I had to deal with them with a little less elegant than I like (trying either to be clever or perhaps lazy - I'll accept either of those labels). My distaste for what I termed a cludge is largely that we require to do a loop through our record set simply to convert the objects to their base value which may be a performance hit. I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it. | If you have class B: A class C: A And you have List listB; List listC; that you wish to treat as a List of the parent type Then you should use List listA = listB.Cast ().Concat(listC.Cast ()).ToList() | Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers. Abstract class: public interface IOtherObjects;
public abstract class MyObjects<T> where T : IOtherObjects {...
public List<T> ToList() {... } } Children: public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects) {
}
public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects) {
} Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise) to then utilise to ToList method of the MyObjects base class, as we do not specifically know the type of T at this point. EDIT As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. but as it has come up quite frequently, I thought I would float it. EDIT @Sara, it's not the specific type of the collection I care about, it could be a List, but still the ToList method of each instance is relatively unusable, without an anonymous type) @aku, true, and this question may be relatively hypothetical, however being able to retrieve, and work with a list of T of objects, knowing only their base type would be very useful. Having the ToList returning a List Of BaseType has been one of my workarounds EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input. EDIT @Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects. @Rob Again Thanks. That has usually been my cludgy workaround (no disrespect:) ). Either that or using the ConvertAll function to Downcast through a delegate. Thanks for taking the time to understand the problem. QUALIFYING EDIT in case I have been a little confusing To be more precise, (I may have let my latest implementation of this get it too complex): lets say I have 2 object types, B and C inheriting from object A. Many scenarios have presented themselves where, from a List of B or a List of C, or in other cases a List of either - but I don't know which if I am at a base class, I have needed a less specific List of A. The above example was a watered-down example of the List Of Less Specific problem's latest incarnation. 
Usually it has presented itself, as I think through possible scenarios that limit the amount of code that needs writing and seems a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand @Rob Yet Again and Sara Thanks, however I do feel I understand generics in all their static contexted glory, and did understand the issues at play here. The actual design of our system and usage of generics it (and I can say this without only a touch of bias, as I was only one of the players in the design), has been done well. It is when I have been working with the core API, I have found situations when I have been in the wrong scope for doing something simply, instead I had to deal with them with a little less elegant than I like (trying either to be clever or perhaps lazy - I'll accept either of those labels). My distaste for what I termed a cludge is largely that we require to do a loop through our record set simply to convert the objects to their base value which may be a performance hit. I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it. | TITLE:
Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes
QUESTION:
I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers. Abstract class: public interface IOtherObjects;
public abstract class MyObjects<T> where T : IOtherObjects {...
public List<T> ToList() {... } } Children: public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects) {
}
public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects) {
} Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise) to then utilise to ToList method of the MyObjects base class, as we do not specifically know the type of T at this point. EDIT As for specific examples, whenever this has come up, I've thought about it for a while, and done something different instead, so there is no current requirement. but as it has come up quite frequently, I thought I would float it. EDIT @Sara, it's not the specific type of the collection I care about, it could be a List, but still the ToList method of each instance is relatively unusable, without an anonymous type) @aku, true, and this question may be relatively hypothetical, however being able to retrieve, and work with a list of T of objects, knowing only their base type would be very useful. Having the ToList returning a List Of BaseType has been one of my workarounds EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input. EDIT @Rob, Yes it works for a defined type, but not when the type is only known as a List of IOtherObjects. @Rob Again Thanks. That has usually been my cludgy workaround (no disrespect:) ). Either that or using the ConvertAll function to Downcast through a delegate. Thanks for taking the time to understand the problem. QUALIFYING EDIT in case I have been a little confusing To be more precise, (I may have let my latest implementation of this get it too complex): lets say I have 2 object types, B and C inheriting from object A. Many scenarios have presented themselves where, from a List of B or a List of C, or in other cases a List of either - but I don't know which if I am at a base class, I have needed a less specific List of A. The above example was a watered-down example of the List Of Less Specific problem's latest incarnation. 
Usually it has presented itself, as I think through possible scenarios that limit the amount of code that needs writing and seems a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand @Rob Yet Again and Sara Thanks, however I do feel I understand generics in all their static contexted glory, and did understand the issues at play here. The actual design of our system and usage of generics it (and I can say this without only a touch of bias, as I was only one of the players in the design), has been done well. It is when I have been working with the core API, I have found situations when I have been in the wrong scope for doing something simply, instead I had to deal with them with a little less elegant than I like (trying either to be clever or perhaps lazy - I'll accept either of those labels). My distaste for what I termed a cludge is largely that we require to do a loop through our record set simply to convert the objects to their base value which may be a performance hit. I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it.
ANSWER:
If you have class B : A and class C : A, and you have List<B> listB; List<C> listC; that you wish to treat as a List of the parent type, then you should use List<A> listA = listB.Cast<A>().Concat(listC.Cast<A>()).ToList();
"c#",
"generics"
] | 2 | 1 | 317 | 7 | 0 | 2008-09-10T04:03:08.257000 | 2008-09-17T15:27:57.177000 |
53,411 | 53,420 | wav <> mp3 for flash(as3) | I'm wondering about MP3 decoding/encoding, and I was hoping to pull this off in Flash using AS3 I'm sure it'll be a right pain... I have no idea where to start, can anyone offer any pointers? reference material? ----much later--- Thank you all very much for your input... It seems I have a long road ahead of me yet! | See LAME MP3 Encoder. You can checkout their source code and their link page. Mpeg.org should have documents too. | wav <> mp3 for flash(as3) I'm wondering about MP3 decoding/encoding, and I was hoping to pull this off in Flash using AS3 I'm sure it'll be a right pain... I have no idea where to start, can anyone offer any pointers? reference material? ----much later--- Thank you all very much for your input... It seems I have a long road ahead of me yet! | TITLE:
wav <> mp3 for flash(as3)
QUESTION:
I'm wondering about MP3 decoding/encoding, and I was hoping to pull this off in Flash using AS3 I'm sure it'll be a right pain... I have no idea where to start, can anyone offer any pointers? reference material? ----much later--- Thank you all very much for your input... It seems I have a long road ahead of me yet!
ANSWER:
See LAME MP3 Encoder. You can check out their source code and their link page. Mpeg.org should have documents too. | [
"actionscript-3",
"mp3",
"decode",
"shockwave"
] | 5 | 1 | 7,798 | 6 | 0 | 2008-09-10T04:21:42.360000 | 2008-09-10T04:40:46.980000 |
53,428 | 54,046 | What are some good Python ORM solutions? | I'm evaluating and looking at using CherryPy for a project that's basically a JavaScript front-end from the client-side (browser) that talks to a Python web service on the back-end. So, I really need something fast and lightweight on the back-end that I can implement using Python that then speaks to the PostgreSQL DB via an ORM (JSON to the browser). I'm also looking at Django, which I like, since its ORM is built-in. However, I think Django might be a little more than I really need (i.e. more features than I really need == slower?). Anyone have any experience with different Python ORM solutions that can compare and contrast their features and functionality, speed, efficiency, etc.? | SQLAlchemy is more full-featured and powerful (uses the DataMapper pattern). Django ORM has a cleaner syntax and is easier to write for (ActiveRecord pattern). I don't know about performance differences. SQLAlchemy also has a declarative layer that hides some complexity and gives it a ActiveRecord-style syntax more similar to the Django ORM. I wouldn't worry about Django being "too heavy." It's decoupled enough that you can use the ORM if you want without having to import the rest. That said, if I were already using CherryPy for the web layer and just needed an ORM, I'd probably opt for SQLAlchemy. | What are some good Python ORM solutions? I'm evaluating and looking at using CherryPy for a project that's basically a JavaScript front-end from the client-side (browser) that talks to a Python web service on the back-end. So, I really need something fast and lightweight on the back-end that I can implement using Python that then speaks to the PostgreSQL DB via an ORM (JSON to the browser). I'm also looking at Django, which I like, since its ORM is built-in. However, I think Django might be a little more than I really need (i.e. more features than I really need == slower?). 
Anyone have any experience with different Python ORM solutions that can compare and contrast their features and functionality, speed, efficiency, etc.? | TITLE:
What are some good Python ORM solutions?
QUESTION:
I'm evaluating and looking at using CherryPy for a project that's basically a JavaScript front-end from the client-side (browser) that talks to a Python web service on the back-end. So, I really need something fast and lightweight on the back-end that I can implement using Python that then speaks to the PostgreSQL DB via an ORM (JSON to the browser). I'm also looking at Django, which I like, since its ORM is built-in. However, I think Django might be a little more than I really need (i.e. more features than I really need == slower?). Anyone have any experience with different Python ORM solutions that can compare and contrast their features and functionality, speed, efficiency, etc.?
ANSWER:
SQLAlchemy is more full-featured and powerful (uses the DataMapper pattern). Django ORM has a cleaner syntax and is easier to write for (ActiveRecord pattern). I don't know about performance differences. SQLAlchemy also has a declarative layer that hides some complexity and gives it an ActiveRecord-style syntax more similar to the Django ORM. I wouldn't worry about Django being "too heavy." It's decoupled enough that you can use the ORM if you want without having to import the rest. That said, if I were already using CherryPy for the web layer and just needed an ORM, I'd probably opt for SQLAlchemy. | [
"python",
"orm"
] | 257 | 136 | 228,475 | 12 | 0 | 2008-09-10T04:49:32.667000 | 2008-09-10T13:37:05.410000 |
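The ActiveRecord-vs-DataMapper distinction the answer draws can be sketched in a few lines of plain Python — no ORM library involved; the classes and the in-memory "table" lists are invented purely to show where the persistence logic lives in each pattern:

```python
# ActiveRecord style (Django ORM): the domain object persists itself.
class ActiveRecordUser:
    table = []                      # stands in for a real database table

    def __init__(self, name):
        self.name = name

    def save(self):                 # persistence lives on the model itself
        ActiveRecordUser.table.append(self.name)

# DataMapper style (SQLAlchemy's classic pattern): the domain object knows
# nothing about storage; a separate mapper moves it in and out of the table.
class User:
    def __init__(self, name):
        self.name = name

class UserMapper:
    def __init__(self):
        self.table = []

    def save(self, user):           # persistence lives in the mapper
        self.table.append(user.name)

ActiveRecordUser("ada").save()
mapper = UserMapper()
mapper.save(User("grace"))
```

The trade-off mirrors the answer: ActiveRecord is terser to write, while DataMapper keeps domain classes free of storage concerns, which is part of what makes SQLAlchemy feel more powerful.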
53,435 | 66,625 | Getting IIS Worker Process Crash dumps | I'm doing something bad in my ASP.NET app. It could be the any number of CTP libraries I'm using or I'm just not disposing something properly. But when I redeploy my ASP.NET to my Vista IIS7 install or my server's IIS6 install I crash an IIS worker process. I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain. When this crash happens, where can I find the crash dump for analysis? | Download Debugging tools for Windows: http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx Debugging Tools for Windows has has a script (ADPLUS) that allows you to create dumps when a process CRASHES: http://support.microsoft.com/kb/286350 The command should be something like (if you are using IIS6): cscript adplus.vbs -crash -pn w3wp.exe This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file). You can open it in WinDBG (also included in the Debugging Tools for Windows). File > Open Crash dump... By default, WinDBG will show you (next to the command line) the thread were the process crashed. The first thing you need to do in WinDBG is to load the.NET Framework extensions:.loadby sos mscorwks then, you will display the managed callstack:!clrstack if the thread was not running managed code, then you'll need to check the native stack: kpn 200 This should give you some ideas. To continue troubleshooting I recommend you read the following article: http://msdn.microsoft.com/en-us/library/ee817663.aspx | Getting IIS Worker Process Crash dumps I'm doing something bad in my ASP.NET app. 
It could be any number of the CTP libraries I'm using, or I'm just not disposing something properly. But when I redeploy my ASP.NET app to my Vista IIS7 install or my server's IIS6 install I crash an IIS worker process. I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain. When this crash happens, where can I find the crash dump for analysis? | TITLE:
Getting IIS Worker Process Crash dumps
QUESTION:
I'm doing something bad in my ASP.NET app. It could be any number of the CTP libraries I'm using, or I'm just not disposing something properly. But when I redeploy my ASP.NET app to my Vista IIS7 install or my server's IIS6 install I crash an IIS worker process. I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain. When this crash happens, where can I find the crash dump for analysis?
ANSWER:
Download Debugging Tools for Windows: http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx Debugging Tools for Windows has a script (ADPLUS) that allows you to create dumps when a process CRASHES: http://support.microsoft.com/kb/286350 The command should be something like (if you are using IIS6): cscript adplus.vbs -crash -pn w3wp.exe This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file). You can open it in WinDBG (also included in the Debugging Tools for Windows): File > Open Crash dump... By default, WinDBG will show you (next to the command line) the thread where the process crashed. The first thing you need to do in WinDBG is to load the .NET Framework extensions: .loadby sos mscorwks Then you can display the managed call stack: !clrstack If the thread was not running managed code, then you'll need to check the native stack: kpn 200 This should give you some ideas. To continue troubleshooting I recommend you read the following article: http://msdn.microsoft.com/en-us/library/ee817663.aspx | [
"c#",
"asp.net",
"multithreading",
"debugging",
"iis"
] | 9 | 16 | 19,920 | 2 | 0 | 2008-09-10T04:52:58.557000 | 2008-09-15T20:29:59.543000 |
53,439 | 53,453 | Is it possible to unit test a class that makes P/Invoke calls? | I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system. | Guideline: Don't test code that you haven't written. You shouldn't be concerned with WinAPI implementation not working (most probably it works as expected). Your concern should be testing the 'Wiring' i.e. if your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell if you the call was made with the right params. If yes, you're done. Create IWinAPIFacade (with relevant WinAPI methods) and implementation CWinAPIFacade. Write a test which plugs in a mock of IWinAPIFacade and verify that the appropriate call is made Write a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning) Implement CWinAPIFacade which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests. | Is it possible to unit test a class that makes P/Invoke calls? I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. 
However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system. | TITLE:
Is it possible to unit test a class that makes P/Invoke calls?
QUESTION:
I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system.
ANSWER:
Guideline: Don't test code that you haven't written. You shouldn't be concerned with the WinAPI implementation not working (most probably it works as expected). Your concern should be testing the 'wiring', i.e. whether your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell you if the call was made with the right params. If yes, you're done. Create IWinAPIFacade (with relevant WinAPI methods) and implementation CWinAPIFacade. Write a test which plugs in a mock of IWinAPIFacade and verifies that the appropriate call is made. Write a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning). Implement CWinAPIFacade which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests. | [
"c#",
"unit-testing",
"unmanaged",
"impersonation"
] | 8 | 12 | 1,679 | 2 | 0 | 2008-09-10T04:55:21.780000 | 2008-09-10T05:12:00.897000 |
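The "test the wiring, not the WinAPI" approach from the answer is language-agnostic; here is the same shape sketched in Python with unittest.mock rather than the thread's C# (the facade and method names are invented stand-ins for IWinAPIFacade and LogonUser):

```python
from unittest import mock

class WinApiFacade:
    """Thin facade; the real implementation would blind-delegate to the
    native logon call and be verified manually, not unit tested."""
    def logon_user(self, username, domain, password):
        raise NotImplementedError

class Impersonator:
    """Class under test: it talks only to the facade, never to the API."""
    def __init__(self, facade):
        self._facade = facade

    def run_as(self, username, domain, password):
        return self._facade.logon_user(username, domain, password)

# The unit test plugs in a mock facade and checks that the right call
# was made with the right parameters -- no real user account needed.
facade = mock.Mock(spec=WinApiFacade)
Impersonator(facade).run_as("alice", "CORP", "s3cret")
facade.logon_user.assert_called_once_with("alice", "CORP", "s3cret")
```

This is exactly why the impersonation tests no longer need a real account on the system: only the thin delegating layer ever touches the OS, and everything above it is verified against the mock.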
53,450 | 53,451 | When building a Handler, should it be .ashx or .axd? | Say I'm building an ASP.Net class that inherits from IHttpHandler, should I wire this up to a URL ending in.ashx, or should I use the.axd extension? Does it matter as long as there's no naming conflict? | Ahh.. ScottGu says it doesn't matter, but.ashx is slightly better because there's less chance of a conflict with things like trace.axd and others. That's why the flag went up in my head that.ashx might be better. http://forums.asp.net/t/964074.aspx | When building a Handler, should it be .ashx or .axd? Say I'm building an ASP.Net class that inherits from IHttpHandler, should I wire this up to a URL ending in.ashx, or should I use the.axd extension? Does it matter as long as there's no naming conflict? | TITLE:
When building a Handler, should it be .ashx or .axd?
QUESTION:
Say I'm building an ASP.Net class that inherits from IHttpHandler, should I wire this up to a URL ending in .ashx, or should I use the .axd extension? Does it matter as long as there's no naming conflict?
ANSWER:
Ahh.. ScottGu says it doesn't matter, but .ashx is slightly better because there's less chance of a conflict with things like trace.axd and others. That's why the flag went up in my head that .ashx might be better. http://forums.asp.net/t/964074.aspx
"asp.net"
] | 3 | 3 | 1,616 | 2 | 0 | 2008-09-10T05:07:25.777000 | 2008-09-10T05:10:00.560000 |
53,464 | 53,488 | How to stop IIS asking authentication for default website on localhost | I have IIS 5.1 installed on Windows XP Pro SP2. Besides I have installed VS 2008 Express with.NET 3.5. So obviously IIS is configured for ASP.NET automatically for.NET 3.5 The problem is whenever I access http://localhost IE & Firefox both presents authentication box. Even if I enter Administrator user and its password, the authentication fails. I have already checked the anonymous user access (with IUSR_ user and password is controlled by IIS) in Directory Security options of default website. However other deployed web apps work fine (does not ask for any authentication). In IE this authentication process stops if I add http://localhost in Intranet sites option. Please note that the file system is FAT32 when IIS is installed. Regards, Jatan | This is most likely a NT file permissions problem. IUSR_ needs to have file system permissions to read whatever file you're requesting (like /inetpub/wwwroot/index.htm). If you still have trouble, check the IIS logs, typically at \windows\system32\logfiles\W3SVC*. | How to stop IIS asking authentication for default website on localhost I have IIS 5.1 installed on Windows XP Pro SP2. Besides I have installed VS 2008 Express with.NET 3.5. So obviously IIS is configured for ASP.NET automatically for.NET 3.5 The problem is whenever I access http://localhost IE & Firefox both presents authentication box. Even if I enter Administrator user and its password, the authentication fails. I have already checked the anonymous user access (with IUSR_ user and password is controlled by IIS) in Directory Security options of default website. However other deployed web apps work fine (does not ask for any authentication). In IE this authentication process stops if I add http://localhost in Intranet sites option. Please note that the file system is FAT32 when IIS is installed. Regards, Jatan | TITLE:
How to stop IIS asking authentication for default website on localhost
QUESTION:
I have IIS 5.1 installed on Windows XP Pro SP2. Besides, I have installed VS 2008 Express with .NET 3.5, so obviously IIS is configured for ASP.NET automatically for .NET 3.5. The problem is whenever I access http://localhost IE & Firefox both present an authentication box. Even if I enter the Administrator user and its password, the authentication fails. I have already checked the anonymous user access (with the IUSR_ user and password controlled by IIS) in the Directory Security options of the default website. However other deployed web apps work fine (they do not ask for any authentication). In IE this authentication process stops if I add http://localhost in the Intranet sites option. Please note that the file system is FAT32 where IIS is installed. Regards, Jatan
ANSWER:
This is most likely an NT file permissions problem. IUSR_ needs to have file system permissions to read whatever file you're requesting (like /inetpub/wwwroot/index.htm). If you still have trouble, check the IIS logs, typically at \windows\system32\logfiles\W3SVC*. | [
"internet-explorer",
"iis",
"authentication"
] | 14 | 15 | 122,530 | 7 | 0 | 2008-09-10T05:20:54.607000 | 2008-09-10T05:50:28.170000 |
53,472 | 53,588 | What is the best way to convert a Ruby string range to a Range object | I have some Ruby code which takes dates on the command line in the format: -d 20080101,20080201..20080229,20080301 I want to run for all dates between 20080201 and 20080229 inclusive and the other dates present in the list. I can get the string 20080201..20080229, so is the best way to convert this to a Range instance? Currently, I am using eval, but it feels like there should be a better way. @Purfideas I was kind of looking for a more general answer for converting any string of type int..int to a Range I guess. | But then just do ends = '20080201..20080229'.split('..').map{|d| Integer(d)} ends[0]..ends[1] anyway I don't recommend eval, for security reasons | What is the best way to convert a Ruby string range to a Range object I have some Ruby code which takes dates on the command line in the format: -d 20080101,20080201..20080229,20080301 I want to run for all dates between 20080201 and 20080229 inclusive and the other dates present in the list. I can get the string 20080201..20080229, so is the best way to convert this to a Range instance? Currently, I am using eval, but it feels like there should be a better way. @Purfideas I was kind of looking for a more general answer for converting any string of type int..int to a Range I guess. | TITLE:
What is the best way to convert a Ruby string range to a Range object
QUESTION:
I have some Ruby code which takes dates on the command line in the format: -d 20080101,20080201..20080229,20080301 I want to run for all dates between 20080201 and 20080229 inclusive and the other dates present in the list. I can get the string 20080201..20080229, so what is the best way to convert this to a Range instance? Currently, I am using eval, but it feels like there should be a better way. @Purfideas I was kind of looking for a more general answer for converting any string of type int..int to a Range, I guess.
ANSWER:
But then just do: ends = '20080201..20080229'.split('..').map{|d| Integer(d)} and then ends[0]..ends[1]. Anyway, I don't recommend eval, for security reasons. | [
"ruby"
] | 24 | 14 | 15,674 | 9 | 0 | 2008-09-10T05:28:46.100000 | 2008-09-10T08:04:33.387000 |
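The answer's split-and-convert approach generalises to any "int..int" string, which is what the asker wanted. A sketch of the same idea in Python (the function name is invented; like the Integer() call in the Ruby answer, int() raises on malformed input instead of executing it the way eval would):

```python
def parse_int_range(text):
    # Split on the first ".." and convert both ends with int(), which
    # raises ValueError on junk -- the safety that eval does not give you.
    low, high = (int(part) for part in text.split("..", 1))
    return range(low, high + 1)  # +1 makes it inclusive, like Ruby's a..b

print(list(parse_int_range("3..7")))  # [3, 4, 5, 6, 7]
```

For the date use case, each comma-separated token on the command line would either parse as a single integer or, if it contains "..", go through this function to expand into the inclusive run of values.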
53,473 | 53,593 | Javascript - Applying class to an HTML tag given an attribute/value | I am trying to apply styles to HTML tags dynamically by reading in the value of certain HTML attributes and applying a class name based on their values. For instance, if I have: I want to apply a class="h30" to that paragraph so that I can style it in my style sheet. I can't find any information on getting the value of an attribute that is not an id or class. Help? | I would highly recommend using something like jquery where adding classes is trivial: $("#someId").addClass("newClass"); so in your case: $("p[height='30']").addClass("h30"); so this selects all paragraph tags where the height attribute is 30 and adds the class h30 to it. | Javascript - Applying class to an HTML tag given an attribute/value I am trying to apply styles to HTML tags dynamically by reading in the value of certain HTML attributes and applying a class name based on their values. For instance, if I have: I want to apply a class="h30" to that paragraph so that I can style it in my style sheet. I can't find any information on getting the value of an attribute that is not an id or class. Help? | TITLE:
Javascript - Applying class to an HTML tag given an attribute/value
QUESTION:
I am trying to apply styles to HTML tags dynamically by reading in the value of certain HTML attributes and applying a class name based on their values. For instance, if I have: <p height="30"> I want to apply a class="h30" to that paragraph so that I can style it in my style sheet. I can't find any information on getting the value of an attribute that is not an id or class. Help?
ANSWER:
I would highly recommend using something like jquery where adding classes is trivial: $("#someId").addClass("newClass"); so in your case: $("p[height='30']").addClass("h30"); so this selects all paragraph tags where the height attribute is 30 and adds the class h30 to it. | [
"javascript",
"html"
] | 2 | 5 | 2,865 | 4 | 0 | 2008-09-10T05:29:40.373000 | 2008-09-10T08:06:15.737000 |
53,479 | 53,507 | ASP.NET MVC vs. Web client software factory (WCSF) | I have recently been doing a bit of investigation into the different types of Model View architectures, and need to decide which one to pursue for future in-house development. As I'm currently working in a Microsoft shop that has ASP.NET skills, it seems my options are between ASP.NET MVC and WCSF (Monorail is probably out of the as it wouldn't be supported by Microsoft). After reading the ASP.NET MVC framework, using the WCSF as a yardstick, I picked up the following points: ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. What are some of the other considerations? What have I misunderstood? Is there anybody out there who has used both frameworks and has advice either way? | ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You should think of WCSF as guidance about how to use the existing WebForms infrastructure, especially introducing Model-View-Presenter to help enforce separation of concerns. It also increases the testability of the resulting code. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. If you can target 3.5 SP1, you can use the new Routing system with a traditional WebForms site. Routing is not limited to MVC. For example, take a look at Dynamic Data (which also ships in 3.5 SP1). An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. This is true because it uses the new abstractions classes for HttpContext, HttpRequest, HttpResponse, etc. There's nothing inherently more testable about the MVC pattern than the MVP pattern. 
They're both instances of "Separated Presentation", and both increase testability. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET doesn't allow this. In Model-View-Presenter, since the outside world interacts with views (i.e., the URL points to the view), the views will naturally be responding to these events. They should be as simple as possible, either by calling the presenter or by offering events that the presenter can subscribe to. Model-View-Controller overcomes this limitation by having the outside world interact with controllers. This means your views can be a lot "dumber" about non-presentation things. As for which you should use, I think the answer comes down to which one best suits your project goals. Sometimes WebForms and the rich third party control vendor availability will be preferable, and in some cases, raw simplicity and fine-grained HTML control will favor MVC. | ASP.NET MVC vs. Web client software factory (WCSF) I have recently been doing a bit of investigation into the different types of Model View architectures, and need to decide which one to pursue for future in-house development. As I'm currently working in a Microsoft shop that has ASP.NET skills, it seems my options are between ASP.NET MVC and WCSF (Monorail is probably out of the as it wouldn't be supported by Microsoft). After reading the ASP.NET MVC framework, using the WCSF as a yardstick, I picked up the following points: ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. What are some of the other considerations? What have I misunderstood? 
Is there anybody out there who has used both frameworks and has advice either way? | TITLE:
ASP.NET MVC vs. Web client software factory (WCSF)
QUESTION:
I have recently been doing a bit of investigation into the different types of Model View architectures, and need to decide which one to pursue for future in-house development. As I'm currently working in a Microsoft shop that has ASP.NET skills, it seems my options are between ASP.NET MVC and WCSF (Monorail is probably out of the running as it wouldn't be supported by Microsoft). After reading about the ASP.NET MVC framework, using the WCSF as a yardstick, I picked up the following points: ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You have more control over the URLs in an ASP.NET MVC site as opposed to a WCSF site. An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. What are some of the other considerations? What have I misunderstood? Is there anybody out there who has used both frameworks and has advice either way?
ANSWER:
ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can. You should think of WCSF as guidance about how to use the existing WebForms infrastructure, especially introducing Model-View-Presenter to help enforce separation of concerns. It also increases the testability of the resulting code. You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site. If you can target 3.5 SP1, you can use the new Routing system with a traditional WebForms site. Routing is not limited to MVC. For example, take a look at Dynamic Data (which also ships in 3.5 SP1). An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. This is true because it uses the new abstraction classes for HttpContext, HttpRequest, HttpResponse, etc. There's nothing inherently more testable about the MVC pattern than the MVP pattern. They're both instances of "Separated Presentation", and both increase testability. It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this. In Model-View-Presenter, since the outside world interacts with views (i.e., the URL points to the view), the views will naturally be responding to these events. They should be as simple as possible, either by calling the presenter or by offering events that the presenter can subscribe to. Model-View-Controller overcomes this limitation by having the outside world interact with controllers. This means your views can be a lot "dumber" about non-presentation things. As for which you should use, I think the answer comes down to which one best suits your project goals. Sometimes WebForms and the rich third party control vendor availability will be preferable, and in some cases, raw simplicity and fine-grained HTML control will favor MVC. | [
"asp.net",
"asp.net-mvc",
"wcsf"
] | 15 | 15 | 8,036 | 7 | 0 | 2008-09-10T05:37:26 | 2008-09-10T06:14:47.707000 |
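The MVP wiring the answer describes — a "dumb" view that raises events, and a presenter that subscribes to them and holds the non-UI logic — can be sketched in a few lines of Python purely to illustrate the pattern (class names invented; the real thing would be a WebForms code-behind and a presenter class):

```python
# Minimal Model-View-Presenter wiring. The view stays "dumb": it only
# raises events and displays things; the presenter holds the logic,
# which is what makes this pattern testable without a real UI.
class View:
    def __init__(self):
        self._handlers = []
        self.shown = None

    def on_save_clicked(self, handler):   # presenter subscribes here
        self._handlers.append(handler)

    def click_save(self, data):           # simulates the UI event firing
        for handler in self._handlers:
            handler(data)

    def show_message(self, text):
        self.shown = text

class Presenter:
    def __init__(self, view, model):
        self.view, self.model = view, model
        view.on_save_clicked(self.save)

    def save(self, data):
        self.model.append(data)           # non-UI logic lives here
        self.view.show_message("saved")

model, view = [], View()
Presenter(view, model)
view.click_save("record-1")
```

In MVC, by contrast, the outside world would hit the Presenter-analogue (the controller) directly, and the view would not even own the event subscription — which is the difference the answer is pointing at.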
53,480 | 53,539 | Fuzzy text (sentences/titles) matching in C# | Hey, I'm using Levenshteins algorithm to get distance between source and target string. also I have method which returns value from 0 to 1: /// /// Gets the similarity between two strings. /// All relation scores are in the [0, 1] range, /// which means that if the score gets a maximum value (equal to 1) /// then the two string are absolutely similar /// /// The string1. /// The string2. /// public static float CalculateSimilarity(String s1, String s2) { if ((s1 == null) || (s2 == null)) return 0.0f;
float dis = LevenshteinDistance.Compute(s1, s2); float maxLen = s1.Length; if (maxLen < s2.Length) maxLen = s2.Length; if (maxLen == 0.0F) return 1.0F; else return 1.0F - dis / maxLen; } but this for me is not enough. Because I need a more complex way to match two sentences. For example, I want to automatically tag some music: I have original song names, and I have songs with trash, like super, quality, years like 2007, 2008, etc.. also some files have just http://trash..thash..song_name_mp3.mp3, others are normal. I want to create an algorithm which will work just more perfectly than mine now.. Maybe anyone can help me? Here is my current algo: /// /// if we need to ignore this target. /// /// The target string. /// private bool doIgnore(String targetString) { if ((targetString != null) && (targetString != String.Empty)) { for (int i = 0; i < ignoreWordsList.Length; ++i) { //* if we found ignore word or target string matching some special cases like years (Regex). if (targetString == ignoreWordsList[i] || (isMatchInSpecialCases(targetString))) return true; } }
return false; }
/// /// Removes the duplicates. /// /// The list. private void removeDuplicates(List<String> list) { if ((list != null) && (list.Count > 0)) { for (int i = 0; i < list.Count - 1; ++i) { if (list[i] == list[i + 1]) { list.RemoveAt(i); --i; } } } }
/// /// Does the fuzzy match. /// /// The target title. /// private TitleMatchResult doFuzzyMatch(String targetTitle) { TitleMatchResult matchResult = null;
if (targetTitle != null && targetTitle != String.Empty) { try { //* change target title (string) to lower case. targetTitle = targetTitle.ToLower();
//* scores, we will select higher score at the end. Dictionary<Title, float> scores = new Dictionary<Title, float>();
//* do split special chars: '-', ' ', '.', ',', '?', '/', ':', ';', '%', '(', ')', '#', '\"', '\'', '!', '|', '^', '*', '[', ']', '{', '}', '=', '!', '+', '_' List<String> targetKeywords = new List<String>(targetTitle.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));
//* remove all trash from keywords, like super, quality, etc.. targetKeywords.RemoveAll(delegate(String x) { return doIgnore(x); }); //* sort keywords. targetKeywords.Sort(); //* remove some duplicates. removeDuplicates(targetKeywords);
//* go through all original titles. foreach (Title sourceTitle in titles) { float tempScore = 0f; //* split orig. title to keywords list. List<String> sourceKeywords = new List<String>(sourceTitle.Name.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries)); sourceKeywords.Sort(); removeDuplicates(sourceKeywords);
//* go through all source ttl keywords. foreach (String keyw1 in sourceKeywords) { float max = float.MinValue; foreach (String keyw2 in targetKeywords) { float currentScore = StringMatching.StringMatching.CalculateSimilarity(keyw1.ToLower(), keyw2); if (currentScore > max) { max = currentScore; } } tempScore += max; }
//* calculate average score. float averageScore = (tempScore / Math.Max(targetKeywords.Count, sourceKeywords.Count));
//* if average score is bigger than minimal score and target title is not in this source title ignore list. if (averageScore >= minimalScore &&!sourceTitle.doIgnore(targetTitle)) { //* add score. scores.Add(sourceTitle, averageScore); } }
//* choose biggest score. float maxi = float.MinValue; foreach (KeyValuePair kvp in scores) { if (kvp.Value > maxi) { maxi = kvp.Value; matchResult = new TitleMatchResult(maxi, kvp.Key, MatchTechnique.FuzzyLogic); } } } catch { } } //* return result. return matchResult; } This works normally but just in some cases, a lot of titles which should match, does not match... I think I need some kind of formula to play with weights and etc, but i can't think of one.. Ideas? Suggestions? Algos? by the way I already know this topic (My colleague already posted it but we cannot come with a proper solution for this problem.): Approximate string matching algorithms | Your problem here may be distinguishing between noise words and useful data: Rolling_Stones.Best_of_2003.Wild_Horses.mp3 Super.Quality.Wild_Horses.mp3 Tori_Amos.Wild_Horses.mp3 You may need to produce a dictionary of noise words to ignore. That seems clunky, but I'm not sure there's an algorithm that can distinguish between band/album names and noise. | Fuzzy text (sentences/titles) matching in C# Hey, I'm using Levenshteins algorithm to get distance between source and target string. also I have method which returns value from 0 to 1: /// /// Gets the similarity between two strings. /// All relation scores are in the [0, 1] range, /// which means that if the score gets a maximum value (equal to 1) /// then the two string are absolutely similar /// /// The string1. /// The string2. /// public static float CalculateSimilarity(String s1, String s2) { if ((s1 == null) || (s2 == null)) return 0.0f;
float dis = LevenshteinDistance.Compute(s1, s2); float maxLen = s1.Length; if (maxLen < s2.Length) maxLen = s2.Length; if (maxLen == 0.0F) return 1.0F; else return 1.0F - dis / maxLen; } but this for me is not enough. Because I need more complex way to match two sentences. For example I want automatically tag some music, I have original song names, and i have songs with trash, like super, quality, years like 2007, 2008, etc..etc.. also some files have just http://trash..thash..song_name_mp3.mp3, other are normal. I want to create an algorithm which will work just more perfect than mine now.. Maybe anyone can help me? here is my current algo: /// /// if we need to ignore this target. /// /// The target string. /// private bool doIgnore(String targetString) { if ((targetString!= null) && (targetString!= String.Empty)) { for (int i = 0; i < ignoreWordsList.Length; ++i) { //* if we found ignore word or target string matching some some special cases like years (Regex). if (targetString == ignoreWordsList[i] || (isMatchInSpecialCases(targetString))) return true; } }
return false; }
/// /// Removes the duplicates. /// /// The list. private void removeDuplicates(List list) { if ((list!= null) && (list.Count > 0)) { for (int i = 0; i < list.Count - 1; ++i) { if (list[i] == list[i + 1]) { list.RemoveAt(i); --i; } } } }
/// /// Does the fuzzy match. /// /// The target title. /// private TitleMatchResult doFuzzyMatch(String targetTitle) { TitleMatchResult matchResult = null;
if (targetTitle!= null && targetTitle!= String.Empty) { try { //* change target title (string) to lower case. targetTitle = targetTitle.ToLower();
//* scores, we will select higher score at the end. Dictionary scores = new Dictionary ();
//* do split special chars: '-', ' ', '.', ',', '?', '/', ':', ';', '%', '(', ')', '#', '\"', '\'', '!', '|', '^', '*', '[', ']', '{', '}', '=', '!', '+', '_' List targetKeywords = new List (targetTitle.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));
//* remove all trash from keywords, like super, quality, etc.. targetKeywords.RemoveAll(delegate(String x) { return doIgnore(x); }); //* sort keywords. targetKeywords.Sort(); //* remove some duplicates. removeDuplicates(targetKeywords);
//* go through all original titles. foreach (Title sourceTitle in titles) { float tempScore = 0f; //* split orig. title to keywords list. List sourceKeywords = new List (sourceTitle.Name.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries)); sourceKeywords.Sort(); removeDuplicates(sourceKeywords);
//* go through all source ttl keywords. foreach (String keyw1 in sourceKeywords) { float max = float.MinValue; foreach (String keyw2 in targetKeywords) { float currentScore = StringMatching.StringMatching.CalculateSimilarity(keyw1.ToLower(), keyw2); if (currentScore > max) { max = currentScore; } } tempScore += max; }
//* calculate average score. float averageScore = (tempScore / Math.Max(targetKeywords.Count, sourceKeywords.Count));
//* if average score is bigger than minimal score and target title is not in this source title ignore list. if (averageScore >= minimalScore &&!sourceTitle.doIgnore(targetTitle)) { //* add score. scores.Add(sourceTitle, averageScore); } }
//* choose biggest score. float maxi = float.MinValue; foreach (KeyValuePair kvp in scores) { if (kvp.Value > maxi) { maxi = kvp.Value; matchResult = new TitleMatchResult(maxi, kvp.Key, MatchTechnique.FuzzyLogic); } } } catch { } } //* return result. return matchResult; } This works normally but just in some cases, a lot of titles which should match, does not match... I think I need some kind of formula to play with weights and etc, but i can't think of one.. Ideas? Suggestions? Algos? by the way I already know this topic (My colleague already posted it but we cannot come with a proper solution for this problem.): Approximate string matching algorithms | TITLE:
Fuzzy text (sentences/titles) matching in C#
QUESTION:
Hey, I'm using Levenshtein's algorithm to get the distance between a source and a target string. I also have a method which returns a value from 0 to 1:

/// <summary>
/// Gets the similarity between two strings.
/// All relation scores are in the [0, 1] range, which means that if the score
/// gets a maximum value (equal to 1) then the two strings are absolutely similar.
/// </summary>
/// <param name="s1">The string1.</param>
/// <param name="s2">The string2.</param>
public static float CalculateSimilarity(String s1, String s2)
{
    if ((s1 == null) || (s2 == null)) return 0.0f;

    float dis = LevenshteinDistance.Compute(s1, s2);
    float maxLen = s1.Length;
    if (maxLen < s2.Length)
        maxLen = s2.Length;
    if (maxLen == 0.0F)
        return 1.0F;
    else
        return 1.0F - dis / maxLen;
}

but this for me is not enough, because I need a more complex way to match two sentences. For example, I want to automatically tag some music. I have the original song names, and I have songs with trash like "super", "quality", years like 2007, 2008, etc. Also some files are just http://trash..thash..song_name_mp3.mp3, others are normal. I want to create an algorithm which will work better than mine does now. Maybe anyone can help me? Here is my current algorithm:

/// <summary>
/// Whether we need to ignore this target.
/// </summary>
/// <param name="targetString">The target string.</param>
private bool doIgnore(String targetString)
{
    if ((targetString != null) && (targetString != String.Empty))
    {
        for (int i = 0; i < ignoreWordsList.Length; ++i)
        {
            //* if we found an ignore word, or the target string matches some special cases like years (Regex).
            if (targetString == ignoreWordsList[i] || (isMatchInSpecialCases(targetString)))
                return true;
        }
    }
    return false;
}

/// <summary>
/// Removes the duplicates (assumes a sorted list).
/// </summary>
/// <param name="list">The list.</param>
private void removeDuplicates(List<String> list)
{
    if ((list != null) && (list.Count > 0))
    {
        for (int i = 0; i < list.Count - 1; ++i)
        {
            if (list[i] == list[i + 1])
            {
                list.RemoveAt(i);
                --i;
            }
        }
    }
}

/// <summary>
/// Does the fuzzy match.
/// </summary>
/// <param name="targetTitle">The target title.</param>
private TitleMatchResult doFuzzyMatch(String targetTitle)
{
    TitleMatchResult matchResult = null;

    if (targetTitle != null && targetTitle != String.Empty)
    {
        try
        {
            //* change target title (string) to lower case.
            targetTitle = targetTitle.ToLower();

            //* scores, we will select the highest score at the end.
            Dictionary<Title, float> scores = new Dictionary<Title, float>();

            //* split on special chars: '-', ' ', '.', ',', '?', '/', ':', ';', '%', '(', ')', '#', '\"', '\'', '!', '|', '^', '*', '[', ']', '{', '}', '=', '!', '+', '_'
            List<String> targetKeywords = new List<String>(targetTitle.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));

            //* remove all trash from keywords, like super, quality, etc.
            targetKeywords.RemoveAll(delegate(String x) { return doIgnore(x); });
            //* sort keywords.
            targetKeywords.Sort();
            //* remove duplicates.
            removeDuplicates(targetKeywords);

            //* go through all original titles.
            foreach (Title sourceTitle in titles)
            {
                float tempScore = 0f;
                //* split the original title into a keywords list.
                List<String> sourceKeywords = new List<String>(sourceTitle.Name.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));
                sourceKeywords.Sort();
                removeDuplicates(sourceKeywords);

                //* go through all source title keywords.
                foreach (String keyw1 in sourceKeywords)
                {
                    float max = float.MinValue;
                    foreach (String keyw2 in targetKeywords)
                    {
                        float currentScore = StringMatching.StringMatching.CalculateSimilarity(keyw1.ToLower(), keyw2);
                        if (currentScore > max)
                        {
                            max = currentScore;
                        }
                    }
                    tempScore += max;
                }

                //* calculate the average score.
                float averageScore = (tempScore / Math.Max(targetKeywords.Count, sourceKeywords.Count));

                //* if the average score is bigger than the minimal score, and the target title is not in this source title's ignore list.
                if (averageScore >= minimalScore && !sourceTitle.doIgnore(targetTitle))
                {
                    //* add score.
                    scores.Add(sourceTitle, averageScore);
                }
            }

            //* choose the biggest score.
            float maxi = float.MinValue;
            foreach (KeyValuePair<Title, float> kvp in scores)
            {
                if (kvp.Value > maxi)
                {
                    maxi = kvp.Value;
                    matchResult = new TitleMatchResult(maxi, kvp.Key, MatchTechnique.FuzzyLogic);
                }
            }
        }
        catch { }
    }
    //* return result.
    return matchResult;
}

This works normally, but only in some cases; a lot of titles which should match do not match... I think I need some kind of formula to play with weights etc., but I can't think of one. Ideas? Suggestions? Algorithms? By the way, I already know this topic (my colleague already posted it but we cannot come up with a proper solution for this problem): Approximate string matching algorithms
ANSWER:
Your problem here may be distinguishing between noise words and useful data:

Rolling_Stones.Best_of_2003.Wild_Horses.mp3
Super.Quality.Wild_Horses.mp3
Tori_Amos.Wild_Horses.mp3

You may need to produce a dictionary of noise words to ignore. That seems clunky, but I'm not sure there's an algorithm that can distinguish between band/album names and noise. | [
"c#",
"algorithm",
"string",
"fuzzy-search"
] | 22 | 6 | 23,884 | 5 | 0 | 2008-09-10T05:37:30.120000 | 2008-09-10T06:59:10.390000 |
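The noise-word filtering the answer proposes composes naturally with the asker's per-keyword Levenshtein scoring. Here is a minimal sketch of that combination in Python; the stop-word set, the year regex, and all function names are illustrative assumptions, not taken from the original code:

```python
import re

# assumed noise list and year pattern -- tune for your data
STOP_WORDS = {"super", "quality", "mp3", "http", "www", "com"}
YEAR = re.compile(r"^(19|20)\d{2}$")

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    # same 0..1 normalisation as the asker's CalculateSimilarity
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def keywords(title):
    # split on non-word characters, drop noise words and bare years
    tokens = re.split(r"[\W_]+", title.lower())
    return sorted({t for t in tokens if t and t not in STOP_WORDS and not YEAR.match(t)})

def title_score(source, target):
    # best-match each source keyword against the target keywords, then average
    src, tgt = keywords(source), keywords(target)
    if not src or not tgt:
        return 0.0
    best = [max(similarity(s, t) for t in tgt) for s in src]
    return sum(best) / max(len(src), len(tgt))
```

With the noise filtered out first, a call like `title_score("Wild Horses", "Super.Quality.Wild_Horses.2007.mp3")` compares only the real keywords, which is exactly what the answer's noise-word dictionary buys you.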
53,482 | 53,567 | How do I change the password of the root user in MySQL? | I have long since forgotten the password for the root user on one of my boxes. Is there a way I can change it without having to log in to the instance, or will I have to reinstall? | Step 1 Stop database: shell> /etc/init.d/mysql stop Step 2 Restart database without password autentification without connection to the network Access to database is only possible through it's sock file '/var/lib/mysql/mysql.sock'. shell> mysqld --user=mysql --pid-file=/var/lib/mysql/mysqld.pid \ --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \ --skip-grant-tables --skip-networking & Step 3 Connect to the database and change password: shell> mysql --database mysql --socket=/var/lib/mysql/mysql.sock If you want to, show all users: mysql> select User, password from user; Set new password: mysql> update user set password=password('NEW PASS') WHERE User='USERNAME'; Leave database connection: mysql> exit Step 4 Restart database server "normally". shell> kill `cat /var/lib/mysql/mysqld.pid` shell> /etc/init.d/mysql start | How do I change the password of the root user in MySQL? I have long since forgotten the password for the root user on one of my boxes. Is there a way I can change it without having to log in to the instance, or will I have to reinstall? | TITLE:
How do I change the password of the root user in MySQL?
QUESTION:
I have long since forgotten the password for the root user on one of my boxes. Is there a way I can change it without having to log in to the instance, or will I have to reinstall?
ANSWER:
Step 1. Stop the database:

shell> /etc/init.d/mysql stop

Step 2. Restart the database without password authentication and without networking. Access to the database is then only possible through its sock file '/var/lib/mysql/mysql.sock'.

shell> mysqld --user=mysql --pid-file=/var/lib/mysql/mysqld.pid \
 --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
 --skip-grant-tables --skip-networking &

Step 3. Connect to the database and change the password:

shell> mysql --database mysql --socket=/var/lib/mysql/mysql.sock

If you want to, show all users:

mysql> select User, password from user;

Set the new password:

mysql> update user set password=password('NEW PASS') WHERE User='USERNAME';

Leave the database connection:

mysql> exit

Step 4. Restart the database server "normally":

shell> kill `cat /var/lib/mysql/mysqld.pid`
shell> /etc/init.d/mysql start | [
"mysql"
] | 3 | 4 | 2,043 | 3 | 0 | 2008-09-10T05:41:34.790000 | 2008-09-10T07:43:12.640000 |
53,491 | 53,512 | How do I enable external access to MySQL Server? | How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success. | You probably have to edit the configuration file (usually my.cnf) to listen in the external interface instead of on localhost only. Change the bind-address parameter to your machine's IP address. If this is an old MySQL installation, you should comment out the skip-networking parameter. Afterwards, restart MySQL and you'll be set | How do I enable external access to MySQL Server? How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success. | TITLE:
How do I enable external access to MySQL Server?
QUESTION:
How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried grant all privileges on *.* to root@'%' identified by '*****' with grant option; And restarted MySQL Server with no success.
ANSWER:
You probably have to edit the configuration file (usually my.cnf) to listen on the external interface instead of on localhost only. Change the bind-address parameter to your machine's IP address. If this is an old MySQL installation, you should comment out the skip-networking parameter. Afterwards, restart MySQL and you'll be set. | [
"mysql"
] | 9 | 22 | 27,528 | 2 | 0 | 2008-09-10T05:52:21.627000 | 2008-09-10T06:19:52.030000 |
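A sketch of what those two configuration changes typically look like; the file location and the IP address are illustrative and vary between distributions and setups:

```
# my.cnf — often /etc/my.cnf or /etc/mysql/my.cnf, depending on the distribution
[mysqld]
bind-address = 192.168.0.10   # this machine's LAN IP instead of 127.0.0.1
# skip-networking             # on old installations, comment this out
```

After restarting MySQL, the GRANT from the question (root@'%') controls which remote users are actually allowed to connect.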
53,501 | 53,526 | Is it possible to determine which process starts my .Net application? | I am developing console application in.Net and I want to change a behavior a little based on information that application was started from cmd.exe or from explorer.exe. Is it possible? | Process this_process = Process.GetCurrentProcess(); int parent_pid = 0; using (ManagementObject MgmtObj = new ManagementObject("win32_process.handle='" + this_process.Id.ToString() + "'")) { MgmtObj.Get(); parent_pid = Convert.ToInt32(MgmtObj["ParentProcessId"]); } string parent_process_name = Process.GetProcessById(parent_pid).ProcessName; | Is it possible to determine which process starts my .Net application? I am developing console application in.Net and I want to change a behavior a little based on information that application was started from cmd.exe or from explorer.exe. Is it possible? | TITLE:
Is it possible to determine which process starts my .Net application?
QUESTION:
I am developing a console application in .NET and I want to change its behavior a little based on whether the application was started from cmd.exe or from explorer.exe. Is it possible?
ANSWER:
// requires a reference to System.Management
Process this_process = Process.GetCurrentProcess();
int parent_pid = 0;
using (ManagementObject MgmtObj = new ManagementObject("win32_process.handle='" + this_process.Id.ToString() + "'"))
{
    MgmtObj.Get();
    parent_pid = Convert.ToInt32(MgmtObj["ParentProcessId"]);
}
string parent_process_name = Process.GetProcessById(parent_pid).ProcessName; | [
".net",
"windows",
"process-management"
] | 8 | 9 | 354 | 3 | 0 | 2008-09-10T06:04:13.237000 | 2008-09-10T06:32:40.160000 |
53,513 | 53,522 | How do I check if a list is empty? | For example, if passed the following: a = [] How do I check to see if a is empty? | if not a: print("List is empty") Using the implicit booleanness of the empty list is quite Pythonic. | How do I check if a list is empty? For example, if passed the following: a = [] How do I check to see if a is empty? | TITLE:
How do I check if a list is empty?
QUESTION:
For example, if passed the following: a = [] How do I check to see if a is empty?
ANSWER:
if not a:
    print("List is empty")

Using the implicit booleanness of the empty list is quite Pythonic. | [
"python",
"list"
] | 3,226 | 7,148 | 4,861,827 | 27 | 0 | 2008-09-10T06:20:11 | 2008-09-10T06:28:05.147000 |
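To make the comparison concrete, here is the idiom from the answer next to the more explicit alternatives (a small illustrative sketch, not part of the original post):

```python
a = []

# Pythonic: empty sequences are falsy
if not a:
    print("List is empty")

# more explicit spellings that behave the same for a plain list
print(len(a) == 0)   # True
print(a == [])       # True

# caveat: `not a` is also true for None, while `a == []` is not
b = None
print(not b)         # True
print(b == [])       # False
```

The caveat is the usual reason to prefer `if not a:` only when you know `a` is a sequence; it treats `None`, `0`, and `""` the same way as `[]`.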
53,527 | 53,710 | How can you unit test a database in SQL Server? | What are some of the ways? What frameworks can you use? | Here's a project which does this: http://tsqlunit.sourceforge.net/ Also, Visual Studio Team System for DBA has built-in support for unit testing of Databases. | How can you unit test a database in SQL Server? What are some of the ways? What frameworks can you use? | TITLE:
How can you unit test a database in SQL Server?
QUESTION:
What are some of the ways? What frameworks can you use?
ANSWER:
Here's a project which does this: http://tsqlunit.sourceforge.net/ Also, Visual Studio Team System for DBA has built-in support for unit testing of Databases. | [
"sql-server",
"unit-testing"
] | 2 | 2 | 603 | 3 | 0 | 2008-09-10T06:33:55.443000 | 2008-09-10T09:44:24.410000 |
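Both frameworks in the answer build on the same core idea: run each test inside a transaction and roll it back, so tests cannot pollute the database. A minimal illustration of that pattern using Python's stdlib sqlite3 as a stand-in database (the schema and names are invented for the example):

```python
import sqlite3
import unittest

class TransactionalDbTest(unittest.TestCase):
    """Run every test inside a transaction and roll it back afterwards,
    so the database is left exactly as it was found."""

    @classmethod
    def setUpClass(cls):
        # autocommit connection; we open and close transactions ourselves
        cls.conn = sqlite3.connect(":memory:", isolation_level=None)
        cls.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def setUp(self):
        self.conn.execute("BEGIN")   # open this test's transaction

    def tearDown(self):
        self.conn.rollback()         # undo everything the test did

    def test_a_insert_is_visible_inside_the_test(self):
        self.conn.execute("INSERT INTO users (name) VALUES ('alice')")
        (count,) = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()
        self.assertEqual(count, 1)

    def test_b_rollback_left_no_rows_behind(self):
        (count,) = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()
        self.assertEqual(count, 0)
```

The second test passes only because the first test's insert was rolled back; the same setup/teardown shape is what the SQL Server tools automate.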
53,532 | 85,947 | Unit-testing servlets | I have a bunch of servlets running under the Tomcat servlet container. I would like to separate test code from production code, so I considered using a test framework. JUnit is nicely integrated into Eclipse, but I failed to make it run servlets using a running Tomcat server. Could you please recommend a unit testing framework that supports testing Tomcat servlets? Eclipse integration is nice but not necessary. | Check out ServletUnit, which is part of HttpUnit. In a nutshell, ServletUnit provides a library of mocks and utilities you can use in ordinary JUnit tests to mock out a servlet container and other servlet-related objects like request and response objects. The link above contains examples. | Unit-testing servlets I have a bunch of servlets running under the Tomcat servlet container. I would like to separate test code from production code, so I considered using a test framework. JUnit is nicely integrated into Eclipse, but I failed to make it run servlets using a running Tomcat server. Could you please recommend a unit testing framework that supports testing Tomcat servlets? Eclipse integration is nice but not necessary. | TITLE:
Unit-testing servlets
QUESTION:
I have a bunch of servlets running under the Tomcat servlet container. I would like to separate test code from production code, so I considered using a test framework. JUnit is nicely integrated into Eclipse, but I failed to make it run servlets using a running Tomcat server. Could you please recommend a unit testing framework that supports testing Tomcat servlets? Eclipse integration is nice but not necessary.
ANSWER:
Check out ServletUnit, which is part of HttpUnit. In a nutshell, ServletUnit provides a library of mocks and utilities you can use in ordinary JUnit tests to mock out a servlet container and other servlet-related objects like request and response objects. The link above contains examples. | [
"eclipse",
"unit-testing",
"tomcat",
"servlets",
"junit"
] | 14 | 9 | 14,752 | 6 | 0 | 2008-09-10T06:42:34.223000 | 2008-09-17T18:06:02.057000 |
53,538 | 53,540 | SQL Server Freetext match - how do I sort by relevance | Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found any equivalence in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability? | If you are using FREETEXTTABLE then it returns a column name Rank, so order by Rank should work. I don't know if other freetext search methods are also returning this value or not. You can have a try. | SQL Server Freetext match - how do I sort by relevance Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found any equivalence in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability? | TITLE:
SQL Server Freetext match - how do I sort by relevance
QUESTION:
Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found any equivalence in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability?
ANSWER:
If you are using FREETEXTTABLE, it returns a column named RANK, so ORDER BY RANK should work. I don't know whether other freetext search methods also return this value; you can give it a try. | [
"sql",
"sql-server",
"full-text-search",
"freetext"
] | 8 | 4 | 4,593 | 2 | 0 | 2008-09-10T06:56:15.860000 | 2008-09-10T07:01:57.857000 |
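For completeness, FREETEXTTABLE returns a rowset with [KEY] and [RANK] columns that you join back to the base table. A sketch of the ordering the asker describes (votes, then relevance, then creation date), with the table and column names invented for illustration:

```sql
SELECT p.Title, p.Votes, p.CreatedOn, ft.[RANK]
FROM dbo.Posts AS p
JOIN FREETEXTTABLE(dbo.Posts, Body, 'wild horses') AS ft
    ON p.PostId = ft.[KEY]
ORDER BY p.Votes DESC, ft.[RANK] DESC, p.CreatedOn DESC;
```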
53,545 | 53,776 | Get the App.Config of another Exe | I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more in my questions, I have the following app.config content with the exe: The question is how to how to get "myValue" out from the wrapper dll? thanks for your solution. Actually my initial concept was to avoid XML file reading method or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'll appreciate any help that uses the classes that are normally associated with accessing app.config properties. | After some testing, I found a way to do this. Add the App.Config file to the test project. Use "Add as a link" option. Use System.Configuration.ConfigurationManager.AppSettings["myKey"] to access the value. | Get the App.Config of another Exe I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more in my questions, I have the following app.config content with the exe: The question is how to how to get "myValue" out from the wrapper dll? thanks for your solution. Actually my initial concept was to avoid XML file reading method or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'll appreciate any help that uses the classes that are normally associated with accessing app.config properties. | TITLE:
Get the App.Config of another Exe
QUESTION:
I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little bit more specific in my question. I have the following app.config content with the exe: The question is how to get "myValue" out from the wrapper dll? Thanks for your solution. Actually my initial concept was to avoid XML file reading methods or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'll appreciate any help that uses the classes that are normally associated with accessing app.config properties.
ANSWER:
After some testing, I found a way to do this. Add the App.Config file to the test project using the "Add as a link" option. Then use System.Configuration.ConfigurationManager.AppSettings["myKey"] to access the value. | [
"c#",
".net",
"configuration-files",
"appsettings",
"system.configuration"
] | 23 | 6 | 27,150 | 6 | 0 | 2008-09-10T07:05:13.450000 | 2008-09-10T11:00:09.720000 |
53,555 | 53,560 | How would you go about for switching a site from Prototype to jQuery | I have written a site in Prototype but want to switch to jQuery. Any ideas on how best make the switch? | Personally, I like to take things in steps, so I would start by using both, like this: jQuery.noConflict();
// Put all your code in your document ready area jQuery(document).ready(function($){ // Do jQuery stuff using $ $("div").hide(); });
// Use Prototype with $(...), etc. $('someid').hide(); That way you don't have to convert all your old code at once, but can start using jquery on new stuff, and migrate your old Prototype code when ever it's convenient. I don't know the size of your project, so I can't say whether or not this applies to you, but Spolsky had a great article about "The big rewrite" and why it's such a bad idea in Things you should never do, Part 1. It's well worth a read! For more on using jquery with Prototype, see Using jQuery with other libraries in the jquery docs. | How would you go about for switching a site from Prototype to jQuery I have written a site in Prototype but want to switch to jQuery. Any ideas on how best make the switch? | TITLE:
How would you go about for switching a site from Prototype to jQuery
QUESTION:
I have written a site in Prototype but want to switch to jQuery. Any ideas on how best make the switch?
ANSWER:
Personally, I like to take things in steps, so I would start by using both, like this:

jQuery.noConflict();

// Put all your code in your document ready area
jQuery(document).ready(function($){
    // Do jQuery stuff using $
    $("div").hide();
});

// Use Prototype with $(...), etc.
$('someid').hide();

That way you don't have to convert all your old code at once, but can start using jQuery on new stuff, and migrate your old Prototype code whenever it's convenient. I don't know the size of your project, so I can't say whether or not this applies to you, but Spolsky had a great article about "the big rewrite" and why it's such a bad idea in Things You Should Never Do, Part I. It's well worth a read! For more on using jQuery with Prototype, see Using jQuery with Other Libraries in the jQuery docs. | [
"javascript",
"jquery",
"prototypejs"
] | 8 | 11 | 1,027 | 1 | 0 | 2008-09-10T07:16:30.770000 | 2008-09-10T07:25:44.663000 |
53,562 | 53,992 | Enabling Hibernate second-level cache with JPA on JBoss 4.2 | What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use. | I believe you need to add the cache annotations to tell hibernate how to use the second-level cache (read-only, read-write, etc). This was the case in my app (using spring / traditional hibernate and ehcache, so your mileage may vary). Once the caches were indicated, I started seeing messages that they were in use from hibernate. | Enabling Hibernate second-level cache with JPA on JBoss 4.2 What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use. | TITLE:
Enabling Hibernate second-level cache with JPA on JBoss 4.2
QUESTION:
What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use.
ANSWER:
I believe you need to add the cache annotations to tell hibernate how to use the second-level cache (read-only, read-write, etc). This was the case in my app (using spring / traditional hibernate and ehcache, so your mileage may vary). Once the caches were indicated, I started seeing messages that they were in use from hibernate. | [
"java",
"hibernate",
"jpa",
"caching"
] | 9 | 3 | 10,774 | 2 | 0 | 2008-09-10T07:32:38.657000 | 2008-09-10T13:10:03.947000 |
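For the asker's two follow-up questions, a sketch of what the entity-level annotation usually looks like; the entity name is invented, and the appropriate concurrency strategy depends on your access patterns and cache provider:

```java
import javax.persistence.Entity;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Customer { /* ... */ }
```

Unless you configure a region name explicitly, an entity's cache region defaults to its fully qualified class name, so something like Statistics.getSecondLevelCacheStatistics("com.example.Customer") is the kind of region name to pass; it returns null for a region that does not exist, which would explain the null the asker is seeing.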
53,569 | 53,573 | How to get the changes on a branch in Git | What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is: git log $(git merge-base HEAD branch)..branch The documentation for git-diff indicates that git diff A...B is equivalent to git diff $(git-merge-base A B) B. On the other hand, the documentation for git-rev-parse indicates that r1...r2 is defined as r1 r2 --not $(git merge-base --all r1 r2). Why are these different? Note that git diff HEAD...branch gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this: x---y---z---branch / ---a---b---c---d---e---HEAD I would like to get a log containing commits x, y, z. git diff HEAD...branch gives these commits however, git log HEAD...branch gives x, y, z, c, d, e. | In the context of a revision list, A...B is how git-rev-parse defines it. git-log takes a revision list. git-diff does not take a list of revisions - it takes one or two revisions, and has defined the A...B syntax to mean how it's defined in the git-diff manpage. If git-diff did not explicitly define A...B, then that syntax would be invalid. Note that the git-rev-parse manpage describes A...B in the "Specifying Ranges" section, and everything in that section is only valid in situations where a revision range is valid (i.e. when a revision list is desired). To get a log containing just x, y, and z, try git log HEAD..branch (two dots, not three). This is identical to git log branch --not HEAD, and means all commits on branch that aren't on HEAD. | How to get the changes on a branch in Git What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is: git log $(git merge-base HEAD branch)..branch The documentation for git-diff indicates that git diff A...B is equivalent to git diff $(git-merge-base A B) B. 
On the other hand, the documentation for git-rev-parse indicates that r1...r2 is defined as r1 r2 --not $(git merge-base --all r1 r2). Why are these different? Note that git diff HEAD...branch gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this: x---y---z---branch / ---a---b---c---d---e---HEAD I would like to get a log containing commits x, y, z. git diff HEAD...branch gives these commits however, git log HEAD...branch gives x, y, z, c, d, e. | TITLE:
How to get the changes on a branch in Git
QUESTION:
What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is: git log $(git merge-base HEAD branch)..branch The documentation for git-diff indicates that git diff A...B is equivalent to git diff $(git-merge-base A B) B. On the other hand, the documentation for git-rev-parse indicates that r1...r2 is defined as r1 r2 --not $(git merge-base --all r1 r2). Why are these different? Note that git diff HEAD...branch gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this: x---y---z---branch / ---a---b---c---d---e---HEAD I would like to get a log containing commits x, y, z. git diff HEAD...branch gives these commits however, git log HEAD...branch gives x, y, z, c, d, e.
ANSWER:
In the context of a revision list, A...B is how git-rev-parse defines it. git-log takes a revision list. git-diff does not take a list of revisions - it takes one or two revisions, and has defined the A...B syntax to mean how it's defined in the git-diff manpage. If git-diff did not explicitly define A...B, then that syntax would be invalid. Note that the git-rev-parse manpage describes A...B in the "Specifying Ranges" section, and everything in that section is only valid in situations where a revision range is valid (i.e. when a revision list is desired). To get a log containing just x, y, and z, try git log HEAD..branch (two dots, not three). This is identical to git log branch --not HEAD, and means all commits on branch that aren't on HEAD. | [
"git",
"branch"
] | 295 | 205 | 288,369 | 10 | 0 | 2008-09-10T07:44:53.870000 | 2008-09-10T07:50:41.173000 |
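The two-dot vs three-dot distinction in the answer above can be checked end-to-end in a throwaway repository. The script below is a minimal sketch mirroring the question's diagram (branch names and commit contents are made up, and the commit lists are shortened to a/b/c and x/y for brevity):

```shell
#!/bin/sh
# Toy repo reproducing the shape of the diagram in the question:
#       x---y---branch
#      /
# a---b---c---HEAD (main)
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "You"
git checkout -qb main
for c in a b; do echo "$c" > f; git add f; git commit -qm "$c"; done
git checkout -qb branch
for c in x y; do echo "$c" > f; git add f; git commit -qm "$c"; done
git checkout -q main
echo c > f; git add f; git commit -qm c

# Two dots: commits reachable from branch but not from HEAD -> x and y (2 commits)
git log --oneline HEAD..branch

# Three dots: symmetric difference -> x, y and also c (3 commits)
git log --oneline HEAD...branch
```

So `git log HEAD..branch` gives exactly the "commits since the branch point" the question asks for, while the three-dot form also pulls in the commits made on HEAD since then.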
53,583 | 53,608 | NMBLookup OS X returning inconsistent results | We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets? | This works fairly well in our network. The point is to use smbclient -L on each of the entries returned by nmblookup: nmblookup -M -- - | grep -v querying | while read sw do echo $sw | awk -F' ' '{print $1}' | xargs smbclient -L done Edit: @paul - now I see what you mean - a Vista box has just joined our network and the Finder shows it but not nmblookup, but smbclient shows it in the "Server" section. smbclient has a "Server" section where it lists the machines found on the network. The command line I use is: smbclient -L 192.168.0.4 //the IP as returned by nmblookup of the master browser
cristi:~ diciu$ smbclient -L 192.168.0.4 Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Sharename Type Comment --------- ---- ------- internal Disk some share [..] Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5]
Server Comment --------- ------- MMM Vista box not showing up in nmblookup | NMBLookup OS X returning inconsistent results We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets? | TITLE:
NMBLookup OS X returning inconsistent results
QUESTION:
We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets?
ANSWER:
This works fairly well in our network. The point is to use smbclient -L on each of the entries returned by nmblookup: nmblookup -M -- - | grep -v querying | while read sw do echo $sw | awk -F' ' '{print $1}' | xargs smbclient -L done Edit: @paul - now I see what you mean - a Vista box has just joined our network and the Finder shows it but not nmblookup, but smbclient shows it in the "Server" section. smbclient has a "Server" section where it lists the machines found on the network. The command line I use is: smbclient -L 192.168.0.4 //the IP as returned by nmblookup of the master browser
cristi:~ diciu$ smbclient -L 192.168.0.4 Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Sharename Type Comment --------- ---- ------- internal Disk some share [..] Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5]
Server Comment --------- ------- MMM Vista box not showing up in nmblookup | [
"macos",
"finder",
"smb"
] | 2 | 2 | 2,150 | 1 | 0 | 2008-09-10T07:58:07.223000 | 2008-09-10T08:17:49.173000 |
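The pipeline in the answer is easier to see on canned input. The sketch below replays it against hypothetical nmblookup output (the IP addresses and the `__MSBROWSE__` lines are made up) and substitutes `echo smbclient -L` for the real `smbclient -L`, so it runs without a live SMB network:

```shell
#!/bin/sh
# Stand-in for `nmblookup -M -- -`: hypothetical master-browser output,
# one "IP NAME<01>" line per subnet after a "querying ..." status line.
fake_nmblookup() {
cat <<'EOF'
querying __MSBROWSE__ on 192.168.0.255
192.168.0.4 __MSBROWSE__<01>
192.168.0.7 __MSBROWSE__<01>
EOF
}

# The answer's pipeline: drop the status line, take the first field (the IP),
# and hand it to smbclient -L (replaced by `echo smbclient -L` for this dry run).
fake_nmblookup | grep -v querying | while read sw
do
    echo "$sw" | awk -F' ' '{print $1}' | xargs echo smbclient -L
done
# prints:
# smbclient -L 192.168.0.4
# smbclient -L 192.168.0.7
```

On a real network you would replace `fake_nmblookup` with `nmblookup -M -- -` and drop the `echo`, so each discovered master browser is queried for its Server list.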
53,597 | 53,641 | How do I compose existing Linq Expressions | I want to compose the results of two Linq Expressions. They exist in the form Expression<Func<T, bool>>. So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; Expression<Func<User, bool>> composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done? | Here is an example: var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28};
Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28;
var invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());
var result = Expression.Lambda<Func<User, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters);
Console.WriteLine(result.Compile().Invoke(user1)); // true Console.WriteLine(result.Compile().Invoke(user2)); // false You can reuse this code via extension methods: class User { public string Name { get; set; } public int Age { get; set; } }
public static class PredicateExtensions { public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expression1, Expression<Func<T, bool>> expression2) { InvocationExpression invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());
return Expression.Lambda<Func<T, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); } }
class Program { static void Main(string[] args) { var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28};
Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28;
var result = expression1.And(expression2);
Console.WriteLine(result.Compile().Invoke(user1)); Console.WriteLine(result.Compile().Invoke(user2)); } } | How do I compose existing Linq Expressions I want to compose the results of two Linq Expressions. They exist in the form Expression > So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression > expression1 = t => t.Name == "steve"; Expression > expression2 = t => t.Age == 28; Expression > composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done? | TITLE:
How do I compose existing Linq Expressions
QUESTION:
I want to compose the results of two Linq Expressions. They exist in the form Expression<Func<T, bool>>. So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; Expression<Func<User, bool>> composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done?
ANSWER:
Here is an example: var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28};
Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28;
var invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());
var result = Expression.Lambda<Func<User, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters);
Console.WriteLine(result.Compile().Invoke(user1)); // true Console.WriteLine(result.Compile().Invoke(user2)); // false You can reuse this code via extension methods: class User { public string Name { get; set; } public int Age { get; set; } }
public static class PredicateExtensions { public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expression1, Expression<Func<T, bool>> expression2) { InvocationExpression invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>());
return Expression.Lambda<Func<T, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); } }
class Program { static void Main(string[] args) { var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28};
Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28;
var result = expression1.And(expression2);
Console.WriteLine(result.Compile().Invoke(user1)); Console.WriteLine(result.Compile().Invoke(user2)); } } | [
"c#",
"linq"
] | 16 | 20 | 11,285 | 1 | 0 | 2008-09-10T08:08:55.340000 | 2008-09-10T08:38:22.883000 |