question_id | answer_id | title | question_body | answer_body | question_text | combined_text | tags | question_score | answer_score | view_count | answer_count | favorite_count | question_creation_date | answer_creation_date |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17,715 | 17,730 | When is a file just a file? | So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to. My question is this: Should there be a different table for each "type" of file? Also, should the files be stored in context-related locations on the server, or all together? Some examples: user profile photos, job application CVs, related documents on CMS pages, etc. | From your example, there is an argument for two tables, as you have files that can be associated with two different things. CVs, photos are associated with a user. attachments are associated with a CMS page. If you put these in one table, (and you want to allow users to have more than one photo or cv) then you need two link-tables to associate files->users and files->cms_pages. Arguably this implies a HABTM relationship, which is not correct and allows for inconsistent data. The two table approach is slightly cleaner and only allows files to be associated with the correct type of entity with a simple belongsTo relationship. But I don't think there is any "right" answer to this question, unless you need to store different types of metadata for different filetypes. Also be sure to store, or be able to calculate, the mimetype for each file so it can be served correctly back to the browser, with the correct HTTP headers. | When is a file just a file? So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to. My question is this: Should there be a different table for each "type" of file? 
Also, should the files be stored in context-related locations on the server, or all together? Some examples: user profile photos, job application CVs, related documents on CMS pages, etc. | TITLE:
When is a file just a file?
QUESTION:
So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to. My question is this: Should there be a different table for each "type" of file? Also, should the files be stored in context-related locations on the server, or all together? Some examples: user profile photos, job application CVs, related documents on CMS pages, etc.
ANSWER:
From your example, there is an argument for two tables, as you have files that can be associated with two different things: CVs and photos are associated with a user, while attachments are associated with a CMS page. If you put these in one table (and you want to allow users to have more than one photo or CV), then you need two link tables to associate files->users and files->cms_pages. Arguably this implies a HABTM relationship, which is not correct and allows for inconsistent data. The two-table approach is slightly cleaner and only allows files to be associated with the correct type of entity through a simple belongsTo relationship. But I don't think there is any "right" answer to this question, unless you need to store different types of metadata for different filetypes. Also be sure to store, or be able to calculate, the mimetype for each file so it can be served back to the browser with the correct HTTP headers. | [
"database-design"
] | 5 | 3 | 304 | 4 | 0 | 2008-08-20T10:38:45.653000 | 2008-08-20T10:58:09.057000 |
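The answer's two-table/belongsTo argument can be sketched as a schema. Below is a minimal sketch using SQLite; the table and column names are invented for illustration, and the mimetype column reflects the answer's closing advice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cms_pages (id INTEGER PRIMARY KEY, title TEXT);

-- One file table per owning entity: each file belongsTo exactly one owner,
-- so inconsistent links are impossible by construction.
CREATE TABLE user_files (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    stored_name TEXT NOT NULL,   -- filename as stored on the server
    mime_type TEXT NOT NULL      -- kept so the file can be served back correctly
);
CREATE TABLE page_attachments (
    id INTEGER PRIMARY KEY,
    page_id INTEGER NOT NULL REFERENCES cms_pages(id),
    stored_name TEXT NOT NULL,
    mime_type TEXT NOT NULL
);
""")

conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
conn.execute(
    "INSERT INTO user_files (user_id, stored_name, mime_type) "
    "VALUES (1, 'cv-1.pdf', 'application/pdf')"
)
rows = conn.execute(
    "SELECT stored_name, mime_type FROM user_files WHERE user_id = 1"
).fetchall()
print(rows)  # → [('cv-1.pdf', 'application/pdf')]
```

The single-table alternative would replace the two owner-specific tables with one files table plus two link tables, which is exactly the HABTM shape the answer warns against.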
17,721 | 22,796 | Experience with Hadoop? | Have any of you tried Hadoop? Can it be used without the distributed filesystem that goes with it, in a Share-nothing architecture? Would that make sense? I'm also interested into any performance results you have... | Yes, you can use Hadoop on a local filesystem by using file URIs instead of hdfs URIs in various places. I think a lot of the examples that come with Hadoop do this. This is probably fine if you just want to learn how Hadoop works and the basic map-reduce paradigm, but you will need multiple machines and a distributed filesystem to get the real benefits of the scalability inherent in the architecture. | Experience with Hadoop? Have any of you tried Hadoop? Can it be used without the distributed filesystem that goes with it, in a Share-nothing architecture? Would that make sense? I'm also interested into any performance results you have... | TITLE:
Experience with Hadoop?
QUESTION:
Have any of you tried Hadoop? Can it be used without the distributed filesystem that goes with it, in a shared-nothing architecture? Would that make sense? I'm also interested in any performance results you have...
ANSWER:
Yes, you can use Hadoop on a local filesystem by using file URIs instead of hdfs URIs in various places. I think a lot of the examples that come with Hadoop do this. This is probably fine if you just want to learn how Hadoop works and the basic map-reduce paradigm, but you will need multiple machines and a distributed filesystem to get the real benefits of the scalability inherent in the architecture. | [
"performance",
"distributed",
"hadoop",
"shared-nothing"
] | 8 | 8 | 3,997 | 9 | 0 | 2008-08-20T10:43:13.227000 | 2008-08-22T16:29:08.793000 |
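The "file URIs instead of hdfs URIs" approach the answer describes can also be set globally. A sketch of the relevant core-site.xml fragment for Hadoop of that era follows; the property name `fs.default.name` is from pre-1.x releases, so treat the exact key as version-dependent:

```xml
<!-- core-site.xml (hadoop-site.xml in older releases):
     make the local filesystem the default instead of HDFS -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
</configuration>
```

Individual jobs can equally be given explicit `file:///path` input and output URIs on the command line without changing the default.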
17,725 | 17,745 | Large, Complex Objects as a Web Service Result | Hello again ladies and gents! OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion. I've come to a part in my project where I need to get my thinking cap on. Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application. Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other. In this case, that is something that I would really, really, really! like to avoid! So, it got me thinking, how else could we do this? My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the.NET XML Serializer. What are your thoughts on this? | The.Net XML (de)serialisation is pretty nicely implemented. At first thought, I don't think this is a bad idea at all. If the two applications import the same C# class(es) definition(s), then this is a relatively nice way of getting copy-constructor behaviour for free. If the class structure changes, then everything will work when both sides get the new class definition, without needing to make any additional changes on the web-service consumption/construction side. There's a slight overhead in marshalling and demarshalling the XML, but that is probably dwarved by the overhead of the remote web service call..Net XML serialisation is well understood by most programmers and should produce an easy to maintain solution. | Large, Complex Objects as a Web Service Result Hello again ladies and gents! 
OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion. I've come to a part in my project where I need to get my thinking cap on. Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application. Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other. In this case, that is something that I would really, really, really! like to avoid! So, it got me thinking, how else could we do this? My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the.NET XML Serializer. What are your thoughts on this? | TITLE:
Large, Complex Objects as a Web Service Result
QUESTION:
Hello again ladies and gents! OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion, I've come to a part in my project where I need to get my thinking cap on. Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application. Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other. In this case, that is something that I would really, really, really! like to avoid! So, it got me thinking, how else could we do this? My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the .NET XML Serializer. What are your thoughts on this?
ANSWER:
The .Net XML (de)serialisation is pretty nicely implemented. At first thought, I don't think this is a bad idea at all. If the two applications import the same C# class(es) definition(s), then this is a relatively nice way of getting copy-constructor behaviour for free. If the class structure changes, then everything will work when both sides get the new class definition, without needing to make any additional changes on the web-service consumption/construction side. There's a slight overhead in marshalling and demarshalling the XML, but that is probably dwarfed by the overhead of the remote web service call. .Net XML serialisation is well understood by most programmers and should produce an easy-to-maintain solution. | [
"c#",
"asp.net",
"xml",
"web-services",
"serialization"
] | 15 | 5 | 3,203 | 4 | 0 | 2008-08-20T10:46:20.723000 | 2008-08-20T11:12:00.193000 |
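The round trip being proposed (serialize to an XML string at the service boundary, deserialize into the same class at the client) is .NET's XmlSerializer in the thread; as a language-neutral sketch of the same idea, here it is with Python's standard library, with the class and element names invented for the example:

```python
import xml.etree.ElementTree as ET

class Order:
    def __init__(self, order_id, customer):
        self.order_id = order_id
        self.customer = customer

def to_xml(order):
    # "Service" side: flatten the object into an XML string.
    root = ET.Element("Order")
    ET.SubElement(root, "OrderId").text = str(order.order_id)
    ET.SubElement(root, "Customer").text = order.customer
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    # "Client" side: rebuild the object from the same class definition.
    root = ET.fromstring(text)
    return Order(int(root.findtext("OrderId")), root.findtext("Customer"))

wire = to_xml(Order(42, "ACME"))
copy = from_xml(wire)
print(copy.order_id, copy.customer)  # → 42 ACME
```

As the answer notes, the scheme only stays painless while both endpoints share the same class definition; once the schema needs to evolve independently on each side, versioning becomes the hard part.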
17,735 | 17,738 | Repository organisation | When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them. So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release. Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion. | See these two questions on SO for more information: What does branch, tag and trunk really mean? Subversion question | Repository organisation When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them. So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release. Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion. | TITLE:
Repository organisation
QUESTION:
When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them. So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release. Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion.
ANSWER:
See these two questions on SO for more information: What does branch, tag and trunk really mean? Subversion question | [
"versioning"
] | 5 | 5 | 408 | 4 | 0 | 2008-08-20T11:03:55.377000 | 2008-08-20T11:06:07.540000 |
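As a reference point while following those links: the layout documented in the Subversion book is close to what the asker describes, but inverted in one respect: /trunk carries the main (bleeding-edge) development line, stable releases are snapshotted under /tags, and /branches holds release or feature lines. A sketch, with directory names illustrative:

```
/trunk            # main development line
/branches
    /release-1.0  # stabilisation line for a release
    /some-feature # longer-lived feature work, merged back to trunk
/tags
    /1.0.0        # immutable snapshot of a released revision
```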
17,772 | 17,807 | Anyone know a quick way to get to custom attributes on an enum value? | This is probably best shown with an example. I have an enum with attributes: public enum MyEnum {
[CustomInfo("This is a custom attrib")] None = 0,
[CustomInfo("This is another attrib")] ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)] ValueB, } I want to get to those attributes from an instance: public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string // the .ToString() on enums is very slow FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ). FirstOrDefault() //Linq method to get first or null as CustomInfoAttribute; //use as operator to convert } As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it. Does anyone have a better way? | This is probably the easiest way. A quicker way would be to statically emit the IL code using DynamicMethod and ILGenerator. I've only used this for GetPropertyInfo, but I can't see why you couldn't emit CustomAttributeInfo as well. For example, code to emit a getter from a property: public delegate object FastPropertyGetHandler(object target);
private static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type) { if (type.IsValueType) { ilGenerator.Emit(OpCodes.Box, type); } }
public static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo) { // generates a dynamic method to generate a FastPropertyGetHandler delegate DynamicMethod dynamicMethod = new DynamicMethod( string.Empty, typeof (object), new Type[] { typeof (object) }, propInfo.DeclaringType.Module);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator(); // loads the object into the stack ilGenerator.Emit(OpCodes.Ldarg_0); // calls the getter ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null); // creates code for handling the return value EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType); // returns the value to the caller ilGenerator.Emit(OpCodes.Ret); // converts the DynamicMethod to a FastPropertyGetHandler delegate // to get the property FastPropertyGetHandler getter = (FastPropertyGetHandler) dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler));
return getter; } | Anyone know a quick way to get to custom attributes on an enum value? This is probably best shown with an example. I have an enum with attributes: public enum MyEnum {
[CustomInfo("This is a custom attrib")] None = 0,
[CustomInfo("This is another attrib")] ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)] ValueB, } I want to get to those attributes from an instance: public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string // the .ToString() on enums is very slow FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ). FirstOrDefault() //Linq method to get first or null as CustomInfoAttribute; //use as operator to convert } As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it. Does anyone have a better way? | TITLE:
Anyone know a quick way to get to custom attributes on an enum value?
QUESTION:
This is probably best shown with an example. I have an enum with attributes: public enum MyEnum {
[CustomInfo("This is a custom attrib")] None = 0,
[CustomInfo("This is another attrib")] ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)] ValueB, } I want to get to those attributes from an instance: public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string // the .ToString() on enums is very slow FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ). FirstOrDefault() //Linq method to get first or null as CustomInfoAttribute; //use as operator to convert } As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it. Does anyone have a better way?
ANSWER:
This is probably the easiest way. A quicker way would be to statically emit the IL code using DynamicMethod and ILGenerator. I've only used this for GetPropertyInfo, but I can't see why you couldn't emit CustomAttributeInfo as well. For example, code to emit a getter from a property: public delegate object FastPropertyGetHandler(object target);
private static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type) { if (type.IsValueType) { ilGenerator.Emit(OpCodes.Box, type); } }
public static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo) { // generates a dynamic method to generate a FastPropertyGetHandler delegate DynamicMethod dynamicMethod = new DynamicMethod( string.Empty, typeof (object), new Type[] { typeof (object) }, propInfo.DeclaringType.Module);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator(); // loads the object into the stack ilGenerator.Emit(OpCodes.Ldarg_0); // calls the getter ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null); // creates code for handling the return value EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType); // returns the value to the caller ilGenerator.Emit(OpCodes.Ret); // converts the DynamicMethod to a FastPropertyGetHandler delegate // to get the property FastPropertyGetHandler getter = (FastPropertyGetHandler) dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler));
return getter; } | [
"c#",
".net",
"reflection",
"enums",
"attributes"
] | 21 | 11 | 11,233 | 2 | 0 | 2008-08-20T11:34:06.753000 | 2008-08-20T12:01:50.027000 |
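The cost the asker is worried about (reflection plus ToString on every call) is usually addressed by paying the lookup cost once rather than per call. That idea is easy to show outside .NET; in Python, for instance, enum members can carry the metadata directly so there is no per-call lookup at all. Names here are invented for the sketch:

```python
from enum import Enum

class MyEnum(Enum):
    # The "attribute" data lives on the member itself, so no reflection
    # or name-based lookup is needed when reading it.
    NONE = (0, "This is a custom attrib", False)
    VALUE_A = (1, "This is another attrib", False)
    VALUE_B = (2, "This has an extra flag", True)

    def __init__(self, code, info, allow_something):
        self.code = code
        self.info = info
        self.allow_something = allow_something

print(MyEnum.VALUE_B.info)             # → This has an extra flag
print(MyEnum.VALUE_B.allow_something)  # → True
```

The C# analogue, short of IL emission, would be to run the reflection once per enum value and cache the resulting CustomInfoAttribute instances in a dictionary keyed by the enum.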
17,785 | 17,809 | Default Internet connection on Dual LAN Workstation | I know this is not programming directly, but it's regarding a development workstation I'm setting up. I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. One of them is a 10.17.x.x LAN and the other is 10.16.x.x The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc (this network is basically only for internal stuff, though it does have internet access) so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x of course, and to only use the 10.16.x.x connection for things that are on that specific LAN. I've tried looking into the windows "route" command but it's fairly confusing and won't seem to let me delete routes tha tI believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access? | I'm no network expert but I have fiddled with the route command a number of times... route add 0.0.0.0 MASK 0.0.0.0 Will route all default traffic through the 10.17.x.x gateway, if you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding METRIC 1 for example to the end of the line above. You could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2. | Default Internet connection on Dual LAN Workstation I know this is not programming directly, but it's regarding a development workstation I'm setting up. I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. 
One of them is a 10.17.x.x LAN and the other is 10.16.x.x. The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc (this network is basically only for internal stuff, though it does have internet access), so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x of course), and to only use the 10.16.x.x connection for things that are on that specific LAN. I've tried looking into the Windows "route" command but it's fairly confusing and won't seem to let me delete routes that I believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access? | I'm no network expert, but I have fiddled with the route command a number of times... route add 0.0.0.0 MASK 0.0.0.0 will route all default traffic through the 10.17.x.x gateway. If you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding, for example, METRIC 1 to the end of the line above. You could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2. | Default Internet connection on Dual LAN Workstation I know this is not programming directly, but it's regarding a development workstation I'm setting up. I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time.
Default Internet connection on Dual LAN Workstation
QUESTION:
I know this is not programming directly, but it's regarding a development workstation I'm setting up. I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. One of them is a 10.17.x.x LAN and the other is 10.16.x.x. The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc (this network is basically only for internal stuff, though it does have internet access), so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x of course), and to only use the 10.16.x.x connection for things that are on that specific LAN. I've tried looking into the Windows "route" command but it's fairly confusing and won't seem to let me delete routes that I believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access?
ANSWER:
I'm no network expert, but I have fiddled with the route command a number of times... route add 0.0.0.0 MASK 0.0.0.0 will route all default traffic through the 10.17.x.x gateway. If you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding, for example, METRIC 1 to the end of the line above. You could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2. | [
"networking",
"windows-server-2003"
] | 0 | 2 | 2,260 | 2 | 0 | 2008-08-20T11:43:21.833000 | 2008-08-20T12:03:22.770000 |
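Written out in full (the answer's command as quoted omits the gateway argument, which `route add` requires), the Windows command would look like the sketch below; `10.17.0.1` is a placeholder for whatever the actual 10.17.x.x gateway address is:

```
:: Send default (0.0.0.0/0) traffic via the 10.17.x.x gateway, with a low
:: metric so it wins over any default route on the 10.16.x.x interface.
:: 10.17.0.1 is a placeholder gateway address -- substitute your own.
route add 0.0.0.0 MASK 0.0.0.0 10.17.0.1 METRIC 1

:: Inspect the resulting routing table:
route print
```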
17,786 | 17,793 | Warning C4341 - 'XX': signed value is out of range for enum constant | When compiling my C++ .Net application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID... I can't seem to remove these warnings whatever I do. When I double-click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings! | This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use #pragma warning( push ) #pragma warning( disable: 4341 )
// code affected by bug
#pragma warning( pop ) | Warning C4341 - 'XX': signed value is out of range for enum constant When compiling my C++ .Net application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID... I can't seem to remove these warnings whatever I do. When I double-click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings! | TITLE:
Warning C4341 - 'XX': signed value is out of range for enum constant
QUESTION:
When compiling my C++ .Net application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID... I can't seem to remove these warnings whatever I do. When I double-click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings!
ANSWER:
This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use #pragma warning( push ) #pragma warning( disable: 4341 )
// code affected by bug
#pragma warning( pop ) | [
".net",
"c++",
"visual-c++"
] | 6 | 4 | 1,242 | 3 | 0 | 2008-08-20T11:43:49.617000 | 2008-08-20T11:51:54.950000 |
17,795 | 17,801 | Getting the Remote Name Address (not IP) | I wanted to show the user's Name Address (see www.ipchicken.com ), but the only thing I can find is the IP Address. I tried a reverse lookup, but it didn't work either: IPAddress ip = IPAddress.Parse(this.lblIp.Text); string hostName = Dns.GetHostByAddress(ip).HostName; this.lblHost.Text = hostName; But HostName is the same as the IP address. Who knows what I need to do? Thanks. Gab. | Edit of my previous answer. Try (in vb.net): Dim sTmp As String Dim ip As IPHostEntry
sTmp = MaskedTextBox1.Text Dim ipAddr As IPAddress = IPAddress.Parse(sTmp) ip = Dns.GetHostEntry(ipAddr) MaskedTextBox2.Text = ip.HostName Dns.Resolve appears to be obsolete in later versions of .Net. As stated here before, I believe the issue is caused by your IP address not having a fixed name or by it having multiple names. The example above works with Google addresses, but not with an address we use that has a couple of names associated with it. | Getting the Remote Name Address (not IP) I wanted to show the user's Name Address (see www.ipchicken.com ), but the only thing I can find is the IP Address. I tried a reverse lookup, but it didn't work either: IPAddress ip = IPAddress.Parse(this.lblIp.Text); string hostName = Dns.GetHostByAddress(ip).HostName; this.lblHost.Text = hostName; But HostName is the same as the IP address. Who knows what I need to do? Thanks. Gab. | TITLE:
Getting the Remote Name Address (not IP)
QUESTION:
I wanted to show the user's Name Address (see www.ipchicken.com ), but the only thing I can find is the IP Address. I tried a reverse lookup, but it didn't work either: IPAddress ip = IPAddress.Parse(this.lblIp.Text); string hostName = Dns.GetHostByAddress(ip).HostName; this.lblHost.Text = hostName; But HostName is the same as the IP address. Who knows what I need to do? Thanks. Gab.
ANSWER:
Edit of my previous answer. Try (in vb.net): Dim sTmp As String Dim ip As IPHostEntry
sTmp = MaskedTextBox1.Text Dim ipAddr As IPAddress = IPAddress.Parse(sTmp) ip = Dns.GetHostEntry(ipAddr) MaskedTextBox2.Text = ip.HostName Dns.Resolve appears to be obsolete in later versions of .Net. As stated here before, I believe the issue is caused by your IP address not having a fixed name or by it having multiple names. The example above works with Google addresses, but not with an address we use that has a couple of names associated with it. | [
".net",
"asp.net"
] | 5 | 3 | 1,336 | 6 | 0 | 2008-08-20T11:52:59.610000 | 2008-08-20T11:58:22.983000 |
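For comparison, the same reverse lookup in Python's standard library; as the answer suggests, when the address has no registered name the lookup fails (or effectively hands back the address), which this sketch handles explicitly:

```python
import socket

def host_name_for(ip):
    """Reverse-resolve an IP; fall back to the IP when no name is registered."""
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
        return name
    except OSError:
        # No PTR record (or no DNS available) -- mirror the behaviour the
        # asker saw, where the "name" is just the address again.
        return ip

# 203.0.113.x is a TEST-NET address with no reverse record,
# so this falls back to printing the IP itself.
print(host_name_for("203.0.113.5"))
```

The underlying cause is the same in both languages: reverse resolution only works when the address owner has published a PTR record, which many consumer addresses do not have.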
17,806 | 2,137,718 | Warning: Found conflicts between different versions of the same dependent assembly | I am currently developing a.NET application, which consists of 20 projects. Some of those projects are compiled using.NET 3.5, some others are still.NET 2.0 projects (so far no problem). The problem is that if I include an external component I always get the following warning: Found conflicts between different versions of the same dependent assembly. What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)? | This warning means that two projects reference the same assembly (e.g. System.Windows.Forms ) but the two projects require different versions. You have a few options: Recompile all projects to use the same versions (e.g. move all to.Net 3.5). This is the preferred option because all code is running with the versions of dependencies they were compiled with. Add a binding redirect. This will suppress the warning. However, your.Net 2.0 projects will (at runtime) be bound to the.Net 3.5 versions of dependent assemblies such as System.Windows.Forms. You can quickly add a binding redirect by double-clicking on error in Visual Studio. Use CopyLocal=true. I'm not sure if this will suppress the warning. It will, like option 2 above, mean that all projects will use the.Net 3.5 version of System.Windows.Forms. Here are a couple of ways to identify the offending reference(s): You can use a utility such as the one found at https://gist.github.com/1553265 Another simple method is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and after building, search the output window for the warning, and look at the text just above it. (Hat tip to pauloya who suggested this in the comments on this answer). 
| Warning: Found conflicts between different versions of the same dependent assembly I am currently developing a .NET application, which consists of 20 projects. Some of those projects are compiled using .NET 3.5, some others are still .NET 2.0 projects (so far no problem). The problem is that if I include an external component I always get the following warning: Found conflicts between different versions of the same dependent assembly. What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)? | TITLE:
Warning: Found conflicts between different versions of the same dependent assembly
QUESTION:
I am currently developing a .NET application, which consists of 20 projects. Some of those projects are compiled using .NET 3.5, some others are still .NET 2.0 projects (so far no problem). The problem is that if I include an external component I always get the following warning: Found conflicts between different versions of the same dependent assembly. What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)?
ANSWER:
This warning means that two projects reference the same assembly (e.g. System.Windows.Forms) but the two projects require different versions. You have a few options: Recompile all projects to use the same versions (e.g. move all to .NET 3.5). This is the preferred option because all code is running with the versions of dependencies they were compiled with. Add a binding redirect. This will suppress the warning. However, your .NET 2.0 projects will (at runtime) be bound to the .NET 3.5 versions of dependent assemblies such as System.Windows.Forms. You can quickly add a binding redirect by double-clicking on the error in Visual Studio. Use CopyLocal=true. I'm not sure if this will suppress the warning. It will, like option 2 above, mean that all projects will use the .NET 3.5 version of System.Windows.Forms. Here are a couple of ways to identify the offending reference(s): You can use a utility such as the one found at https://gist.github.com/1553265 Another simple method is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and after building, search the output window for the warning, and look at the text just above it. (Hat tip to pauloya, who suggested this in the comments on this answer). | [
".net",
"warnings"
] | 351 | 441 | 219,211 | 21 | 0 | 2008-08-20T12:00:59.687000 | 2010-01-26T05:54:29.003000 |
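A binding redirect (option 2 in the answer above) lives in the app.config of the executable. The following is only a sketch of that fragment: the assembly name, public key token, and version numbers here are placeholders, and the real values should be taken from the warning text itself.

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- identity of the assembly the projects disagree on (placeholder values) -->
        <assemblyIdentity name="SomeDependency"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- map the range of old versions onto the version you actually ship -->
        <bindingRedirect oldVersion="0.0.0.0-2.9.9.9" newVersion="3.5.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```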
17,817 | 17,822 | Recommendation for javascript form validation library | Any recommendations for a javascript form validation library. I could try and roll my own (but I'm not very good at javascript). Needs to support checking for required fields, and preferably regexp validation of fields. | I've used this library for a couple of personal projects. It's pretty good, though I have had to make my own modifications to it a couple of times - nothing major, though, and it's easy enough to do so. I'm sure you already do this, but also validate all of your information on the server-side, as well. Client-side-only validation is rarely, if ever, a good idea. | Recommendation for javascript form validation library Any recommendations for a javascript form validation library. I could try and roll my own (but I'm not very good at javascript). Needs to support checking for required fields, and preferably regexp validation of fields. | TITLE:
Recommendation for javascript form validation library
QUESTION:
Any recommendations for a JavaScript form validation library? I could try and roll my own (but I'm not very good at JavaScript). It needs to support checking for required fields, and preferably regexp validation of fields.
ANSWER:
I've used this library for a couple of personal projects. It's pretty good, though I have had to make my own modifications to it a couple of times - nothing major, though, and it's easy enough to do so. I'm sure you already do this, but also validate all of your information on the server-side, as well. Client-side-only validation is rarely, if ever, a good idea. | [
"javascript",
"forms",
"validation"
] | 14 | 1 | 8,809 | 3 | 0 | 2008-08-20T12:12:13.073000 | 2008-08-20T12:18:52.060000 |
17,825 | 17,833 | best practice for releasing Microsoft dll's in setup | I'm working on a setup which wants to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLL's in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft. So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)? | Usually, redistributing any of Microsoft DLLs outside of the redistributable package is forbidden by their EULA, so you might first want to check the appropriate EULA for that DLL. Generally, I would prefer the redist package since that makes sure that it's correctly "registered" into the system, i.e. if you install a newer version of the redist it gets updated (like DirectX) or not overwritten if it's an older version (also like DirectX). | best practice for releasing Microsoft dll's in setup I'm working on a setup which wants to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLL's in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft. So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)? | TITLE:
best practice for releasing Microsoft dll's in setup
QUESTION:
I'm working on a setup that needs to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLLs in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft. So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)?
ANSWER:
Usually, redistributing any of Microsoft DLLs outside of the redistributable package is forbidden by their EULA, so you might first want to check the appropriate EULA for that DLL. Generally, I would prefer the redist package since that makes sure that it's correctly "registered" into the system, i.e. if you install a newer version of the redist it gets updated (like DirectX) or not overwritten if it's an older version (also like DirectX). | [
"windows-installer",
"installation",
"redistributable"
] | 2 | 5 | 995 | 5 | 0 | 2008-08-20T12:22:41.083000 | 2008-08-20T12:28:10.867000 |
17,840 | 27,699 | How can I learn about parser combinators? | I've found a few resources on the subject, but they all require a deep understanding of SmallTalk or Haskell, neither of which I know. | Here are some parser combinator libraries in more mainstream languages: Spirit (C++) Jparsec (Java) | How can I learn about parser combinators? I've found a few resources on the subject, but they all require a deep understanding of SmallTalk or Haskell, neither of which I know. | TITLE:
How can I learn about parser combinators?
QUESTION:
I've found a few resources on the subject, but they all require a deep understanding of Smalltalk or Haskell, neither of which I know.
ANSWER:
Here are some parser combinator libraries in more mainstream languages: Spirit (C++) Jparsec (Java) | [
"parsing",
"monads"
] | 18 | 5 | 7,488 | 10 | 0 | 2008-08-20T12:38:49.830000 | 2008-08-26T10:23:39.277000 |
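For readers who don't want to pick up Smalltalk or Haskell first, the core idea can be shown in a few lines of Python. This is a hand-rolled sketch, unrelated to Spirit or Jparsec: a parser is a function from input text to either None (failure) or a (value, remaining_text) pair, and combinators like seq and alt build bigger parsers out of smaller ones.

```python
# Minimal parser combinators: a parser maps a string to None on failure,
# or to (parsed_value, remaining_input) on success.

def char(c):
    """Parser that matches exactly the single character c."""
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

def seq(p, q):
    """Combinator: run p, then q on what p left over."""
    def parse(s):
        r1 = p(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

def alt(p, q):
    """Combinator: try p; if it fails, try q on the same input."""
    def parse(s):
        return p(s) or q(s)
    return parse

ab = seq(char("a"), char("b"))      # matches "ab"
a_or_b = alt(char("a"), char("b"))  # matches "a" or "b"
```

Real combinator libraries add many-repetition, mapping over results, and error reporting on top of exactly this shape.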
17,845 | 41,264 | Performance comparison of RDF storage vs traditional database | Has anyone experimented with an RDF storage solution like Sesame? I'm looking for performance reviews of this kind of solution compared to the traditional database solution. | There are plenty of scalability reports and benchmarks on the web about various triple-stores. Here is a fine scalability report. W3C itself maintains a wiki with lots of information about Large Triplestores and Benchmarks. Follow these 3 links and take time to read them. It's very informative. :) | Performance comparison of RDF storage vs traditional database Has anyone experimented with an RDF storage solution like Sesame? I'm looking for performance reviews of this kind of solution compared to the traditional database solution. | TITLE:
Performance comparison of RDF storage vs traditional database
QUESTION:
Has anyone experimented with an RDF storage solution like Sesame? I'm looking for performance reviews of this kind of solution compared to the traditional database solution.
ANSWER:
There are plenty of scalability reports and benchmarks on the web about various triple-stores. Here is a fine scalability report. W3C itself maintains a wiki with lots of information about Large Triplestores and Benchmarks. Follow these 3 links and take time to read them. It's very informative. :) | [
"database",
"performance",
"rdf",
"sesame",
"openrdf"
] | 12 | 10 | 2,655 | 3 | 0 | 2008-08-20T12:42:49.850000 | 2008-09-03T07:08:51.223000 |
17,870 | 17,872 | Select ..... where .... OR | Is there a way to select data where any one of multiple conditions occur on the same field? Example: I would typically write a statement such as: select * from TABLE where field = 1 or field = 2 or field = 3 Is there a way to instead say something like: select * from TABLE where field = 1 || 2 || 3 Any help is appreciated. | Sure thing, the simplest way is this: select foo from bar where baz in (1,2,3) | Select ..... where .... OR Is there a way to select data where any one of multiple conditions occur on the same field? Example: I would typically write a statement such as: select * from TABLE where field = 1 or field = 2 or field = 3 Is there a way to instead say something like: select * from TABLE where field = 1 || 2 || 3 Any help is appreciated. | TITLE:
Select ..... where .... OR
QUESTION:
Is there a way to select data where any one of multiple conditions occur on the same field? Example: I would typically write a statement such as: select * from TABLE where field = 1 or field = 2 or field = 3 Is there a way to instead say something like: select * from TABLE where field = 1 || 2 || 3 Any help is appreciated.
ANSWER:
Sure thing, the simplest way is this: select foo from bar where baz in (1,2,3) | [
"mysql"
] | 19 | 35 | 43,549 | 7 | 0 | 2008-08-20T12:58:51.513000 | 2008-08-20T12:59:51.743000 |
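The two forms are equivalent, and a quick way to convince yourself is to run both against the same data. A runnable sketch using Python's bundled sqlite3 module, reusing the answer's placeholder names foo, bar and baz:

```python
# Demonstrate that WHERE baz IN (1,2,3) matches the chained-OR form.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (foo TEXT, baz INTEGER)")
conn.executemany("INSERT INTO bar VALUES (?, ?)",
                 [("a", 1), ("b", 2), ("c", 3), ("d", 4)])

with_in = conn.execute(
    "SELECT foo FROM bar WHERE baz IN (1, 2, 3) ORDER BY baz").fetchall()
with_or = conn.execute(
    "SELECT foo FROM bar WHERE baz = 1 OR baz = 2 OR baz = 3 "
    "ORDER BY baz").fetchall()

assert with_in == with_or == [("a",), ("b",), ("c",)]
```

The same IN syntax works in MySQL; sqlite3 is used here only because it ships with Python.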
17,877 | 17,923 | How to encrypt connection string in WinForms 1.1 app.config? | Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike | This might help you along the way: http://msdn.microsoft.com/en-us/library/aa302403.aspx http://msdn.microsoft.com/en-us/library/aa302406.aspx The articles are aimed at ASP.NET but the principles are the same. | How to encrypt connection string in WinForms 1.1 app.config? Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike | TITLE:
How to encrypt connection string in WinForms 1.1 app.config?
QUESTION:
Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike
ANSWER:
This might help you along the way: http://msdn.microsoft.com/en-us/library/aa302403.aspx http://msdn.microsoft.com/en-us/library/aa302406.aspx The articles are aimed at ASP.NET but the principles are the same. | [
"database",
"winforms"
] | 2 | 0 | 1,864 | 2 | 0 | 2008-08-20T13:00:27.360000 | 2008-08-20T13:19:47.220000 |
17,878 | 17,977 | Listen for events in another application | Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent". The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" even of the first application. Is there any way that I could somehow attach the second application to an instance of the first application to listen for "OnEmailSent" event? So for further clarification, my specific scenario is that we have a custom third party application written in c# that raises an "OnEmailSent" event. We can see the event exists using reflector. What we want to do is have some other actions take place when this component sends an email. The most efficient way we can think of would be to be able to use some form of IPC as anders has suggested and listen for the OnEmailSent event being raised by the third party component. Because the component is written in C# we are toying with the idea of writing another C# application that can attach itself to the executing process and when it detect the OnEmailSent event has been raise it will execute it's own event handling code. I might be missing something, but from what I understand of how remoting works is that there would need to be a server defining some sort of contract that the client can subscribe to. I was more thinking about a scenario where someone has written a standalone application like outlook for example, that exposes events that I would like to subscribe to from another application. I guess the scenario I'm thinking of is the.net debugger and how it can attach to executing assemblies to inspect the code whilst it's running. | In order for two applications (separate processes) to exchange events, they must agree on how these events are communicated. There are many different ways of doing this, and exactly which method to use may depend on architecture and context. 
The general term for this kind of information exchange between processes is Inter-process Communication (IPC). There exists many standard ways of doing IPC, the most common being files, pipes, (network) sockets, remote procedure calls (RPC) and shared memory. On Windows it's also common to use window messages. I am not sure how this works for.NET/C# applications on Windows, but in native Win32 applications you can hook on to the message loop of external processes and "spy" on the messages they are sending. If your program generates a message event when the desired function is called, this could be a way to detect it. If you are implementing both applications yourself you can chose to use any IPC method you prefer. Network sockets and higher-level socket-based protocols like HTTP, XML-RPC and SOAP are very popular these days, as they allow you do run the applications on different physical machines as well (given that they are connected via a network). | Listen for events in another application Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent". The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" even of the first application. Is there any way that I could somehow attach the second application to an instance of the first application to listen for "OnEmailSent" event? So for further clarification, my specific scenario is that we have a custom third party application written in c# that raises an "OnEmailSent" event. We can see the event exists using reflector. What we want to do is have some other actions take place when this component sends an email. The most efficient way we can think of would be to be able to use some form of IPC as anders has suggested and listen for the OnEmailSent event being raised by the third party component. 
Because the component is written in C# we are toying with the idea of writing another C# application that can attach itself to the executing process and when it detect the OnEmailSent event has been raise it will execute it's own event handling code. I might be missing something, but from what I understand of how remoting works is that there would need to be a server defining some sort of contract that the client can subscribe to. I was more thinking about a scenario where someone has written a standalone application like outlook for example, that exposes events that I would like to subscribe to from another application. I guess the scenario I'm thinking of is the.net debugger and how it can attach to executing assemblies to inspect the code whilst it's running. | TITLE:
Listen for events in another application
QUESTION:
Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent". The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" event of the first application. Is there any way that I could somehow attach the second application to an instance of the first application to listen for the "OnEmailSent" event? So for further clarification, my specific scenario is that we have a custom third party application written in C# that raises an "OnEmailSent" event. We can see the event exists using Reflector. What we want to do is have some other actions take place when this component sends an email. The most efficient way we can think of would be to use some form of IPC, as Anders has suggested, and listen for the OnEmailSent event being raised by the third party component. Because the component is written in C# we are toying with the idea of writing another C# application that can attach itself to the executing process, and when it detects the OnEmailSent event has been raised it will execute its own event handling code. I might be missing something, but from what I understand of how remoting works is that there would need to be a server defining some sort of contract that the client can subscribe to. I was more thinking about a scenario where someone has written a standalone application like Outlook, for example, that exposes events that I would like to subscribe to from another application. I guess the scenario I'm thinking of is the .NET debugger and how it can attach to executing assemblies to inspect the code whilst it's running.
ANSWER:
In order for two applications (separate processes) to exchange events, they must agree on how these events are communicated. There are many different ways of doing this, and exactly which method to use may depend on architecture and context. The general term for this kind of information exchange between processes is Inter-process Communication (IPC). There exist many standard ways of doing IPC, the most common being files, pipes, (network) sockets, remote procedure calls (RPC) and shared memory. On Windows it's also common to use window messages. I am not sure how this works for .NET/C# applications on Windows, but in native Win32 applications you can hook on to the message loop of external processes and "spy" on the messages they are sending. If your program generates a message event when the desired function is called, this could be a way to detect it. If you are implementing both applications yourself you can choose to use any IPC method you prefer. Network sockets and higher-level socket-based protocols like HTTP, XML-RPC and SOAP are very popular these days, as they allow you to run the applications on different physical machines as well (given that they are connected via a network). | [
"c#",
"events",
"delegates"
] | 24 | 14 | 33,175 | 6 | 0 | 2008-08-20T13:01:01.100000 | 2008-08-20T13:44:10.607000 |
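As a concrete taste of the socket option from the answer, here is a minimal Python sketch. The "OnEmailSent" event name comes from the question; everything else (the one-line-per-event framing, socketpair standing in for a real client/server connection between two processes) is invented for the demo. One side emits the event as a line of text, the other reacts to each line it reads.

```python
# One process writes event names over a socket; the other reads them and
# dispatches. socketpair() gives two connected sockets without a listener.
import socket
import threading

def listener(conn, events):
    with conn, conn.makefile("r") as f:
        for line in f:                   # one event per line until sender closes
            events.append(line.strip())  # a real app would run handler code here

a, b = socket.socketpair()
received = []
t = threading.Thread(target=listener, args=(b, received))
t.start()

with a, a.makefile("w") as out:
    out.write("OnEmailSent\n")  # the "event" crossing the process boundary

t.join()
assert received == ["OnEmailSent"]
```

In the question's actual scenario the third-party app would still need to be modified (or wrapped) to emit such messages, which is why attaching to an unmodified process is the hard part.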
17,880 | 19,571 | Mac iWork/Pages Automation | There is a rich scripting model for Microsoft Office, but not so with Apple iWork, and specifically the word processor Pages. While there are some AppleScript hooks, it looks like the best approach is to manipulate the underlying XML data. This turns out to be pretty ugly because (for example) page breaks are stored in XML. So for example, you have something like:... we hold these truths to be self evident, that all men are created equal, and are... So if you want to add or remove text, you have to move the start/end tags around based on the size of the text on the page. This is pretty impossible without computing the number of words a page can hold, which seems wildly inelegant. Anybody have any thoughts on this? | I'd suggest that modifying the underlying XML file is "considered harmful". Especially if you haven't checked to see if the document is open! I've had a quick look at the Scripting Dictionary for Pages, and it seems pretty comprehensive; here is part of one entry: document n [inh. document > item; see also Standard Suite]: A Pages document. elements contains captured pages, character styles, charts, graphics, images, lines, list styles, pages, paragraph styles, sections, shapes, tables, text boxes. properties body text (text): The main text flow of the document. bottom margin (real): The bottom margin of the publication. facing pages (boolean): Whether or not the view is set to facing pages. footer margin (real): The footer margin of the publication. header margin (real): The header margin of the publication. id (integer, r/o): The unique identifier of the document.... So, I guess I'd want to know what it is that you want to do that you can't do with AppleScript? | Mac iWork/Pages Automation There is a rich scripting model for Microsoft Office, but not so with Apple iWork, and specifically the word processor Pages. 
While there are some AppleScript hooks, it looks like the best approach is to manipulate the underlying XML data. This turns out to be pretty ugly because (for example) page breaks are stored in XML. So for example, you have something like:... we hold these truths to be self evident, that all men are created equal, and are... So if you want to add or remove text, you have to move the start/end tags around based on the size of the text on the page. This is pretty impossible without computing the number of words a page can hold, which seems wildly inelegant. Anybody have any thoughts on this? | TITLE:
Mac iWork/Pages Automation
QUESTION:
There is a rich scripting model for Microsoft Office, but not so with Apple iWork, and specifically the word processor Pages. While there are some AppleScript hooks, it looks like the best approach is to manipulate the underlying XML data. This turns out to be pretty ugly because (for example) page breaks are stored in XML. So for example, you have something like:... we hold these truths to be self evident, that all men are created equal, and are... So if you want to add or remove text, you have to move the start/end tags around based on the size of the text on the page. This is pretty impossible without computing the number of words a page can hold, which seems wildly inelegant. Anybody have any thoughts on this?
ANSWER:
I'd suggest that modifying the underlying XML file is "considered harmful". Especially if you haven't checked to see if the document is open! I've had a quick look at the Scripting Dictionary for Pages, and it seems pretty comprehensive; here is part of one entry: document n [inh. document > item; see also Standard Suite]: A Pages document. elements contains captured pages, character styles, charts, graphics, images, lines, list styles, pages, paragraph styles, sections, shapes, tables, text boxes. properties body text (text): The main text flow of the document. bottom margin (real): The bottom margin of the publication. facing pages (boolean): Whether or not the view is set to facing pages. footer margin (real): The footer margin of the publication. header margin (real): The header margin of the publication. id (integer, r/o): The unique identifier of the document.... So, I guess I'd want to know what it is that you want to do that you can't do with AppleScript? | [
"xml",
"automation",
"applescript",
"iwork"
] | 2 | 1 | 2,316 | 2 | 0 | 2008-08-20T13:02:38.070000 | 2008-08-21T10:08:56.200000 |
17,893 | 17,933 | What's the best way to distribute python command-line tools? | My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar.. Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications? | Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want. To reproduce here: from setuptools import setup
setup( # other arguments here... entry_points = { 'console_scripts': [ 'foo = package.module:func', 'bar = othermodule:somefunc', ], } ) | What's the best way to distribute python command-line tools? My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar.. Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications? | TITLE:
What's the best way to distribute python command-line tools?
QUESTION:
My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar. Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
ANSWER:
Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want. To reproduce here:

from setuptools import setup

setup(
    # other arguments here...
    entry_points = {
        'console_scripts': [
            'foo = package.module:func',
            'bar = othermodule:somefunc',
        ],
    }
) | [
"python",
"command-line",
"packaging"
] | 43 | 38 | 6,835 | 1 | 0 | 2008-08-20T13:07:25.517000 | 2008-08-20T13:25:23.010000 |
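To make the entry-points answer concrete: an entry such as 'tvnamer = tvnamer:run' in console_scripts makes setuptools generate a tvnamer command on PATH that calls the named function and uses its return value as the process exit code. The function below is an illustrative sketch only, not the real tvnamer.

```python
import sys

# Sketch of the function a console_scripts entry point refers to; the names
# (run, tvnamer) and the behavior are made up for illustration.
def run(argv=None):
    argv = sys.argv[1:] if argv is None else argv
    if not argv:
        print("usage: tvnamer FILE...")
        return 2   # non-zero exit status signals bad usage
    for name in argv:
        print(f"renaming {name}")
    return 0

# Calling run() directly behaves like the generated command would:
status = run(["show.s01e01.avi"])
```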
17,922 | 3,672,519 | Any recommendations for lightweight .net Win Forms HTML renderer controls? | Trying to avoid the.net WebBrowser control (I don't need to navigate to a url, print rendered html or any of the other inbuilt goodies). Wrapping the IE dll seems a bit heavyweight. I simply require something that can display basic html marked up text - an html equivalent of RichTextBox in effect. Anyone have any experiences / recommendations / war stories? | I developed this HTML control for.NET, which does what you were asking: i.e. display basic html marked up text. It doesn't use IE or any other unmanaged code (except for the.NET framework itself). | Any recommendations for lightweight .net Win Forms HTML renderer controls? Trying to avoid the.net WebBrowser control (I don't need to navigate to a url, print rendered html or any of the other inbuilt goodies). Wrapping the IE dll seems a bit heavyweight. I simply require something that can display basic html marked up text - an html equivalent of RichTextBox in effect. Anyone have any experiences / recommendations / war stories? | TITLE:
Any recommendations for lightweight .net Win Forms HTML renderer controls?
QUESTION:
Trying to avoid the .NET WebBrowser control (I don't need to navigate to a url, print rendered html or any of the other inbuilt goodies). Wrapping the IE dll seems a bit heavyweight. I simply require something that can display basic html marked up text - an html equivalent of RichTextBox in effect. Anyone have any experiences / recommendations / war stories?
ANSWER:
I developed this HTML control for.NET, which does what you were asking: i.e. display basic html marked up text. It doesn't use IE or any other unmanaged code (except for the.NET framework itself). | [
".net",
"winforms",
"user-interface",
"controls"
] | 5 | 7 | 2,708 | 5 | 0 | 2008-08-20T13:19:34.927000 | 2010-09-08T22:54:51.247000 |
17,928 | 21,773 | Using an ocx in a console application | I want to quickly test an ocx. How do I drop that ocx in a console application. I have found some tutorials in CodeProject and but are incomplete. | Sure..it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++. Save to test.cpp and compile: cl.exe /EHsc test.cpp To test with your OCX you'll need to either #import the typelib and use it's CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need. #include "windows.h" #include "shobjidl.h" #include "atlbase.h"
// // compile with: cl /EHsc test.cpp //
// A fun little program to demonstrate creating an OCX. // (CLSID_TaskbarList in this case) //
BOOL CALLBACK RemoveFromTaskbarProc( HWND hwnd, LPARAM lParam ) { ITaskbarList* ptbl = (ITaskbarList*)lParam; ptbl->DeleteTab(hwnd); return TRUE; }
void HideTaskWindows(ITaskbarList* ptbl) { EnumWindows( RemoveFromTaskbarProc, (LPARAM) ptbl); }
// ============
BOOL CALLBACK AddToTaskbarProc( HWND hwnd, LPARAM lParam ) { ITaskbarList* ptbl = (ITaskbarList*)lParam; ptbl->AddTab(hwnd);
return TRUE;// continue enumerating }
void ShowTaskWindows(ITaskbarList* ptbl) { if (!EnumWindows( AddToTaskbarProc, (LPARAM) ptbl)) throw "Unable to enum windows in ShowTaskWindows"; }
// ============
int main(int, char**) { CoInitialize(0);
try { CComPtr<IUnknown> pUnk;
if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER|CLSCTX_LOCAL_SERVER, IID_IUnknown, (void**) &pUnk))) throw "Unable to create CLSID_TaskbarList";
// Do something with the object...
CComQIPtr<ITaskbarList> ptbl = pUnk; if (ptbl) ptbl->HrInit();
HideTaskWindows(ptbl); MessageBox( GetDesktopWindow(), _T("Check out the task bar!"), _T("StackOverflow FTW"), MB_OK); ShowTaskWindows(ptbl); } catch( TCHAR * msg ) { MessageBox( GetDesktopWindow(), msg, _T("Error"), MB_OK); }
CoUninitialize();
return 0; } | Using an ocx in a console application I want to quickly test an ocx. How do I drop that ocx in a console application. I have found some tutorials in CodeProject and but are incomplete. | TITLE:
Using an ocx in a console application
QUESTION:
I want to quickly test an ocx. How do I drop that ocx in a console application? I have found some tutorials on CodeProject, but they are incomplete.
ANSWER:
Sure, it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++. Save to test.cpp and compile: cl.exe /EHsc test.cpp To test with your OCX you'll need to either #import the typelib and use its CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need.

#include "windows.h"
#include "shobjidl.h"
#include "tchar.h"
#include "atlbase.h"

//
// compile with: cl /EHsc test.cpp
//
// A fun little program to demonstrate creating an OCX.
// (CLSID_TaskbarList in this case)
//

BOOL CALLBACK RemoveFromTaskbarProc(HWND hwnd, LPARAM lParam)
{
    ITaskbarList* ptbl = (ITaskbarList*)lParam;
    ptbl->DeleteTab(hwnd);
    return TRUE;
}

void HideTaskWindows(ITaskbarList* ptbl)
{
    EnumWindows(RemoveFromTaskbarProc, (LPARAM)ptbl);
}

// ============

BOOL CALLBACK AddToTaskbarProc(HWND hwnd, LPARAM lParam)
{
    ITaskbarList* ptbl = (ITaskbarList*)lParam;
    ptbl->AddTab(hwnd);
    return TRUE; // continue enumerating
}

void ShowTaskWindows(ITaskbarList* ptbl)
{
    if (!EnumWindows(AddToTaskbarProc, (LPARAM)ptbl))
        throw _T("Unable to enum windows in ShowTaskWindows");
}

// ============

int main(int, char**)
{
    CoInitialize(0);

    try
    {
        // CComPtr/CComQIPtr take the interface as a template argument
        CComPtr<IUnknown> pUnk;

        if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL,
                                    CLSCTX_INPROC_SERVER | CLSCTX_LOCAL_SERVER,
                                    IID_IUnknown, (void**)&pUnk)))
            throw _T("Unable to create CLSID_TaskbarList");

        // Do something with the object...

        CComQIPtr<ITaskbarList> ptbl = pUnk;
        if (ptbl)
            ptbl->HrInit();

        HideTaskWindows(ptbl);
        MessageBox(GetDesktopWindow(), _T("Check out the task bar!"),
                   _T("StackOverflow FTW"), MB_OK);
        ShowTaskWindows(ptbl);
    }
    catch (const TCHAR* msg)
    {
        MessageBox(GetDesktopWindow(), msg, _T("Error"), MB_OK);
    }

    CoUninitialize();

    return 0;
} | [
"c++",
"visual-c++",
"console",
"activex"
] | 3 | 2 | 8,407 | 3 | 0 | 2008-08-20T13:22:56.320000 | 2008-08-22T04:31:00.810000 |
17,939 | 92,215 | When do you use sIFR? | I heard Joel and Jeff talking about sIFR in one of the early podcasts. I've been using it on www.american-data.com and www.chartright.us with some fairly mixed results. Yesterday I was informed that the first line of text on my website appeared upside down in Internet Explorer 6 without flash player. I'm pretty sure that assessment was wrong, owing to no flash player = no sIFR. But I'm getting some odd behavior on my pages, at least in IE 6, 7 and 8. I only really wanted to use sIFR because my fonts looked crummy on my computer in Firefox. My question is: if you use sIFR, when do you use sIFR? In which cases do you disable sIFR? When is it better to just use the browser font? | You use sIFR moderately, say for headlines. Try not to use it for links, because links in Flash don't work as well as normal HTML links. It also makes little sense to use sIFR only for text that never changes, an image would work a lot better. I haven't heard about the upside-down problem in a few years now, but in any case, that's an issue with IE 6 and (an old?) Flash player. In any case, it always makes sense to test thoroughly. Also, did you look into sIFR 3 lately? It's much improved over v2. | When do you use sIFR? I heard Joel and Jeff talking about sIFR in one of the early podcasts. I've been using it on www.american-data.com and www.chartright.us with some fairly mixed results. Yesterday I was informed that the first line of text on my website appeared upside down in Internet Explorer 6 without flash player. I'm pretty sure that assessment was wrong, owing to no flash player = no sIFR. But I'm getting some odd behavior on my pages, at least in IE 6, 7 and 8. I only really wanted to use sIFR because my fonts looked crummy on my computer in Firefox. My question is: if you use sIFR, when do you use sIFR? In which cases do you disable sIFR? When is it better to just use the browser font? | TITLE:
When do you use sIFR?
QUESTION:
I heard Joel and Jeff talking about sIFR in one of the early podcasts. I've been using it on www.american-data.com and www.chartright.us with some fairly mixed results. Yesterday I was informed that the first line of text on my website appeared upside down in Internet Explorer 6 without flash player. I'm pretty sure that assessment was wrong, owing to no flash player = no sIFR. But I'm getting some odd behavior on my pages, at least in IE 6, 7 and 8. I only really wanted to use sIFR because my fonts looked crummy on my computer in Firefox. My question is: if you use sIFR, when do you use sIFR? In which cases do you disable sIFR? When is it better to just use the browser font?
ANSWER:
You use sIFR moderately, say for headlines. Try not to use it for links, because links in Flash don't work as well as normal HTML links. It also makes little sense to use sIFR only for text that never changes; an image would work a lot better. I haven't heard about the upside-down problem in a few years now, but in any case, that's an issue with IE 6 and (an old?) Flash player. Either way, it always makes sense to test thoroughly. Also, did you look into sIFR 3 lately? It's much improved over v2. | [
"cross-browser",
"sifr"
] | 2 | 3 | 669 | 3 | 0 | 2008-08-20T13:26:04.020000 | 2008-09-18T12:48:35.790000 |
17,944 | 17,974 | How to round up the result of integer division? | I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java. If I have x items which I want to display in chunks of y per page, how many pages will be needed? | Found an elegant solution: int pageCount = (records + recordsPerPage - 1) / recordsPerPage; Source: Number Conversion, Roland Backhouse, 2001 | How to round up the result of integer division? I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java. If I have x items which I want to display in chunks of y per page, how many pages will be needed? | TITLE:
How to round up the result of integer division?
QUESTION:
I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java. If I have x items which I want to display in chunks of y per page, how many pages will be needed?
ANSWER:
Found an elegant solution: int pageCount = (records + recordsPerPage - 1) / recordsPerPage; Source: Number Conversion, Roland Backhouse, 2001 | [
"java",
"c#",
"c++",
"math"
] | 402 | 585 | 280,297 | 19 | 0 | 2008-08-20T13:27:40.210000 | 2008-08-20T13:42:08.257000 |
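The add-then-divide identity in the answer above is easy to sanity-check; a quick sketch in Python (the function and variable names are illustrative, not from the answer) comparing it against `math.ceil` for non-negative counts and positive page sizes:

```python
import math

def page_count(records: int, records_per_page: int) -> int:
    # Adding (divisor - 1) before integer division rounds the quotient up
    # for records >= 0 and records_per_page > 0.
    return (records + records_per_page - 1) // records_per_page

# Cross-check against math.ceil over a small range of inputs.
for records in range(0, 100):
    for per_page in range(1, 10):
        assert page_count(records, per_page) == math.ceil(records / per_page)

print(page_count(0, 10), page_count(1, 10), page_count(10, 10), page_count(11, 10))
```

The same expression works unchanged with C#'s or Java's `/` operator, since both truncate positive integer quotients.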
17,947 | 17,978 | C# Corrupt Memory Error | I cant post the code (proprietary issues) but does anyone know what types of things would cause the following error in C#. It is being thrown by a VOIP client that I wrote (using counterpath api) when the call is ended by the other client. The error is: System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: UPDATE: Turns out one of the libraries we were using was sending off an event that we didnt know about, and the problem was in there somewhere. Fixed now. | List of some possibilities: An object is being used after it has been disposed. This can happen a lot if you are disposing managed object in a finalizer (you should not do that). 
An unmanaged implementation of one of the objects you are using is buggy and has corrupted the process memory heap. Happens a lot with DirectX, GDI and others. Marshaling on the managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it in an unmanaged part of the code. You are using an unsafe block and doing funny stuff with it. In your case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you possibly still have done something wrong. Are you able to determine what control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmanaged part of the control a custom window or a standard control? | C# Corrupt Memory Error I can't post the code (proprietary issues), but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is: System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." 
Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: UPDATE: Turns out one of the libraries we were using was sending off an event that we didnt know about, and the problem was in there somewhere. Fixed now. | TITLE:
C# Corrupt Memory Error
QUESTION:
I can't post the code (proprietary issues), but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is: System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: UPDATE: Turns out one of the libraries we were using was sending off an event that we didn't know about, and the problem was in there somewhere. Fixed now.
ANSWER:
List of some possibilities: An object is being used after it has been disposed. This can happen a lot if you are disposing a managed object in a finalizer (you should not do that). An unmanaged implementation of one of the objects you are using is buggy and has corrupted the process memory heap. Happens a lot with DirectX, GDI and others. Marshaling on the managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it in an unmanaged part of the code. You are using an unsafe block and doing funny stuff with it. In your case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you possibly still have done something wrong. Are you able to determine what control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmanaged part of the control a custom window or a standard control? | [
"c#",
"voip"
] | 2 | 3 | 3,261 | 3 | 0 | 2008-08-20T13:28:57.680000 | 2008-08-20T13:44:22.980000 |
17,948 | 18,056 | Automating WSDL.exe in a Custom Build | I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services. When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services. Is there a generally accepted way to automate this? | There are a number of way to do it. A NAnt build script will do it, but I think the most commonly accepted method now is to use MSBuild. See MSDN for details. | Automating WSDL.exe in a Custom Build I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services. When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services. Is there a generally accepted way to automate this? | TITLE:
Automating WSDL.exe in a Custom Build
QUESTION:
I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services. When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services. Is there a generally accepted way to automate this?
ANSWER:
There are a number of ways to do it. A NAnt build script will do it, but I think the most commonly accepted method now is to use MSBuild. See MSDN for details. | [
"asp.net",
"build-process",
"wsdl"
] | 1 | 2 | 269 | 2 | 0 | 2008-08-20T13:29:50.143000 | 2008-08-20T14:09:52.497000 |
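One way the MSBuild suggestion above could be wired up is to override the standard `BeforeBuild` hook in the web project's `.csproj`; this is a hypothetical sketch, not from the answer — the `ProxyServiceUrl` property name and the exact `wsdl.exe` invocation are assumptions you would adapt per tier:

```xml
<!-- Hypothetical sketch: override the standard BeforeBuild target in the
     .csproj so the proxy is regenerated before each compile.
     $(ProxyServiceUrl) is an assumed property, supplied per tier, e.g.:
       msbuild /p:ProxyServiceUrl=http://test-server/Service.asmx -->
<Target Name="BeforeBuild">
  <Exec Command="wsdl.exe /out:Proxies.cs $(ProxyServiceUrl)?WSDL" />
</Target>
```

Switching from development to test to production then becomes a matter of passing a different property value (or defining it per build configuration) rather than editing the project.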
17,960 | 5,625,350 | PowerShell App.Config | Has anyone worked out how to get PowerShell to use app.config files? I have a couple of.NET DLL's I'd like to use in one of my scripts but they expect their own config sections to be present in app.config / web.config. | Cross-referencing with this thread, which helped me with the same question: Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script I added the following to my script, before invoking the DLL that needs config settings, where $configpath is the location of the file I want to load: [appdomain]::CurrentDomain.SetData("APP_CONFIG_FILE", $configpath) Add-Type -AssemblyName System.Configuration See this post to ensure the configuration file specified is applied to the running context. | PowerShell App.Config Has anyone worked out how to get PowerShell to use app.config files? I have a couple of.NET DLL's I'd like to use in one of my scripts but they expect their own config sections to be present in app.config / web.config. | TITLE:
PowerShell App.Config
QUESTION:
Has anyone worked out how to get PowerShell to use app.config files? I have a couple of .NET DLLs I'd like to use in one of my scripts, but they expect their own config sections to be present in app.config / web.config.
ANSWER:
Cross-referencing with this thread, which helped me with the same question: Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script I added the following to my script, before invoking the DLL that needs config settings, where $configpath is the location of the file I want to load: [appdomain]::CurrentDomain.SetData("APP_CONFIG_FILE", $configpath) Add-Type -AssemblyName System.Configuration See this post to ensure the configuration file specified is applied to the running context. | [
"powershell",
"configuration-files"
] | 27 | 39 | 21,453 | 3 | 0 | 2008-08-20T13:33:58.057000 | 2011-04-11T17:52:41.520000 |
17,965 | 17,994 | How to generate a core dump in Linux on a segmentation fault? | I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails? | This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type ulimit -c unlimited then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you. In tcsh, you'd type limit coredumpsize unlimited | How to generate a core dump in Linux on a segmentation fault? I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails? | TITLE:
How to generate a core dump in Linux on a segmentation fault?
QUESTION:
I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
ANSWER:
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type ulimit -c unlimited then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you. In tcsh, you'd type limit coredumpsize unlimited | [
"linux",
"bash",
"unix",
"coredump",
"tcsh"
] | 254 | 278 | 479,750 | 13 | 0 | 2008-08-20T13:38:14.287000 | 2008-08-20T13:50:47.143000 |
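The `ulimit -c` / `limit coredumpsize` commands above are just the shell's interface to the per-process `RLIMIT_CORE` resource limit; as a cross-language illustration (a sketch, not part of the original answer), the same adjustment can be made from inside a process, shown here with Python's `resource` module on Linux/Unix:

```python
import resource

# Read the current core-dump size limit: (soft, hard), in bytes.
# resource.RLIM_INFINITY corresponds to the shell's "unlimited".
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("before:", soft, hard)

# Raise the soft limit as far as the hard limit allows -- the in-process
# equivalent of running `ulimit -c unlimited` before starting the program.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("after:", soft, hard)
```

A process that sets this at startup will dump core on a segmentation fault regardless of the shell it was launched from, subject to the system's hard limit and core-pattern settings.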
17,966 | 26,573 | Add a shortcut to Startup folder with parameters in Adobe AIR | I am trying to include a link to my application in the Startup folder with a parameter passed to the program. I think it would work if I created the shortcut locally and then added it to my source. After that I could copy it to the Startup folder on first run. File.userDirectory.resolvePath("Start Menu\\Programs\\Startup\\startup.lnk"); However, I am trying to get this to occur during install. I see there is are some settings related to the installation in app.xml, but nothing that lets me install it to two folders, or use a parameter. | I'm new to Air, but also haven't found any way to customize the install process. It looks like you're limited to your application code. (Updating appears more flexible.) From your example, it looks like you want your app' to run with a parameter constant each time Windows starts. So you're probably already aware you can set: NativeApplication.nativeApplication.startAtLogin=true when your app' first runs. Could you combine this with your parameter in a settings file in the application or user directory and accomplish what you need? | Add a shortcut to Startup folder with parameters in Adobe AIR I am trying to include a link to my application in the Startup folder with a parameter passed to the program. I think it would work if I created the shortcut locally and then added it to my source. After that I could copy it to the Startup folder on first run. File.userDirectory.resolvePath("Start Menu\\Programs\\Startup\\startup.lnk"); However, I am trying to get this to occur during install. I see there is are some settings related to the installation in app.xml, but nothing that lets me install it to two folders, or use a parameter. | TITLE:
Add a shortcut to Startup folder with parameters in Adobe AIR
QUESTION:
I am trying to include a link to my application in the Startup folder with a parameter passed to the program. I think it would work if I created the shortcut locally and then added it to my source. After that I could copy it to the Startup folder on first run. File.userDirectory.resolvePath("Start Menu\\Programs\\Startup\\startup.lnk"); However, I am trying to get this to occur during install. I see there are some settings related to the installation in app.xml, but nothing that lets me install it to two folders, or use a parameter.
ANSWER:
I'm new to AIR, but also haven't found any way to customize the install process. It looks like you're limited to your application code. (Updating appears more flexible.) From your example, it looks like you want your app to run with a constant parameter each time Windows starts. So you're probably already aware you can set: NativeApplication.nativeApplication.startAtLogin=true when your app first runs. Could you combine this with your parameter in a settings file in the application or user directory and accomplish what you need? | [
"air",
"adobe"
] | 2 | 2 | 2,034 | 1 | 0 | 2008-08-20T13:38:53.833000 | 2008-08-25T18:20:59.420000 |
17,980 | 17,989 | What Comes After The %? | I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf?. For example: double radius = 1.0; double area = 0.0; area = calculateArea( radius ); printf( "%10.1f %10.2\n", radius, area ); I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this? | http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the informs C/C++ what the variable you've given it is (character, integer, double etc). Hope this helps UPDATE: To clarify using your examples: printf( "%10.1f %10.2\n", radius, area ); %10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place. %10.2 (referring to the second argument: area) means make it 10 character long (as above) and print with two decimal places. | What Comes After The %? I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf?. For example: double radius = 1.0; double area = 0.0; area = calculateArea( radius ); printf( "%10.1f %10.2\n", radius, area ); I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? 
Could someone please explain this? | TITLE:
What Comes After The %?
QUESTION:
I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf? For example: double radius = 1.0; double area = 0.0; area = calculateArea( radius ); printf( "%10.1f %10.2\n", radius, area ); I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this?
ANSWER:
http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful. Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the type informs C/C++ what the variable you've given it is (character, integer, double etc). Hope this helps. UPDATE: To clarify using your examples: printf( "%10.1f %10.2\n", radius, area ); %10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place. %10.2 (referring to the second argument: area) means make it 10 characters long (as above) and print with two decimal places; note that the trailing f conversion specifier is missing in the book's example, so it should read %10.2f. | [
"c"
] | 5 | 15 | 3,955 | 8 | 0 | 2008-08-20T13:44:34.320000 | 2008-08-20T13:50:05.570000 |
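The %[width].[precision][type] placeholders described above are supported nearly verbatim by Python's % string operator, which makes them easy to experiment with outside a C toolchain; a quick sketch (the sample values are illustrative):

```python
radius, area = 1.0, 3.14

# "%10.1f": pad the field to at least 10 characters, one digit after the
# decimal point. "%10.2f": same width, two decimal places.
line = "%10.1f %10.2f" % (radius, area)
print(repr(line))

assert "%10.1f" % radius == "       1.0"  # 7 spaces + "1.0" = 10 characters
assert "%10.2f" % area == "      3.14"    # 6 spaces + "3.14" = 10 characters
```

The semantics match C's printf for these placeholders: width is a minimum field size (space-padded on the left by default), precision is the number of digits after the decimal point.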
17,984 | 19,549 | AnkhSVN Cannot Connect Due to Proxy | Alright, this might be a bit of a long shot, but I have having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy and doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work? | You can also use TortoiseSVN for editting the proxy settings. TortoiseSVN saves the settings in the registry in the common location that all Subversion clients (by default) use. UPDATE: A proxy settings dialog is now implemented in the AnkhSVN daily builds. It will be available in the next release. | AnkhSVN Cannot Connect Due to Proxy Alright, this might be a bit of a long shot, but I have having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy and doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work? | TITLE:
AnkhSVN Cannot Connect Due to Proxy
QUESTION:
Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy, and it doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work?
ANSWER:
You can also use TortoiseSVN for editing the proxy settings. TortoiseSVN saves the settings in the registry in the common location that all Subversion clients (by default) use. UPDATE: A proxy settings dialog is now implemented in the AnkhSVN daily builds. It will be available in the next release. | [
"visual-studio",
"svn",
"visual-studio-2005",
"proxy",
"ankhsvn"
] | 6 | 5 | 2,167 | 2 | 0 | 2008-08-20T13:45:48.077000 | 2008-08-21T09:51:57.007000 |
18,006 | 18,033 | Recommendations for a .NET component to access an email inbox | I've been asked to write a Windows service in C# to periodically monitor an email inbox and insert the details of any messages received into a database table. My instinct is to do this via POP3 and sure enough, Googling for ".NET POP3 component" produces countless (ok, 146,000) results. Has anybody done anything similar before and can you recommend a decent component that won't break the bank (a few hundred dollars maximum)? Would there be any benefits to using IMAP rather than POP3? | I recomment chilkat. They have pretty stable components, and you can get their email component for as cheap as $99 for a single developer. Personally, I think going with the whole package of components is a better deal, as it's only $289, and comes with many useful components. I'm not affiliated with them in any way, although I probably sound like I am. | Recommendations for a .NET component to access an email inbox I've been asked to write a Windows service in C# to periodically monitor an email inbox and insert the details of any messages received into a database table. My instinct is to do this via POP3 and sure enough, Googling for ".NET POP3 component" produces countless (ok, 146,000) results. Has anybody done anything similar before and can you recommend a decent component that won't break the bank (a few hundred dollars maximum)? Would there be any benefits to using IMAP rather than POP3? | TITLE:
Recommendations for a .NET component to access an email inbox
QUESTION:
I've been asked to write a Windows service in C# to periodically monitor an email inbox and insert the details of any messages received into a database table. My instinct is to do this via POP3 and sure enough, Googling for ".NET POP3 component" produces countless (ok, 146,000) results. Has anybody done anything similar before and can you recommend a decent component that won't break the bank (a few hundred dollars maximum)? Would there be any benefits to using IMAP rather than POP3?
ANSWER:
I recommend Chilkat. They have pretty stable components, and you can get their email component for as cheap as $99 for a single developer. Personally, I think going with the whole package of components is a better deal, as it's only $289, and comes with many useful components. I'm not affiliated with them in any way, although I probably sound like I am. | [
".net",
"email",
"imap",
"pop3"
] | 16 | 4 | 14,827 | 11 | 0 | 2008-08-20T13:53:20.647000 | 2008-08-20T14:01:29.303000 |
18,010 | 18,027 | Is AnkhSVN any good? | I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool. | Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts. The newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to any heavy lifting. Icons is enough for me. The only gripe I have with 2.0 is the fact that it slaps its footprint to.sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not. addendum: I have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add: No ugly crashes that plagued v1.x. Yay! For some reason, "Show Changes" (diff) windows are limited to only two. Meh. Diff windows do not allow editing/reverting yet. Boo! Updates, commits and browsing are MUCH faster than Tortoise. Yay! All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise. | Is AnkhSVN any good? I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool. | TITLE:
Is AnkhSVN any good?
QUESTION:
I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times. What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool.
ANSWER:
Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts. The newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to any heavy lifting. Icons are enough for me. The only gripe I have with 2.0 is the fact that it slaps its footprint to .sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not. addendum: I have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add: No ugly crashes that plagued v1.x. Yay! For some reason, "Show Changes" (diff) windows are limited to only two. Meh. Diff windows do not allow editing/reverting yet. Boo! Updates, commits and browsing are MUCH faster than Tortoise. Yay! All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise. | [
"version-control",
"ankhsvn"
] | 24 | 22 | 7,699 | 12 | 0 | 2008-08-20T13:54:31.800000 | 2008-08-20T14:00:26.013000 |
18,034 | 18,062 | How do I create a self signed SSL certificate to use while testing a web app | How do I create a self signed SSL certificate for an Apache Server to use while testing a web app? | How do I create a self-signed SSL Certificate for testing purposes? from http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert: Make sure OpenSSL is installed and in your PATH. Run the following command, to create server.key and server.crt files: openssl req -new -x509 -nodes -out server.crt -keyout server.key These can be used as follows in your httpd.conf file: SSLCertificateFile /path/to/this/server.crt SSLCertificateKeyFile /path/to/this/server.key It is important that you are aware that this server.key does not have any passphrase. To add a passphrase to the key, you should run the following command, and enter & verify the passphrase as requested. openssl rsa -des3 -in server.key -out server.key.new mv server.key.new server.key Please backup the server.key file, and the passphrase you entered, in a secure location. | How do I create a self signed SSL certificate to use while testing a web app How do I create a self signed SSL certificate for an Apache Server to use while testing a web app? | TITLE:
How do I create a self signed SSL certificate to use while testing a web app
QUESTION:
How do I create a self signed SSL certificate for an Apache Server to use while testing a web app?
ANSWER:
How do I create a self-signed SSL Certificate for testing purposes? from http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert: Make sure OpenSSL is installed and in your PATH. Run the following command, to create server.key and server.crt files: openssl req -new -x509 -nodes -out server.crt -keyout server.key These can be used as follows in your httpd.conf file: SSLCertificateFile /path/to/this/server.crt SSLCertificateKeyFile /path/to/this/server.key It is important that you are aware that this server.key does not have any passphrase. To add a passphrase to the key, you should run the following command, and enter & verify the passphrase as requested. openssl rsa -des3 -in server.key -out server.key.new mv server.key.new server.key Please backup the server.key file, and the passphrase you entered, in a secure location. | [
"apache",
"ssl"
] | 23 | 30 | 13,552 | 4 | 0 | 2008-08-20T14:01:43.280000 | 2008-08-20T14:11:40.883000 |
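The openssl invocation in the answer above can be scripted. A minimal Python sketch, assuming the `openssl` binary is on PATH; the `-days` validity window and the `-subj` value are illustrative extras (not from the answer) added so the command runs non-interactively:

```python
import subprocess

def self_signed_cert_cmd(key_file="server.key", crt_file="server.crt", days=365):
    # Builds the command from the answer as an argument list.
    # -nodes leaves the key unencrypted (no passphrase), as the answer notes.
    return [
        "openssl", "req", "-new", "-x509", "-nodes",
        "-days", str(days),          # illustrative: validity window
        "-subj", "/CN=localhost",    # illustrative: non-interactive subject
        "-out", crt_file, "-keyout", key_file,
    ]

def make_cert(**kwargs):
    # Shells out to openssl; requires the binary to actually be installed.
    subprocess.run(self_signed_cert_cmd(**kwargs), check=True)
```

Calling `make_cert()` in a scratch directory would produce the same server.key/server.crt pair the answer describes.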
18,059 | 1,493,026 | Prevent WebBrowser control from swallowing exceptions | I'm using the System.Windows.Forms.WebBrowser, to make a view a-la Visual Studio Start Page. However, it seems the control is catching and handling all exceptions by silently sinking them! No need to tell this is a very unfortunate behaviour. void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler throw new Exception("OMG!"); } The code above will cancel navigation and swallow the exception. void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler try { e.Cancel = true; if (actions.ContainsKey(e.Url.ToString())) { actions[e.Url.ToString()].Invoke(e.Url, webBrowser.Document); } } catch (Exception exception) { MessageBox.Show(exception.ToString()); } } So, what I do (above) is catch all exceptions and pop a box, this is better than silently failing but still clearly far from ideal. I'd like it to redirect the exception through the normal application failure path so that it ultimately becomes unhandled, or handled by the application from the root. Is there any way to tell the WebBrowser control to stop sinking the exceptions and just forward them the natural and expected way? Or is there some hacky way to throw an exception through native boundaries? | My best bet why it happens is because there is a native-managed-native boundary to cross. The native part doesn't forward the managed exceptions correctly and there is not much that can be done. I am still hoping for a better answer though. | Prevent WebBrowser control from swallowing exceptions I'm using the System.Windows.Forms.WebBrowser, to make a view a-la Visual Studio Start Page. However, it seems the control is catching and handling all exceptions by silently sinking them! No need to tell this is a very unfortunate behaviour. 
void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler throw new Exception("OMG!"); } The code above will cancel navigation and swallow the exception. void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler try { e.Cancel = true; if (actions.ContainsKey(e.Url.ToString())) { actions[e.Url.ToString()].Invoke(e.Url, webBrowser.Document); } } catch (Exception exception) { MessageBox.Show(exception.ToString()); } } So, what I do (above) is catch all exceptions and pop a box, this is better than silently failing but still clearly far from ideal. I'd like it to redirect the exception through the normal application failure path so that it ultimately becomes unhandled, or handled by the application from the root. Is there any way to tell the WebBrowser control to stop sinking the exceptions and just forward them the natural and expected way? Or is there some hacky way to throw an exception through native boundaries? | TITLE:
Prevent WebBrowser control from swallowing exceptions
QUESTION:
I'm using the System.Windows.Forms.WebBrowser, to make a view a-la Visual Studio Start Page. However, it seems the control is catching and handling all exceptions by silently sinking them! No need to tell this is a very unfortunate behaviour. void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler throw new Exception("OMG!"); } The code above will cancel navigation and swallow the exception. void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e) { // WebBrowser.Navigating event handler try { e.Cancel = true; if (actions.ContainsKey(e.Url.ToString())) { actions[e.Url.ToString()].Invoke(e.Url, webBrowser.Document); } } catch (Exception exception) { MessageBox.Show(exception.ToString()); } } So, what I do (above) is catch all exceptions and pop a box, this is better than silently failing but still clearly far from ideal. I'd like it to redirect the exception through the normal application failure path so that it ultimately becomes unhandled, or handled by the application from the root. Is there any way to tell the WebBrowser control to stop sinking the exceptions and just forward them the natural and expected way? Or is there some hacky way to throw an exception through native boundaries?
ANSWER:
My best guess as to why it happens: there is a native-managed-native boundary to cross. The native part doesn't forward the managed exceptions correctly, and there is not much that can be done. I am still hoping for a better answer, though. | [
".net",
"winforms",
"exception",
"webbrowser-control"
] | 3 | 0 | 1,638 | 3 | 0 | 2008-08-20T14:10:39.810000 | 2009-09-29T14:47:38.233000 |
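The workaround in the question (catch everything at the handler boundary and surface it yourself) generalizes to any host that swallows handler exceptions. A sketch of that pattern in Python; the class and handler names are hypothetical, and the .NET WebBrowser control itself is not reproduced here:

```python
class ExceptionCapturingDispatcher:
    """Invokes event handlers like a host that swallows exceptions,
    but records them so the application can re-raise on its own path."""

    def __init__(self):
        self.pending = []

    def dispatch(self, handler, *args):
        try:
            handler(*args)
        except Exception as exc:
            # The "host" boundary: nothing propagates past this point,
            # so we stash the exception instead of losing it.
            self.pending.append(exc)

    def reraise_pending(self):
        # Call this from the normal application path to restore the
        # usual failure semantics (unhandled-exception handling, logging).
        if self.pending:
            raise self.pending.pop(0)

dispatcher = ExceptionCapturingDispatcher()

def on_navigating(url):
    raise ValueError(f"OMG! ({url})")

dispatcher.dispatch(on_navigating, "http://example.com")
```

The application's main path then calls `reraise_pending()` at a point where exceptions flow through the ordinary failure route, which is the redirection the question asks for.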
18,077 | 19,329 | The best way of checking for -moz-border-radius support | I wanted some of those spiffy rounded corners for a web project that I'm currently working on. I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly. I already utilize jQuery so I looked at the excellent rounded corners plugin and it worked like a charm in every browser I tried. Being a developer however I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (safari based browsers). If so it uses raw CSS instead of creating layers of divs. I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific -moz-border-radius-* properties and if so utilize them. The check for webkit support looks like this: var webkitAvailable = false; try { webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius']!= undefined); } catch(err) {} That, however, did not work for -moz-border-radius so I started checking for alternatives. My fallback solution is of course to use browser detection but that's far from recommended practice ofcourse. My best solution yet is as follows. var mozborderAvailable = false; try { var o = jQuery(' ').css('-moz-border-radius', '1px'); mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px'; o = null; } catch(err) {} It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties -moz-border-radius-topleft -moz-border-radius-topright -moz-border-radius-bottomleft -moz-border-radius-bottomright Is there any javascript/CSS guru out there that have a better solution? 
(The feature request for this page is at http://plugins.jquery.com/node/3619 ) | How about this? var mozborderAvailable = false; try { if (typeof(document.body.style.MozBorderRadius)!== "undefined") { mozborderAvailable = true; } } catch(err) {} I tested it in Firefox 3 (true) and false in: Safari, IE7, and Opera. (Edit: better undefined test) | The best way of checking for -moz-border-radius support I wanted some of those spiffy rounded corners for a web project that I'm currently working on. I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly. I already utilize jQuery so I looked at the excellent rounded corners plugin and it worked like a charm in every browser I tried. Being a developer however I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (safari based browsers). If so it uses raw CSS instead of creating layers of divs. I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific -moz-border-radius-* properties and if so utilize them. The check for webkit support looks like this: var webkitAvailable = false; try { webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius']!= undefined); } catch(err) {} That, however, did not work for -moz-border-radius so I started checking for alternatives. My fallback solution is of course to use browser detection but that's far from recommended practice ofcourse. My best solution yet is as follows. 
var mozborderAvailable = false; try { var o = jQuery(' ').css('-moz-border-radius', '1px'); mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px'; o = null; } catch(err) {} It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties -moz-border-radius-topleft -moz-border-radius-topright -moz-border-radius-bottomleft -moz-border-radius-bottomright Is there any javascript/CSS guru out there that have a better solution? (The feature request for this page is at http://plugins.jquery.com/node/3619 ) | TITLE:
The best way of checking for -moz-border-radius support
QUESTION:
I wanted some of those spiffy rounded corners for a web project that I'm currently working on. I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly. I already utilize jQuery so I looked at the excellent rounded corners plugin and it worked like a charm in every browser I tried. Being a developer however I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (safari based browsers). If so it uses raw CSS instead of creating layers of divs. I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific -moz-border-radius-* properties and if so utilize them. The check for webkit support looks like this: var webkitAvailable = false; try { webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius']!= undefined); } catch(err) {} That, however, did not work for -moz-border-radius so I started checking for alternatives. My fallback solution is of course to use browser detection but that's far from recommended practice ofcourse. My best solution yet is as follows. var mozborderAvailable = false; try { var o = jQuery(' ').css('-moz-border-radius', '1px'); mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px'; o = null; } catch(err) {} It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties -moz-border-radius-topleft -moz-border-radius-topright -moz-border-radius-bottomleft -moz-border-radius-bottomright Is there any javascript/CSS guru out there that have a better solution? (The feature request for this page is at http://plugins.jquery.com/node/3619 )
ANSWER:
How about this? var mozborderAvailable = false; try { if (typeof(document.body.style.MozBorderRadius) !== "undefined") { mozborderAvailable = true; } } catch(err) {} I tested it in Firefox 3 (true), and it returned false in Safari, IE7, and Opera. (Edit: better undefined test)
"javascript",
"css"
] | 12 | 11 | 6,917 | 7 | 0 | 2008-08-20T14:18:33.287000 | 2008-08-21T04:20:47.817000 |
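The accepted trick above boils down to attribute-presence detection: probe whether the engine exposes the camel-cased style property at all, rather than setting a value and reading it back. A language-neutral sketch of that idea in Python; the `Style` class is a stand-in for `element.style`, not a real DOM object:

```python
class Style:
    """Stand-in for element.style; only some engines expose MozBorderRadius."""
    def __init__(self, **props):
        self.__dict__.update(props)

def supports(style, prop):
    # Mirrors: typeof(document.body.style.MozBorderRadius) !== "undefined"
    return hasattr(style, prop)

# Hypothetical engines exposing vendor-prefixed properties:
gecko = Style(MozBorderRadius="")
webkit = Style(WebkitBorderRadius="")
```

Checking for the property's existence avoids the set-then-read round trip the question's jQuery-based probe relies on.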
18,082 | 1,830,844 | Validate decimal numbers in JavaScript - IsNumeric() | What's the cleanest, most effective way to validate decimal numbers in JavaScript? Bonus points for: Clarity. Solution should be clean and simple. Cross-platform. Test cases: 01. IsNumeric('-1') => true 02. IsNumeric('-1.5') => true 03. IsNumeric('0') => true 04. IsNumeric('0.42') => true 05. IsNumeric('.42') => true 06. IsNumeric('99,999') => false 07. IsNumeric('0x89f') => false 08. IsNumeric('#abcdef') => false 09. IsNumeric('1.2.3') => false 10. IsNumeric('') => false 11. IsNumeric('blah') => false | @Joel's answer is pretty close, but it will fail in the following cases: // Whitespace strings: IsNumeric(' ') == true; IsNumeric('\t\t') == true; IsNumeric('\n\r') == true;
// Number literals: IsNumeric(-1) == false; IsNumeric(0) == false; IsNumeric(1.1) == false; IsNumeric(8e5) == false; Some time ago I had to implement an IsNumeric function, to find out if a variable contained a numeric value, regardless of its type, it could be a String containing a numeric value (I had to consider also exponential notation, etc.), a Number object, virtually anything could be passed to that function, I couldn't make any type assumptions, taking care of type coercion (eg. +true == 1; but true shouldn't be considered as "numeric" ). I think is worth sharing this set of +30 unit tests made to numerous function implementations, and also share the one that passes all my tests: function isNumeric(n) { return!isNaN(parseFloat(n)) && isFinite(n); } P.S. isNaN & isFinite have a confusing behavior due to forced conversion to number. In ES6, Number.isNaN & Number.isFinite would fix these issues. Keep that in mind when using them. Update: Here's how jQuery does it now (2.2-stable): isNumeric: function(obj) { var realStringObj = obj && obj.toString(); return!jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0; } Update: Angular 4.3: export function isNumeric(value: any): boolean { return!isNaN(value - parseFloat(value)); } | Validate decimal numbers in JavaScript - IsNumeric() What's the cleanest, most effective way to validate decimal numbers in JavaScript? Bonus points for: Clarity. Solution should be clean and simple. Cross-platform. Test cases: 01. IsNumeric('-1') => true 02. IsNumeric('-1.5') => true 03. IsNumeric('0') => true 04. IsNumeric('0.42') => true 05. IsNumeric('.42') => true 06. IsNumeric('99,999') => false 07. IsNumeric('0x89f') => false 08. IsNumeric('#abcdef') => false 09. IsNumeric('1.2.3') => false 10. IsNumeric('') => false 11. IsNumeric('blah') => false | TITLE:
Validate decimal numbers in JavaScript - IsNumeric()
QUESTION:
What's the cleanest, most effective way to validate decimal numbers in JavaScript? Bonus points for: Clarity. Solution should be clean and simple. Cross-platform. Test cases: 01. IsNumeric('-1') => true 02. IsNumeric('-1.5') => true 03. IsNumeric('0') => true 04. IsNumeric('0.42') => true 05. IsNumeric('.42') => true 06. IsNumeric('99,999') => false 07. IsNumeric('0x89f') => false 08. IsNumeric('#abcdef') => false 09. IsNumeric('1.2.3') => false 10. IsNumeric('') => false 11. IsNumeric('blah') => false
ANSWER:
@Joel's answer is pretty close, but it will fail in the following cases: // Whitespace strings: IsNumeric(' ') == true; IsNumeric('\t\t') == true; IsNumeric('\n\r') == true;
// Number literals: IsNumeric(-1) == false; IsNumeric(0) == false; IsNumeric(1.1) == false; IsNumeric(8e5) == false; Some time ago I had to implement an IsNumeric function, to find out if a variable contained a numeric value, regardless of its type: it could be a String containing a numeric value (I had to consider also exponential notation, etc.), a Number object, virtually anything could be passed to that function. I couldn't make any type assumptions, taking care of type coercion (e.g. +true == 1; but true shouldn't be considered as "numeric"). I think it's worth sharing this set of +30 unit tests made to numerous function implementations, and also sharing the one that passes all my tests: function isNumeric(n) { return !isNaN(parseFloat(n)) && isFinite(n); } P.S. isNaN & isFinite have a confusing behavior due to forced conversion to number. In ES6, Number.isNaN & Number.isFinite would fix these issues. Keep that in mind when using them. Update: Here's how jQuery does it now (2.2-stable): isNumeric: function(obj) { var realStringObj = obj && obj.toString(); return !jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0; } Update: Angular 4.3: export function isNumeric(value: any): boolean { return !isNaN(value - parseFloat(value)); } | [
"javascript",
"validation",
"numbers"
] | 2,563 | 2,995 | 1,573,844 | 53 | 0 | 2008-08-20T14:21:13.793000 | 2009-12-02T05:36:22.517000 |
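The accepted parseFloat/isFinite combination can be mirrored in Python to run the question's test cases. This port is an approximation: Python's `float()` accepts a few strings the JS version would reject ('nan', 'inf', underscore separators), so the sketch filters those explicitly:

```python
import math

def is_numeric(s):
    """Rough Python analogue of: !isNaN(parseFloat(n)) && isFinite(n)."""
    if not isinstance(s, str):
        s = str(s)
    try:
        value = float(s)
    except ValueError:
        # Covers '', 'blah', '1.2.3', '99,999', '0x89f', '#abcdef', whitespace
        return False
    # float() accepts 'nan', 'inf' and underscore digit separators that the
    # JS version would reject, so rule those out explicitly.
    if not math.isfinite(value) or "_" in s:
        return False
    return True
```

Run against the question's eleven cases, it yields the expected true/false pattern.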
18,093 | 18,877 | Modifying Cruise Control.NET | We are investigating using CruiseControl.NET as both a Continues Integration build provider, as well as automating the first part of our deployment process. Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (IE, Separate out access to forcing a build to only certain individuals on a per project basis? The dashboard is a.NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with. Can you mix nVelocity and Webforms,or do I need to spend a day learning something new =) | Why do you need to? Do you really need to limit users in the way with an integration server. I think that's why CC.Net doesn't have that sort of support built in. You can always see who forced a build, and control it that way. I find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app + test run takes 25 mins and checks hourly), so for me forcing a build is rarely an issue. If you want some users to have some kind of report-only access you could limit them so that they can't access the CC.Net web application at all. All the results (MSBuild, NCover, NUnit, FxCop, etc) are in XML, so you can build relativity simple report pages out of XSLT. | Modifying Cruise Control.NET We are investigating using CruiseControl.NET as both a Continues Integration build provider, as well as automating the first part of our deployment process. Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (IE, Separate out access to forcing a build to only certain individuals on a per project basis? The dashboard is a.NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with. Can you mix nVelocity and Webforms,or do I need to spend a day learning something new =) | TITLE:
Modifying Cruise Control.NET
QUESTION:
We are investigating using CruiseControl.NET as both a Continuous Integration build provider, as well as automating the first part of our deployment process. Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (i.e., separate out access to forcing a build to only certain individuals on a per-project basis)? The dashboard is a .NET app, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with. Can you mix nVelocity and Web Forms, or do I need to spend a day learning something new? =)
ANSWER:
Why do you need to? Do you really need to limit users in this way with an integration server? I think that's why CC.Net doesn't have that sort of support built in. You can always see who forced a build, and control it that way. I find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app + test run takes 25 mins and checks hourly), so for me forcing a build is rarely an issue. If you want some users to have some kind of report-only access, you could limit them so that they can't access the CC.Net web application at all. All the results (MSBuild, NCover, NUnit, FxCop, etc.) are in XML, so you can build relatively simple report pages out of XSLT. | [
"cruisecontrol.net",
"nvelocity"
] | 1 | 2 | 484 | 2 | 0 | 2008-08-20T14:24:33.770000 | 2008-08-20T21:19:05.397000 |
18,097 | 18,109 | In C#, do you need to call the base constructor? | In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass { public BaseClass() { //... some code } } class MyClass: BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { //... some code } } | You do not need to explicitly call the base constructor, it will be implicitly called. Extend your example a little and create a Console Application and you can verify this behaviour for yourself: using System;
namespace ConsoleApplication1 { class Program { static void Main(string[] args) { MyClass foo = new MyClass();
Console.ReadLine(); } }
class BaseClass { public BaseClass() { Console.WriteLine("BaseClass constructor called."); } }
class MyClass: BaseClass { public MyClass() { Console.WriteLine("MyClass constructor called."); } } } | In C#, do you need to call the base constructor? In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass { public BaseClass() { //... some code } } class MyClass: BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { //... some code } } | TITLE:
In C#, do you need to call the base constructor?
QUESTION:
In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass { public BaseClass() { //... some code } } class MyClass: BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { //... some code } }
ANSWER:
You do not need to explicitly call the base constructor; it will be called implicitly. Extend your example a little and create a Console Application, and you can verify this behaviour for yourself: using System;
namespace ConsoleApplication1 { class Program { static void Main(string[] args) { MyClass foo = new MyClass();
Console.ReadLine(); } }
class BaseClass { public BaseClass() { Console.WriteLine("BaseClass constructor called."); } }
class MyClass: BaseClass { public MyClass() { Console.WriteLine("MyClass constructor called."); } } } | [
"c#",
"inheritance",
"constructor"
] | 61 | 67 | 23,968 | 7 | 0 | 2008-08-20T14:26:32.910000 | 2008-08-20T14:32:49.883000 |
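For contrast, Python makes the opposite choice to the C# rule in the answer above: a subclass `__init__` replaces the base one and is not chained implicitly, so the call C# inserts for free must be written out. A minimal sketch (Python analog, not the answer's C# code):

```python
calls = []

class BaseClass:
    def __init__(self):
        calls.append("BaseClass")

class Implicit(BaseClass):
    def __init__(self):
        # No super().__init__() here: BaseClass.__init__ does NOT run.
        calls.append("Implicit")

class Explicit(BaseClass):
    def __init__(self):
        super().__init__()   # the equivalent of C#'s implied ": base()"
        calls.append("Explicit")

Implicit()
Explicit()
```

The recorded call order shows the base constructor ran only for `Explicit`, which is exactly the behaviour C# supplies automatically.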
18,119 | 18,490 | How to keep track of the references to an object? | In a world where manual memory allocation and pointers still rule (Borland Delphi) I need a general solution for what I think is a general problem: At a given moment an object can be referenced from multiple places (lists, other objects,...). Is there a good way to keep track of all these references so that I can update them when the object is destroyed? | If you want to notify others of changes you should implement the "Observer Pattern". Delphi has already done that for you for TComponent descendants. You can call the TComponent.FreeNotification method and have your object be notified when the other component gets destroyed. It does that by calling the Notification method. You can remove yourself from the notification list by calling TComponent.RemoveFreeNotification. Also see this page. Most Garbage Collectors do not let you get a list of references, so they won't help in this case. Delphi can do reference counting if you would use interfaces, but then again you need to keep track of the references yourself. | How to keep track of the references to an object? In a world where manual memory allocation and pointers still rule (Borland Delphi) I need a general solution for what I think is a general problem: At a given moment an object can be referenced from multiple places (lists, other objects,...). Is there a good way to keep track of all these references so that I can update them when the object is destroyed? | TITLE:
How to keep track of the references to an object?
QUESTION:
In a world where manual memory allocation and pointers still rule (Borland Delphi) I need a general solution for what I think is a general problem: At a given moment an object can be referenced from multiple places (lists, other objects,...). Is there a good way to keep track of all these references so that I can update them when the object is destroyed?
ANSWER:
If you want to notify others of changes, you should implement the "Observer Pattern". Delphi has already done that for you for TComponent descendants. You can call the TComponent.FreeNotification method and have your object be notified when the other component gets destroyed. It does that by calling the Notification method. You can remove yourself from the notification list by calling TComponent.RemoveFreeNotification. Also see this page. Most Garbage Collectors do not let you get a list of references, so they won't help in this case. Delphi can do reference counting if you use interfaces, but then again you need to keep track of the references yourself. | [
"oop",
"delphi"
] | 4 | 3 | 1,590 | 4 | 0 | 2008-08-20T14:39:12.923000 | 2008-08-20T18:18:02.273000 |
18,166 | 18,287 | cURL adding whitespace to post content? | I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); That code, in an of itself, works OK, but the other server returns a response from it's XML parser stating: Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f
---------------------------01e7cda3896f Content-Disposition: form-data; name="XML"
[SNIP - the XML was displayed]
---------------------------01e7cda3896f-- Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against? | It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change: # This sets the encoding to multipart/form-data curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); to # This sets it to application/x-www-form-urlencoded curl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML='. urlencode($request)); | cURL adding whitespace to post content? I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); That code, in an of itself, works OK, but the other server returns a response from it's XML parser stating: Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f
---------------------------01e7cda3896f Content-Disposition: form-data; name="XML"
[SNIP - the XML was displayed]
---------------------------01e7cda3896f-- Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against? | TITLE:
cURL adding whitespace to post content?
QUESTION:
I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); That code, in an of itself, works OK, but the other server returns a response from it's XML parser stating: Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f
---------------------------01e7cda3896f Content-Disposition: form-data; name="XML"
[SNIP - the XML was displayed]
---------------------------01e7cda3896f-- Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against?
ANSWER:
It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change: # This sets the encoding to multipart/form-data curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); to # This sets it to application/x-www-form-urlencoded curl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML='. urlencode($request)); | [
"php",
"xml",
"curl"
] | 4 | 2 | 3,614 | 3 | 0 | 2008-08-20T15:02:49.233000 | 2008-08-20T16:11:31.750000 |
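The fix above hinges on the two POST encodings producing very different request bodies. A Python sketch of the application/x-www-form-urlencoded form of the same payload, using only the standard library; the XML snippet is a placeholder, not the vendor's document:

```python
from urllib.parse import urlencode, parse_qs

request = "<order><id>42</id></order>"   # placeholder XML payload

# Equivalent of: curl_setopt($h, CURLOPT_POSTFIELDS, 'XML=' . urlencode($request));
body = urlencode({"XML": request})

# The receiving servlet can decode it back without any multipart framing:
decoded = parse_qs(body)["XML"][0]
```

Passing a string to CURLOPT_POSTFIELDS yields a flat `name=percent-encoded-value` body like this one, whereas passing an array switches cURL to multipart/form-data with boundary markers, which is what tripped the vendor's XML parser.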
18,172 | 19,606 | Copying Files over an Intermittent Network Connection | I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls. Thanks! | I'm unclear as to what your actual problem is, so I'll throw out a few thoughts. Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPYFILERESTARTABLE Do you want verifiable copies? Sounds like you already have that by verifying hashes. Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help. Do you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me. Do you want to know when the network comes back? IsNetworkAlive has your answer. Based on what I know so far, I think the following pseudo-code would be my approach: sourceFile = Compress("*.*"); destFile = "X:\files.zip";
int copyFlags = COPY_FILE_FAIL_IF_EXISTS | COPY_FILE_RESTARTABLE; while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) { do { // optionally, increment a failed counter to break out at some point Sleep(1000); } while (!IsNetworkAlive(NETWORK_ALIVE_LAN)); } Compressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side. | Copying Files over an Intermittent Network Connection I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls. Thanks! | TITLE:
Copying Files over an Intermittent Network Connection
QUESTION:
I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls. Thanks!
ANSWER:
I'm unclear as to what your actual problem is, so I'll throw out a few thoughts. Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPY_FILE_RESTARTABLE Do you want verifiable copies? Sounds like you already have that by verifying hashes. Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help. Do you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me. Do you want to know when the network comes back? IsNetworkAlive has your answer. Based on what I know so far, I think the following pseudo-code would be my approach: sourceFile = Compress("*.*"); destFile = "X:\files.zip";
int copyFlags = COPY_FILE_FAIL_IF_EXISTS | COPY_FILE_RESTARTABLE; while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) { do { // optionally, increment a failed counter to break out at some point Sleep(1000); } while (!IsNetworkAlive(NETWORK_ALIVE_LAN)); } Compressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side. | [
"windows",
"vb6",
"network-programming",
"wireless",
"intermittent"
] | 15 | 5 | 5,001 | 8 | 0 | 2008-08-20T15:04:20.930000 | 2008-08-21T10:54:09.727000 |
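The retry-until-the-network-returns loop in the answer above is not Windows-specific. Below is a minimal cross-platform sketch of the same idea in Python, including the hash verification the questioner already performs; the function name, attempt limit, and delay are illustrative assumptions, not from the original answer.

```python
import hashlib
import shutil
import time
from pathlib import Path

def _sha256(path):
    """Hash a file in chunks so large files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_with_retry(src, dst, max_attempts=30, delay_s=1.0):
    """Copy src to dst, retrying on I/O errors (e.g. a network share
    dropping out), and verify the result with a SHA-256 hash.
    Returns the number of attempts that were needed."""
    src, dst = Path(src), Path(dst)
    for attempt in range(1, max_attempts + 1):
        try:
            shutil.copyfile(src, dst)
            if _sha256(src) == _sha256(dst):
                return attempt
            # Hash mismatch: treat it like a failed attempt and retry.
        except OSError:
            pass  # connectivity lost mid-copy; wait and retry
        time.sleep(delay_s)
    raise OSError(f"could not copy {src} to {dst} after {max_attempts} attempts")
```

Unlike the CopyFileEx pseudo-code, this restarts each failed file from the beginning, which is acceptable at the 200KB-500KB sizes in the question but would not replace COPY_FILE_RESTARTABLE for very large files.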
18,194 | 18,211 | Best way to learn SQL Server | So I'm getting a new job working with databases (Microsoft SQL Server to be precise). I know nothing about SQL much less SQL Server. They said they'd train me, but I want to take some initiative to learn about it on my own to be ahead. Where's the best place to start (tutorials, books, etc)? I want to learn more about the SQL language moreso than any of the fancy point and click stuff. | If you're planning on coding against a sql database using .NET, skip ADO and go directly to Linq. You will NOT miss anything. Oh, also, Joe Celko. If you see his name on an article or a book about SQL, read it. | Best way to learn SQL Server So I'm getting a new job working with databases (Microsoft SQL Server to be precise). I know nothing about SQL much less SQL Server. They said they'd train me, but I want to take some initiative to learn about it on my own to be ahead. Where's the best place to start (tutorials, books, etc)? I want to learn more about the SQL language moreso than any of the fancy point and click stuff. | TITLE:
Best way to learn SQL Server
QUESTION:
So I'm getting a new job working with databases (Microsoft SQL Server to be precise). I know nothing about SQL much less SQL Server. They said they'd train me, but I want to take some initiative to learn about it on my own to be ahead. Where's the best place to start (tutorials, books, etc)? I want to learn more about the SQL language moreso than any of the fancy point and click stuff.
ANSWER:
If you're planning on coding against a sql database using .NET, skip ADO and go directly to Linq. You will NOT miss anything. Oh, also, Joe Celko. If you see his name on an article or a book about SQL, read it.
"sql",
"sql-server",
"database"
] | 4 | 5 | 13,250 | 9 | 0 | 2008-08-20T15:14:13.247000 | 2008-08-20T15:21:18.503000 |
18,197 | 18,263 | How do you test the usability of your user interfaces | How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release? We are a small software house, but I am interested in the best practices of how to measure usability. Any help appreciated. | I like Paul Buchheit's answer on this from startup school. The short version of what he said listen to your users. Listen does not mean obey your users. Take in the data filter out all the bad advice and iteratively clean up the site. Lather, rinse, repeat. If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable. If something is too hard for one of your users to use or too complex to understand why they should use it, then it might be the same way for 1000 other users. Find a simpler way of accomplishing the same thing. Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress. | How do you test the usability of your user interfaces How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release? We are a small software house, but I am interested in the best practices of how to measure usability. Any help appreciated. | TITLE:
How do you test the usability of your user interfaces
QUESTION:
How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release? We are a small software house, but I am interested in the best practices of how to measure usability. Any help appreciated.
ANSWER:
I like Paul Buchheit's answer on this from startup school. The short version of what he said listen to your users. Listen does not mean obey your users. Take in the data filter out all the bad advice and iteratively clean up the site. Lather, rinse, repeat. If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable. If something is too hard for one of your users to use or too complex to understand why they should use it, then it might be the same way for 1000 other users. Find a simpler way of accomplishing the same thing. Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress. | [
"user-interface",
"testing",
"usability"
] | 12 | 8 | 1,242 | 11 | 0 | 2008-08-20T15:15:06.333000 | 2008-08-20T15:49:45.640000 |
18,216 | 18,239 | Query a union table with fields as columns | I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not have the same properties, and often multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table. The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK" What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like: ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: 1 Red Dragon Archfiend Synchro/DARK/Effect..but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope. | Is this for SQL server? If yes then Concatenate Values From Multiple Rows Into One Column (2000) Concatenate Values From Multiple Rows Into One Column Ordered (2005+) | Query a union table with fields as columns I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not have the same properties, and often multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table.
The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK" What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like: ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: 1 Red Dragon Archfiend Synchro/DARK/Effect..but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope. | TITLE:
Query a union table with fields as columns
QUESTION:
I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not have the same properties, and often multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table. The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK" What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like: ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: 1 Red Dragon Archfiend Synchro/DARK/Effect..but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope.
ANSWER:
Is this for SQL server? If yes then Concatenate Values From Multiple Rows Into One Column (2000) Concatenate Values From Multiple Rows Into One Column Ordered (2005+) | [
"sql",
"database-design",
"stored-procedures"
] | 2 | 1 | 950 | 5 | 0 | 2008-08-20T15:23:02.400000 | 2008-08-20T15:36:17.123000 |
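For intuition, the row-concatenation that the linked T-SQL articles implement can be sketched outside the database. This hypothetical Python version uses the question's own example rows: group the property values per card, then join them with "/".

```python
from collections import defaultdict

# (card_id, card_name, property_value) rows, as in the question's example
card_properties = [
    (1, "Red Dragon Archfiend", "Synchro"),
    (1, "Red Dragon Archfiend", "DARK"),
    (1, "Red Dragon Archfiend", "Effect"),
]

def concat_properties(rows):
    """Collapse one-row-per-property into one row per card,
    with the property values joined by '/'."""
    grouped = defaultdict(list)
    for card_id, name, value in rows:
        grouped[(card_id, name)].append(value)
    return [(card_id, name, "/".join(values))
            for (card_id, name), values in grouped.items()]

print(concat_properties(card_properties))
# → [(1, 'Red Dragon Archfiend', 'Synchro/DARK/Effect')]
```

This is exactly the shape the questioner asked for in the second result set; in SQL Server the same grouping is what the "Concatenate Values From Multiple Rows" techniques perform server-side.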
18,223 | 18,229 | SQL Server 2005 insert trigger not inserting enough records | I have a table in a SQL Server 2005 database with a trigger that is supposed to add a record to a different table whenever a new record is inserted. It seems to work fine, but if I execute an Insert Into on the master table that uses a subquery as the source of the values, the trigger only inserts one record in the other table, even though multiple records were added to the master. I want the trigger to fire for each new record added to the master table. Is that possible in 2005? The insert I'm doing is: INSERT INTO [tblMenuItems] ([ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID]) SELECT [ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID] FROM [IVEEtblMenuItems] Here is what the trigger looks like: CREATE TRIGGER [dbo].[tblMenuItemInsertSecurity] ON [dbo].[tblMenuItems] FOR INSERT AS
Declare @iRoleID int Declare @iMenuItemID int
Select @iMenuItemID = [ID] from Inserted
DECLARE tblUserRoles CURSOR FAST_FORWARD FOR SELECT [ID] from tblUserRoles OPEN tblUserRoles FETCH NEXT FROM tblUserRoles INTO @iRoleID
WHILE (@@FETCH_STATUS = 0) BEGIN INSERT INTO tblRestrictedMenuItems( [RoleID], [MenuItemID], [RestrictLevel])
VALUES( @iRoleID, @iMenuItemID, 1)
FETCH NEXT FROM tblUserRoles INTO @iRoleID END
CLOSE tblUserRoles Deallocate tblUserRoles | Your trigger is only using the first row from 'Inserted'. This is a common misunderstanding when dealing with SQL triggers for the first time. The trigger fires per update not per row. For example if you do the following:- update products set title = 'geoff de geoff' this would update all the products, but a trigger on the product table would only fire once. The Inserted 'table' you get in trigger would contain all the rows. You must either loop through Inserted with a cursor, or better join Inserted to the table you are updating. | SQL Server 2005 insert trigger not inserting enough records I have a table in a SQL Server 2005 database with a trigger that is supposed to add a record to a different table whenever a new record is inserted. It seems to work fine, but if I execute an Insert Into on the master table that uses a subquery as the source of the values, the trigger only inserts one record in the other table, even though multiple records were added to the master. I want the trigger to fire for each new record added to the master table. Is that possible in 2005? The insert I'm doing is: INSERT INTO [tblMenuItems] ([ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID]) SELECT [ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID] FROM [IVEEtblMenuItems] Here is what the trigger looks like: CREATE TRIGGER [dbo].[tblMenuItemInsertSecurity] ON [dbo].[tblMenuItems] FOR INSERT AS
Declare @iRoleID int Declare @iMenuItemID int
Select @iMenuItemID = [ID] from Inserted
DECLARE tblUserRoles CURSOR FAST_FORWARD FOR SELECT [ID] from tblUserRoles OPEN tblUserRoles FETCH NEXT FROM tblUserRoles INTO @iRoleID
WHILE (@@FETCH_STATUS = 0) BEGIN INSERT INTO tblRestrictedMenuItems( [RoleID], [MenuItemID], [RestrictLevel])
VALUES( @iRoleID, @iMenuItemID, 1)
FETCH NEXT FROM tblUserRoles INTO @iRoleID END
CLOSE tblUserRoles Deallocate tblUserRoles | TITLE:
SQL Server 2005 insert trigger not inserting enough records
QUESTION:
I have a table in a SQL Server 2005 database with a trigger that is supposed to add a record to a different table whenever a new record is inserted. It seems to work fine, but if I execute an Insert Into on the master table that uses a subquery as the source of the values, the trigger only inserts one record in the other table, even though multiple records were added to the master. I want the trigger to fire for each new record added to the master table. Is that possible in 2005? The insert I'm doing is: INSERT INTO [tblMenuItems] ([ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID]) SELECT [ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID] FROM [IVEEtblMenuItems] Here is what the trigger looks like: CREATE TRIGGER [dbo].[tblMenuItemInsertSecurity] ON [dbo].[tblMenuItems] FOR INSERT AS
Declare @iRoleID int Declare @iMenuItemID int
Select @iMenuItemID = [ID] from Inserted
DECLARE tblUserRoles CURSOR FAST_FORWARD FOR SELECT [ID] from tblUserRoles OPEN tblUserRoles FETCH NEXT FROM tblUserRoles INTO @iRoleID
WHILE (@@FETCH_STATUS = 0) BEGIN INSERT INTO tblRestrictedMenuItems( [RoleID], [MenuItemID], [RestrictLevel])
VALUES( @iRoleID, @iMenuItemID, 1)
FETCH NEXT FROM tblUserRoles INTO @iRoleID END
CLOSE tblUserRoles Deallocate tblUserRoles
ANSWER:
Your trigger is only using the first row from 'Inserted'. This is a common misunderstanding when dealing with SQL triggers for the first time. The trigger fires per update not per row. For example if you do the following:- update products set title = 'geoff de geoff' this would update all the products, but a trigger on the product table would only fire once. The Inserted 'table' you get in trigger would contain all the rows. You must either loop through Inserted with a cursor, or better join Inserted to the table you are updating. | [
"sql-server"
] | 3 | 4 | 11,789 | 4 | 0 | 2008-08-20T15:26:06.853000 | 2008-08-20T15:31:33.330000 |
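The "join Inserted instead of cursoring" advice amounts to producing the cross product of all newly inserted menu-item IDs with all role IDs in one set-based statement. A hypothetical Python sketch of that logic (table and column names echo the question; the equivalent T-SQL shown in the docstring is an assumed rewrite, not code from the answer):

```python
from itertools import product

def restricted_menu_rows(inserted_ids, role_ids, restrict_level=1):
    """Rows to add to tblRestrictedMenuItems: one per (role, new menu item)
    pair. This is the set-based equivalent of something like
        INSERT INTO tblRestrictedMenuItems (RoleID, MenuItemID, RestrictLevel)
        SELECT r.ID, i.ID, 1 FROM Inserted i CROSS JOIN tblUserRoles r
    whereas the question's trigger reads only the FIRST row of Inserted."""
    return [(role_id, item_id, restrict_level)
            for role_id, item_id in product(role_ids, inserted_ids)]

# A multi-row insert (two new menu items) combined with two roles:
rows = restricted_menu_rows(inserted_ids=[10, 11], role_ids=[1, 2])
print(rows)  # → [(1, 10, 1), (1, 11, 1), (2, 10, 1), (2, 11, 1)]
```

Because the whole cross product is emitted at once, a single INSERT ... SELECT that adds many master rows still yields one restricted-menu row per new item per role, which is the behavior the questioner wanted.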
18,224 | 18,231 | Whats the best way to securely publish a site post build? | So, in your experience, whats the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool? Edit: I should mention this is windows/.net and I'll be deploying to iis6 | For some projects I use Capistrano to push out to live. It is built on top of ruby and makes deploy script writing super easy and uses ssh. On other projects I have a tiny deploy app that uses bash to do an svn export to a temporary directory and then rsync it over to the live server. You can make rsync use ssh. I greatly prefer the Capistrano method, even if your project isn't in ruby/rails. | Whats the best way to securely publish a site post build? So, in your experience, whats the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool? Edit: I should mention this is windows/.net and I'll be deploying to iis6 | TITLE:
Whats the best way to securely publish a site post build?
QUESTION:
So, in your experience, whats the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool? Edit: I should mention this is windows/.net and I'll be deploying to iis6
ANSWER:
For some projects I use Capistrano to push out to live. It is built on top of ruby and makes deploy script writing super easy and uses ssh. On other projects I have a tiny deploy app that uses bash to do an svn export to a temporary directory and then rsync it over to the live server. You can make rsync use ssh. I greatly prefer the Capistrano method, even if your project isn't in ruby/rails. | [
".net",
"windows",
"deployment",
"iis-6"
] | 6 | 6 | 519 | 8 | 0 | 2008-08-20T15:26:28.973000 | 2008-08-20T15:32:22.313000 |
18,253 | 23,076 | Unit testing kernel drivers | I'm looking for a testing framework for the Windows kernel environment. So far, I've found cfix. Has any one tried it? Are there alternatives? | Being the author of cfix, I might be a little biased here -- but as a matter of fact, I am currently not aware of any other unit-testing framework for NT kernel mode. If you should experience any problems with cfix, feel free to contact me. | Unit testing kernel drivers I'm looking for a testing framework for the Windows kernel environment. So far, I've found cfix. Has any one tried it? Are there alternatives? | TITLE:
Unit testing kernel drivers
QUESTION:
I'm looking for a testing framework for the Windows kernel environment. So far, I've found cfix. Has any one tried it? Are there alternatives?
ANSWER:
Being the author of cfix, I might be a little biased here -- but as a matter of fact, I am currently not aware of any other unit-testing framework for NT kernel mode. If you should experience any problems with cfix, feel free to contact me. | [
"windows",
"unit-testing",
"kernel",
"drivers"
] | 9 | 7 | 4,166 | 2 | 0 | 2008-08-20T15:43:55.187000 | 2008-08-22T18:14:04.873000 |
18,265 | 18,406 | Getting stack traces on Unix systems, automatically | What methods are there for automatically getting a stack trace on Unix systems? I don't mean just getting a core file or attaching interactively with GDB, but having a SIGSEGV handler that dumps a backtrace to a text file. Bonus points for the following optional features: Extra information gathering at crash time (eg. config files). Email a crash info bundle to the developers. Ability to add this in a dlopen ed shared library Not requiring a GUI | If you are on systems with the BSD backtrace functionality available (Linux, OSX 1.5, BSD of course), you can do this programmatically in your signal handler. For example ( backtrace code derived from IBM example ): #include <execinfo.h> #include <signal.h> #include <stdio.h> #include <stdlib.h> void sig_handler(int sig) { void * array[25]; int nSize = backtrace(array, 25); char ** symbols = backtrace_symbols(array, nSize);
for (int i = 0; i < nSize; i++) { puts(symbols[i]); }
free(symbols);
signal(sig, &sig_handler); }
void h() { kill(0, SIGSEGV); }
void g() { h(); }
void f() { g(); }
int main(int argc, char ** argv) { signal(SIGSEGV, &sig_handler); f(); } Output: 0 a.out 0x00001f2d sig_handler + 35 1 libSystem.B.dylib 0x95f8f09b _sigtramp + 43 2??? 0xffffffff 0x0 + 4294967295 3 a.out 0x00001fb1 h + 26 4 a.out 0x00001fbe g + 11 5 a.out 0x00001fcb f + 11 6 a.out 0x00001ff5 main + 40 7 a.out 0x00001ede start + 54 This doesn't get bonus points for the optional features (except not requiring a GUI), however, it does have the advantage of being very simple, and not requiring any additional libraries or programs. | Getting stack traces on Unix systems, automatically What methods are there for automatically getting a stack trace on Unix systems? I don't mean just getting a core file or attaching interactively with GDB, but having a SIGSEGV handler that dumps a backtrace to a text file. Bonus points for the following optional features: Extra information gathering at crash time (eg. config files). Email a crash info bundle to the developers. Ability to add this in a dlopen ed shared library Not requiring a GUI | TITLE:
Getting stack traces on Unix systems, automatically
QUESTION:
What methods are there for automatically getting a stack trace on Unix systems? I don't mean just getting a core file or attaching interactively with GDB, but having a SIGSEGV handler that dumps a backtrace to a text file. Bonus points for the following optional features: Extra information gathering at crash time (eg. config files). Email a crash info bundle to the developers. Ability to add this in a dlopen ed shared library Not requiring a GUI
ANSWER:
If you are on systems with the BSD backtrace functionality available (Linux, OSX 1.5, BSD of course), you can do this programmatically in your signal handler. For example ( backtrace code derived from IBM example ): #include <execinfo.h> #include <signal.h> #include <stdio.h> #include <stdlib.h> void sig_handler(int sig) { void * array[25]; int nSize = backtrace(array, 25); char ** symbols = backtrace_symbols(array, nSize);
for (int i = 0; i < nSize; i++) { puts(symbols[i]); }
free(symbols);
signal(sig, &sig_handler); }
void h() { kill(0, SIGSEGV); }
void g() { h(); }
void f() { g(); }
int main(int argc, char ** argv) { signal(SIGSEGV, &sig_handler); f(); } Output: 0 a.out 0x00001f2d sig_handler + 35 1 libSystem.B.dylib 0x95f8f09b _sigtramp + 43 2??? 0xffffffff 0x0 + 4294967295 3 a.out 0x00001fb1 h + 26 4 a.out 0x00001fbe g + 11 5 a.out 0x00001fcb f + 11 6 a.out 0x00001ff5 main + 40 7 a.out 0x00001ede start + 54 This doesn't get bonus points for the optional features (except not requiring a GUI), however, it does have the advantage of being very simple, and not requiring any additional libraries or programs. | [
"linux",
"unix",
"stack-trace",
"segmentation-fault"
] | 7 | 7 | 10,001 | 4 | 0 | 2008-08-20T15:50:52.283000 | 2008-08-20T17:36:55.697000 |
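As an aside for interpreted languages: Python ships a comparable facility, faulthandler, which dumps stack traces on fatal signals much like the C handler above. A small sketch, where the log-file path and the SIGUSR1 registration are illustrative choices:

```python
import faulthandler
import signal

# Dump tracebacks of all threads to a log file if the process takes a
# fatal signal (SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL).
crash_log = open("crash.log", "w")  # illustrative path
faulthandler.enable(file=crash_log)

# On Unix you can additionally request a dump on demand, e.g. via SIGUSR1:
if hasattr(signal, "SIGUSR1"):
    faulthandler.register(signal.SIGUSR1, file=crash_log)

# Demonstration: write the current stack trace without crashing.
faulthandler.dump_traceback(file=crash_log)
crash_log.flush()
```

Like the C version, this covers the "dump a backtrace to a text file" requirement but not the bonus items (gathering config files or emailing a crash bundle), which would need a wrapper of your own.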
18,272 | 18,336 | Recommend a tool to manage Extended Properties in SQL server 2005 | Server Management Studio tends to be a bit un-intuitive when it comes to managing Extended Properties, so can anyone recommend a decent tool that improves the situation. One thing I would like to do is to have templates that I can apply to objects, thus standardising the nomenclature and content of the properties applied to objects. | Take a look at Data Dictionary Creator, an open source tool I wrote to make it easier to edit extended properties. It includes the ability to export the information in a variety of formats, as well. http://www.codeplex.com/datadictionary | Recommend a tool to manage Extended Properties in SQL server 2005 Server Management Studio tends to be a bit un-intuitive when it comes to managing Extended Properties, so can anyone recommend a decent tool that improves the situation. One thing I would like to do is to have templates that I can apply to objects, thus standardising the nomenclature and content of the properties applied to objects. | TITLE:
Recommend a tool to manage Extended Properties in SQL server 2005
QUESTION:
Server Management Studio tends to be a bit un-intuitive when it comes to managing Extended Properties, so can anyone recommend a decent tool that improves the situation. One thing I would like to do is to have templates that I can apply to objects, thus standardising the nomenclature and content of the properties applied to objects.
ANSWER:
Take a look at Data Dictionary Creator, an open source tool I wrote to make it easier to edit extended properties. It includes the ability to export the information in a variety of formats, as well. http://www.codeplex.com/datadictionary | [
"sql-server",
"extended-properties"
] | 2 | 5 | 2,248 | 2 | 0 | 2008-08-20T15:58:40.783000 | 2008-08-20T16:36:26.717000 |
18,284 | 18,307 | Best Way to Begin Learning Web Application Design | I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications. I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation. What would you recommend for a Standalone Application Developer wanting to branch out into Web Development? | There is a wide variety of web application languages you could get into. The ones I have most experience with (and therefore will be talking about here) are PHP, eRuby and Ruby on Rails. All of these have good tutorials available on the internet - I'll link to some of them below. Which to choose depends on exactly what you're looking to do. Using PHP and eRuby you have to do most things yourself - whereas Ruby on Rails will do lots of stuff for you (useful, but can also be dangerous if you don't know what you're doing). Ruby on Rails is good for doing database related things - for example the standard CRUD (Create, Read, Update, Delete) application. The standard kind of app Ruby on Rails (often abbreviated to RoR) tutorials teach you is a blog application (Create entries, Read entries, Update entries, Delete entries) or an Address Book Application. It is possible to do many of these sort of applications almost in one line of code - using RoR's 'scaffold' function. PHP and eRuby make you do more of the work yourself - but this can be better in some situations. 
PHP is more well known and used than eRuby, but I like the Ruby language so I tend to like using eRuby. These are both good for doing simple applications (like contact forms on websites) or more complex applications (phpBB - a piece of forum software is written in php). As for which one to choose - I'd have a play with them and see what you think. Try running through the first few bits of a tutorial with each and see how whether you like it or not. Here come the links to various tutorials: PHP PHP 101 PHP Intro from W3Schools eRuby Beginning eRuby - not great, but shows you how you can embed it in HTML Try Ruby in your Browser - helps you learn Ruby which you need to know for eRuby Ruby on Rails Rolling with Ruby on Rails - the latest 'revisited' version for the latest version of RoR Rolling with Ruby on Rails part 2 There are a few tutorials to get you started. Some of these take you through installing the necessary software (webserver and anything else needed - eg. php or ruby) and some don't. A good way to get Apache (webserver), MySQL (db) and PHP installed on windows is to use XAMPP. If you're on linux then apache, mysql and php will be in your package repositories and there may be distro specific guides to setting them up. | Best Way to Begin Learning Web Application Design I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications. I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation. 
What would you recommend for a Standalone Application Developer wanting to branch out into Web Development? | TITLE:
Best Way to Begin Learning Web Application Design
QUESTION:
I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications. I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation. What would you recommend for a Standalone Application Developer wanting to branch out into Web Development?
ANSWER:
There is a wide variety of web application languages you could get into. The ones I have most experience with (and therefore will be talking about here) are PHP, eRuby and Ruby on Rails. All of these have good tutorials available on the internet - I'll link to some of them below. Which to choose depends on exactly what you're looking to do. Using PHP and eRuby you have to do most things yourself - whereas Ruby on Rails will do lots of stuff for you (useful, but can also be dangerous if you don't know what you're doing). Ruby on Rails is good for doing database related things - for example the standard CRUD (Create, Read, Update, Delete) application. The standard kind of app Ruby on Rails (often abbreviated to RoR) tutorials teach you is a blog application (Create entries, Read entries, Update entries, Delete entries) or an Address Book Application. It is possible to do many of these sort of applications almost in one line of code - using RoR's 'scaffold' function. PHP and eRuby make you do more of the work yourself - but this can be better in some situations. PHP is more well known and used than eRuby, but I like the Ruby language so I tend to like using eRuby. These are both good for doing simple applications (like contact forms on websites) or more complex applications (phpBB - a piece of forum software is written in php). As for which one to choose - I'd have a play with them and see what you think. Try running through the first few bits of a tutorial with each and see how whether you like it or not. Here come the links to various tutorials: PHP PHP 101 PHP Intro from W3Schools eRuby Beginning eRuby - not great, but shows you how you can embed it in HTML Try Ruby in your Browser - helps you learn Ruby which you need to know for eRuby Ruby on Rails Rolling with Ruby on Rails - the latest 'revisited' version for the latest version of RoR Rolling with Ruby on Rails part 2 There are a few tutorials to get you started. 
Some of these take you through installing the necessary software (webserver and anything else needed - eg. php or ruby) and some don't. A good way to get Apache (webserver), MySQL (db) and PHP installed on windows is to use XAMPP. If you're on linux then apache, mysql and php will be in your package repositories and there may be distro specific guides to setting them up. | [
"language-agnostic",
"web-applications",
"resources"
] | 10 | 10 | 8,036 | 3 | 0 | 2008-08-20T16:08:43.100000 | 2008-08-20T16:21:13.583000 |
18,290 | 18,344 | How Do You Secure database.yml? | Within Ruby on Rails applications database.yml is a plain text file that stores database credentials. When I deploy my Rails applications I have an after deploy callback in my Capistrano recipe that creates a symbolic link within the application's /config directory to the database.yml file. The file itself is stored in a separate directory that's outside the standard Capistrano /releases directory structure. I chmod 400 the file so it's only readable by the user who created it. Is this sufficient to lock it down? If not, what else do you do? Is anyone encrypting their database.yml files? | You'll also want to make sure that your SSH system is well secured to prevent people from logging in as your Capistrano bot. I'd suggest restricting access to password-protected key pairs. Encrypting the.yml file on the server is useless since you have to give the bot the key, which would be stored... on the same server. Encrypting it on your machine is probably a good idea. Capistrano can decrypt it before sending. | How Do You Secure database.yml? Within Ruby on Rails applications database.yml is a plain text file that stores database credentials. When I deploy my Rails applications I have an after deploy callback in my Capistrano recipe that creates a symbolic link within the application's /config directory to the database.yml file. The file itself is stored in a separate directory that's outside the standard Capistrano /releases directory structure. I chmod 400 the file so it's only readable by the user who created it. Is this sufficient to lock it down? If not, what else do you do? Is anyone encrypting their database.yml files? | TITLE:
How Do You Secure database.yml?
QUESTION:
Within Ruby on Rails applications database.yml is a plain text file that stores database credentials. When I deploy my Rails applications I have an after deploy callback in my Capistrano recipe that creates a symbolic link within the application's /config directory to the database.yml file. The file itself is stored in a separate directory that's outside the standard Capistrano /releases directory structure. I chmod 400 the file so it's only readable by the user who created it. Is this sufficient to lock it down? If not, what else do you do? Is anyone encrypting their database.yml files?
ANSWER:
You'll also want to make sure that your SSH system is well secured to prevent people from logging in as your Capistrano bot. I'd suggest restricting access to password-protected key pairs. Encrypting the .yml file on the server is useless since you have to give the bot the key, which would be stored... on the same server. Encrypting it on your machine is probably a good idea. Capistrano can decrypt it before sending. | [
"ruby-on-rails",
"security",
"deployment"
] | 51 | 12 | 22,956 | 6 | 0 | 2008-08-20T16:12:46.850000 | 2008-08-20T16:41:15.220000 |
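The `chmod 400` step described in the question can also be verified programmatically during deployment. A minimal sketch (in Python, purely as a language-neutral illustration on a POSIX system; the file contents and path are made up):

```python
import os
import stat
import tempfile

# Create a stand-in for the deployed database.yml (contents are hypothetical).
path = os.path.join(tempfile.mkdtemp(), "database.yml")
with open(path, "w") as f:
    f.write("production:\n  password: secret\n")

# Equivalent of `chmod 400`: owner may read, nobody may write or execute.
os.chmod(path, stat.S_IRUSR)

# Confirm that the group/other permission bits are cleared.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o400
```

A check like this can run as part of a deploy task so a mis-permissioned credentials file fails loudly instead of sitting world-readable.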
18,291 | 18,516 | Unit testing in Delphi - how are you doing it? | I'm wondering how the few Delphi Programming users here are doing unit testing, if any? Is there anything that integrates with the Delphi IDE that you've found works well? If not, what tools are you using and do you have or know of example mini-projects that demonstrate how it all works? Update: I forgot to mention that I'm using Borland Developer Studio 2006 Pro, though I occasionally drop into Delphi 7, and of course others may be using other versions. | DUnit is a xUnit type of unit testing framework to be used with win32 Delphi. Since Delphi 2005 DUnit is integrated to a certan point into the IDE. Other DUnit integration tools for the Delphi IDE can be found here. DUnit comes with documentation with examples. | Unit testing in Delphi - how are you doing it? I'm wondering how the few Delphi Programming users here are doing unit testing, if any? Is there anything that integrates with the Delphi IDE that you've found works well? If not, what tools are you using and do you have or know of example mini-projects that demonstrate how it all works? Update: I forgot to mention that I'm using Borland Developer Studio 2006 Pro, though I occasionally drop into Delphi 7, and of course others may be using other versions. | TITLE:
Unit testing in Delphi - how are you doing it?
QUESTION:
I'm wondering how the few Delphi Programming users here are doing unit testing, if any? Is there anything that integrates with the Delphi IDE that you've found works well? If not, what tools are you using and do you have or know of example mini-projects that demonstrate how it all works? Update: I forgot to mention that I'm using Borland Developer Studio 2006 Pro, though I occasionally drop into Delphi 7, and of course others may be using other versions.
ANSWER:
DUnit is an xUnit-style unit testing framework for Win32 Delphi. Since Delphi 2005, DUnit has been integrated to a certain extent into the IDE. Other DUnit integration tools for the Delphi IDE can be found here. DUnit comes with documentation and examples. | [
"delphi",
"unit-testing",
"delphi-7",
"delphi-2006"
] | 60 | 40 | 28,591 | 9 | 0 | 2008-08-20T16:12:50.703000 | 2008-08-20T18:26:09.400000 |
18,292 | 18,370 | What are some good SSH Servers for windows? | Trying to setup an SSH server on Windows Server 2003. What are some good ones? Preferably open source. I plan on using WinSCP as a client so a server which supports the advanced features implemented by that client would be great. | I've been using Bitvise SSH Server and it's really great. From install to administration it does it all through a GUI so you won't be putting together a sshd_config file. Plus if you use their client, Tunnelier, you get some bonus features (like mapping shares, port forwarding setup up server side, etc.) If you don't use their client it will still work with the Open Source SSH clients. It's not Open Source and it costs $39.95, but I think it's worth it. UPDATE 2009-05-21 11:10: The pricing has changed. The current price is $99.95 per install for commercial, but now free for non-commercial/personal use. Here is the current pricing. | What are some good SSH Servers for windows? Trying to setup an SSH server on Windows Server 2003. What are some good ones? Preferably open source. I plan on using WinSCP as a client so a server which supports the advanced features implemented by that client would be great. | TITLE:
What are some good SSH Servers for windows?
QUESTION:
Trying to set up an SSH server on Windows Server 2003. What are some good ones? Preferably open source. I plan on using WinSCP as a client, so a server which supports the advanced features implemented by that client would be great.
ANSWER:
I've been using Bitvise SSH Server and it's really great. From install to administration it does it all through a GUI so you won't be putting together a sshd_config file. Plus if you use their client, Tunnelier, you get some bonus features (like mapping shares, port forwarding setup up server side, etc.) If you don't use their client it will still work with the Open Source SSH clients. It's not Open Source and it costs $39.95, but I think it's worth it. UPDATE 2009-05-21 11:10: The pricing has changed. The current price is $99.95 per install for commercial, but now free for non-commercial/personal use. Here is the current pricing. | [
"windows",
"deployment",
"ssh",
"winscp"
] | 76 | 51 | 109,634 | 7 | 0 | 2008-08-20T16:13:39.937000 | 2008-08-20T16:58:05.320000 |
18,298 | 18,322 | How do I change the title bar icon in Adobe AIR? | I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon. I have been able to change it in the system tray, however. | Does the following help? http://groups.google.com/group/chennai-flex-user-group/browse_thread/thread/cffb9ab56450c28e | How do I change the title bar icon in Adobe AIR? I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon. I have been able to change it in the system tray, however. | TITLE:
How do I change the title bar icon in Adobe AIR?
QUESTION:
I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon. I have been able to change it in the system tray, however.
ANSWER:
Does the following help? http://groups.google.com/group/chennai-flex-user-group/browse_thread/thread/cffb9ab56450c28e | [
"apache-flex",
"air"
] | 1 | 2 | 5,923 | 2 | 0 | 2008-08-20T16:15:55.953000 | 2008-08-20T16:29:50.897000 |
18,305 | 18,315 | Web server farms with IIS ? Basic Infos | Can somebody point me to a resource that explains how to go about having 2+ IIS web server clustered (or Webfarm not sure what its called)? All I need is something basic, an overview how and where to start. Can't seem to find anything... | What you're after is called Load Balancing. http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0baca8b1-73b9-4cd2-ab9c-654d88d05b4f.mspx?mfr=true There's a very good book on the topic: http://www.amazon.co.uk/Windows-Clustering-Balancing-Osborne-Networking/dp/0072226226/ref=sr_1_1?ie=UTF8&s=books&qid=1219249588&sr=8-1 | Web server farms with IIS ? Basic Infos Can somebody point me to a resource that explains how to go about having 2+ IIS web server clustered (or Webfarm not sure what its called)? All I need is something basic, an overview how and where to start. Can't seem to find anything... | TITLE:
Web server farms with IIS ? Basic Infos
QUESTION:
Can somebody point me to a resource that explains how to go about having 2+ IIS web servers clustered (or a web farm, not sure what it's called)? All I need is something basic, an overview of how and where to start. Can't seem to find anything...
ANSWER:
What you're after is called Load Balancing. http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0baca8b1-73b9-4cd2-ab9c-654d88d05b4f.mspx?mfr=true There's a very good book on the topic: http://www.amazon.co.uk/Windows-Clustering-Balancing-Osborne-Networking/dp/0072226226/ref=sr_1_1?ie=UTF8&s=books&qid=1219249588&sr=8-1 | [
"iis",
"webserver",
"web-farm"
] | 3 | 0 | 3,871 | 5 | 0 | 2008-08-20T16:21:05.613000 | 2008-08-20T16:27:04.087000 |
18,348 | 18,352 | I would like some tips for debugging WCF Web Service exceptions | I've created a WCF service and when I browse to the endpoint I get the following fault: a:ActionNotSupported The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this? | I've found SvcTraceViewer.exe to be the most valuable tool when it comes to diagnosing WCF errors. | I would like some tips for debugging WCF Web Service exceptions I've created a WCF service and when I browse to the endpoint I get the following fault: a:ActionNotSupported The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this? | TITLE:
I would like some tips for debugging WCF Web Service exceptions
QUESTION:
I've created a WCF service and when I browse to the endpoint I get the following fault: a:ActionNotSupported The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this?
ANSWER:
I've found SvcTraceViewer.exe to be the most valuable tool when it comes to diagnosing WCF errors. | [
".net",
"wcf",
"web-services"
] | 2 | 9 | 4,268 | 1 | 0 | 2008-08-20T16:44:04.170000 | 2008-08-20T16:47:01.590000 |
18,391 | 18,396 | How would I get started writing my own firewall? | There is previous little on the google on this subject other than people asking this very same question. How would I get started writing my own firewall? I'm looking to write one for the windows platform but I would also be interested in this information for other operating systems too. | For Windows 2000/XP there is an article with examples on CodeProject Developing Firewalls for Windows 2000/XP For Vista I think you will need to use Windows Filtering Platform | How would I get started writing my own firewall? There is previous little on the google on this subject other than people asking this very same question. How would I get started writing my own firewall? I'm looking to write one for the windows platform but I would also be interested in this information for other operating systems too. | TITLE:
How would I get started writing my own firewall?
QUESTION:
There is precious little on Google on this subject other than people asking this very same question. How would I get started writing my own firewall? I'm looking to write one for the Windows platform but I would also be interested in this information for other operating systems too.
ANSWER:
For Windows 2000/XP there is an article with examples on CodeProject: Developing Firewalls for Windows 2000/XP. For Vista I think you will need to use the Windows Filtering Platform. | [
"language-agnostic",
"firewall"
] | 11 | 11 | 9,160 | 3 | 0 | 2008-08-20T17:25:03.197000 | 2008-08-20T17:30:31.470000 |
18,413 | 18,471 | Get a number from a sql string range | I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons. Possible values in the string: '<5%' '5-10%' '10-15%'... '95-100%' I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value. I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations. Any ideas? | Try this, SELECT substring(replace(interest, '<',''), patindex('%[0-9]%',replace(interest, '<','')), patindex('%[^0-9]%',replace(interest, '<',''))-1) FROM table1 Tested at my end and it works, it's only my first try so you might be able to optimise it. | Get a number from a sql string range I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons. Possible values in the string: '<5%' '5-10%' '10-15%'... '95-100%' I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value. I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations. Any ideas? | TITLE:
Get a number from a sql string range
QUESTION:
I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons. Possible values in the string: '<5%' '5-10%' '10-15%'... '95-100%' I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value. I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations. Any ideas?
ANSWER:
Try this: SELECT substring(replace(interest, '<', ''), patindex('%[0-9]%', replace(interest, '<', '')), patindex('%[^0-9]%', replace(interest, '<', '')) - 1) FROM table1 Tested at my end and it works, it's only my first try so you might be able to optimise it. | [
"sql-server"
] | 2 | 5 | 15,422 | 8 | 0 | 2008-08-20T17:39:52.040000 | 2008-08-20T18:09:39.160000 |
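The patindex-based extraction above pulls the first run of digits out of each label. The same logic, sketched in Python for clarity (the function name is invented; this mirrors, not replaces, the T-SQL):

```python
import re

def lower_bound(label):
    """Return the first number in a range label like '<5%' or '5-10%'."""
    m = re.search(r"\d+", label)  # first run of digits, if any
    return int(m.group()) if m else None

for label in ["<5%", "5-10%", "10-15%", "95-100%"]:
    print(label, "->", lower_bound(label))
# <5% -> 5
# 5-10% -> 5
# 10-15% -> 10
# 95-100% -> 95
```

With the lower bound as an integer, the "at least this" comparison becomes a plain numeric test.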
18,418 | 18,435 | Elegant way to remove items from sequence in Python? | When I am writing code in Python, I often need to remove items from a list or other sequence type based on some criteria. I haven't found a solution that is elegant and efficient, as removing items from a list you are currently iterating through is bad. For example, you can't do this: for name in names: if name[-5:] == 'Smith': names.remove(name) I usually end up doing something like this: toremove = [] for name in names: if name[-5:] == 'Smith': toremove.append(name) for name in toremove: names.remove(name) del toremove This is innefficient, fairly ugly and possibly buggy (how does it handle multiple 'John Smith' entries?). Does anyone have a more elegant solution, or at least a more efficient one? How about one that works with dictionaries? | Two easy ways to accomplish just the filtering are: Using filter: names = filter(lambda name: name[-5:]!= "Smith", names) Using list comprehensions: names = [name for name in names if name[-5:]!= "Smith"] Note that both cases keep the values for which the predicate function evaluates to True, so you have to reverse the logic (i.e. you say "keep the people who do not have the last name Smith" instead of "remove the people who have the last name Smith"). Edit Funny... two people individually posted both of the answers I suggested as I was posting mine. | Elegant way to remove items from sequence in Python? When I am writing code in Python, I often need to remove items from a list or other sequence type based on some criteria. I haven't found a solution that is elegant and efficient, as removing items from a list you are currently iterating through is bad. 
For example, you can't do this: for name in names: if name[-5:] == 'Smith': names.remove(name) I usually end up doing something like this: toremove = [] for name in names: if name[-5:] == 'Smith': toremove.append(name) for name in toremove: names.remove(name) del toremove This is inefficient, fairly ugly and possibly buggy (how does it handle multiple 'John Smith' entries?). Does anyone have a more elegant solution, or at least a more efficient one? How about one that works with dictionaries? | TITLE:
Elegant way to remove items from sequence in Python?
QUESTION:
When I am writing code in Python, I often need to remove items from a list or other sequence type based on some criteria. I haven't found a solution that is elegant and efficient, as removing items from a list you are currently iterating through is bad. For example, you can't do this: for name in names: if name[-5:] == 'Smith': names.remove(name) I usually end up doing something like this: toremove = [] for name in names: if name[-5:] == 'Smith': toremove.append(name) for name in toremove: names.remove(name) del toremove This is inefficient, fairly ugly and possibly buggy (how does it handle multiple 'John Smith' entries?). Does anyone have a more elegant solution, or at least a more efficient one? How about one that works with dictionaries?
ANSWER:
Two easy ways to accomplish just the filtering are: Using filter: names = filter(lambda name: name[-5:] != "Smith", names) Using list comprehensions: names = [name for name in names if name[-5:] != "Smith"] Note that both cases keep the values for which the predicate function evaluates to True, so you have to reverse the logic (i.e. you say "keep the people who do not have the last name Smith" instead of "remove the people who have the last name Smith"). Edit Funny... two people individually posted both of the answers I suggested as I was posting mine. | [
"python",
"optimization",
"set",
"series"
] | 57 | 56 | 46,886 | 14 | 0 | 2008-08-20T17:41:24.377000 | 2008-08-20T17:50:47.003000 |
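The filtering approaches in the answer can be run end to end. A sketch using modern Python syntax (the answer predates Python 3, where filter() returns a lazy iterator, so the comprehension form is shown; the names are invented), including the dictionary case the question asks about:

```python
names = ["Jane Smith", "Alicia Keys", "John Smith", "Bo Jackson"]

# Keep entries the predicate accepts; note the inverted logic.
# endswith("Smith") is equivalent to the answer's name[-5:] == "Smith".
names = [name for name in names if not name.endswith("Smith")]
print(names)  # ['Alicia Keys', 'Bo Jackson']

# The same idea extends to dictionaries with a dict comprehension.
ages = {"Jane Smith": 40, "Alicia Keys": 28, "Bo Jackson": 45}
ages = {k: v for k, v in ages.items() if not k.endswith("Smith")}
print(ages)  # {'Alicia Keys': 28, 'Bo Jackson': 45}
```

Building a new filtered collection sidesteps the remove-while-iterating problem entirely, and handles duplicate entries correctly.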
18,421 | 18,456 | Best way to bind Windows Forms properties to ApplicationSettings in C#? | In a desktop application needing some serious re-factoring, I have several chunks of code that look like this: private void LoadSettings() { WindowState = Properties.Settings.Default.WindowState; Location = Properties.Settings.Default.WindowLocation;... }
private void SaveSettings() { Properties.Settings.Default.WindowState = WindowState; Properties.Settings.Default.WindowLocation = Location;... } What's the best way to replace this? Project-imposed constraints: Visual Studio 2005 C# /.NET 2.0 Windows Forms Update For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings". I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues. | If you open your windows form in the designer, look in the properties box. The first item should be "(ApplicationSetting)". Under that is "(PropertyBinding)". That's where you'll find the option to do exactly what you want. | Best way to bind Windows Forms properties to ApplicationSettings in C#? In a desktop application needing some serious re-factoring, I have several chunks of code that look like this: private void LoadSettings() { WindowState = Properties.Settings.Default.WindowState; Location = Properties.Settings.Default.WindowLocation;... }
private void SaveSettings() { Properties.Settings.Default.WindowState = WindowState; Properties.Settings.Default.WindowLocation = Location;... } What's the best way to replace this? Project-imposed constraints: Visual Studio 2005 C# /.NET 2.0 Windows Forms Update For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings". I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues. | TITLE:
Best way to bind Windows Forms properties to ApplicationSettings in C#?
QUESTION:
In a desktop application needing some serious re-factoring, I have several chunks of code that look like this: private void LoadSettings() { WindowState = Properties.Settings.Default.WindowState; Location = Properties.Settings.Default.WindowLocation;... }
private void SaveSettings() { Properties.Settings.Default.WindowState = WindowState; Properties.Settings.Default.WindowLocation = Location;... } What's the best way to replace this? Project-imposed constraints: Visual Studio 2005 C# /.NET 2.0 Windows Forms Update For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings". I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues.
ANSWER:
If you open your windows form in the designer, look in the properties box. The first item should be "(ApplicationSetting)". Under that is "(PropertyBinding)". That's where you'll find the option to do exactly what you want. | [
"c#",
".net"
] | 16 | 12 | 4,807 | 1 | 0 | 2008-08-20T17:42:21.457000 | 2008-08-20T18:02:14.637000 |
18,432 | 18,502 | I can share a SQL Server Reporting Services Data SOURCE... what about a Data SET? | I am developing a Reporting Services solution for a DOD website. Frequently I'll have a report and want to have as a parameter the Service (in addition to other similar mundane, but repetitive parameters like Fiscal Year, Data Effective Date, etc). Basically everything I've seen of SSRS 2005 says it can't be done... but I personally refuse to believe that MS would be so stupid/naive/short-sited to leave something like sharing datasets out of reporting entirely. Is there a clunky (or not so clunky way) to share datasets and still keep the reporting server happy? Will SSRS2008 do this? EDIT: I guess I worded that unclearly. I have a stack of reports. Since I'm in a DoD environment, one common parameter for these reports is Service (army, navy, etc. for those non US users). Since "Business rules" cause me to not be able to use stored procedures; is there a way I can make 1 dataset and link to it from the various reports? Will Reporting 2008 support something like this? I'm getting sick of re-typing the same query in a bunch of reports. | I am not clear if you need to share a dataset, since you have some SQL results that you need to use twice, and don't want to re-compute the same data twice, or you want to do something regarding parameters. So with this "I didn't really understand the question" preface... You cannot share a dataset. Meaning, you can't, lets say, have a dataset returning table A, and in dataset B try to join with A. If this is really what you want to do, you could use temporary tables to store A and then in dataset B use the temporary table. There are best practices around that, but since I am not sure this is what you need, I won't spend time talking about that right now. | I can share a SQL Server Reporting Services Data SOURCE... what about a Data SET? I am developing a Reporting Services solution for a DOD website. 
Frequently I'll have a report and want to have as a parameter the Service (in addition to other similar mundane, but repetitive parameters like Fiscal Year, Data Effective Date, etc). Basically everything I've seen of SSRS 2005 says it can't be done... but I personally refuse to believe that MS would be so stupid/naive/short-sited to leave something like sharing datasets out of reporting entirely. Is there a clunky (or not so clunky way) to share datasets and still keep the reporting server happy? Will SSRS2008 do this? EDIT: I guess I worded that unclearly. I have a stack of reports. Since I'm in a DoD environment, one common parameter for these reports is Service (army, navy, etc. for those non US users). Since "Business rules" cause me to not be able to use stored procedures; is there a way I can make 1 dataset and link to it from the various reports? Will Reporting 2008 support something like this? I'm getting sick of re-typing the same query in a bunch of reports. | TITLE:
I can share a SQL Server Reporting Services Data SOURCE... what about a Data SET?
QUESTION:
I am developing a Reporting Services solution for a DOD website. Frequently I'll have a report and want to have as a parameter the Service (in addition to other similar mundane, but repetitive parameters like Fiscal Year, Data Effective Date, etc). Basically everything I've seen of SSRS 2005 says it can't be done... but I personally refuse to believe that MS would be so stupid/naive/short-sighted to leave something like sharing datasets out of reporting entirely. Is there a clunky (or not so clunky) way to share datasets and still keep the reporting server happy? Will SSRS2008 do this? EDIT: I guess I worded that unclearly. I have a stack of reports. Since I'm in a DoD environment, one common parameter for these reports is Service (army, navy, etc. for those non-US users). Since "Business rules" cause me to not be able to use stored procedures; is there a way I can make 1 dataset and link to it from the various reports? Will Reporting 2008 support something like this? I'm getting sick of re-typing the same query in a bunch of reports.
ANSWER:
I am not clear if you need to share a dataset, since you have some SQL results that you need to use twice, and don't want to re-compute the same data twice, or you want to do something regarding parameters. So with this "I didn't really understand the question" preface... You cannot share a dataset. Meaning, you can't, let's say, have a dataset returning table A, and in dataset B try to join with A. If this is really what you want to do, you could use temporary tables to store A and then in dataset B use the temporary table. There are best practices around that, but since I am not sure this is what you need, I won't spend time talking about that right now. | [
"reporting-services"
] | 3 | 1 | 1,477 | 2 | 0 | 2008-08-20T17:49:18.163000 | 2008-08-20T18:22:11.453000 |
18,448 | 18,484 | Encryption in C# Web-Services | I'm looking for a simple way to encrypt my soap communication in my C# Web-Service. I was looking into WSE 3.0 but it seems Microsoft dropped support for it, and therefore it's not straightforward to use. It seems WCF could've been an option but I prefer not to upgrade from.NET 2.0. Any simple, straightforward encryption method? | I think this can help; last year we used this to compress the webservices and it performed very well, I believe it could be enhanced with encryption classes; Creating Custom SOAP Extensions - Compression Extension | Encryption in C# Web-Services I'm looking for a simple way to encrypt my soap communication in my C# Web-Service. I was looking into WSE 3.0 but it seems Microsoft dropped support for it, and therefore it's not straightforward to use. It seems WCF could've been an option but I prefer not to upgrade from.NET 2.0. Any simple, straightforward encryption method? | TITLE:
Encryption in C# Web-Services
QUESTION:
I'm looking for a simple way to encrypt my soap communication in my C# Web-Service. I was looking into WSE 3.0 but it seems Microsoft dropped support for it, and therefore it's not straightforward to use. It seems WCF could've been an option but I prefer not to upgrade from.NET 2.0. Any simple, straightforward encryption method?
ANSWER:
I think this can help; last year we used this to compress the webservices and it performed very well, I believe it could be enhanced with encryption classes; Creating Custom SOAP Extensions - Compression Extension | [
"c#",
"web-services",
"security",
"encryption"
] | 18 | 9 | 11,583 | 6 | 0 | 2008-08-20T17:55:34.247000 | 2008-08-20T18:15:46.133000 |
18,449 | 18,607 | Dealing with PHP server and MySQL server in different time zones | For those of us who use standard shared hosting packages, such as GoDaddy or Network Solutions, how do you handle datetime conversions when your hosting server (PHP) and MySQL server are in different time zones? Also, does anybody have some best practice advice for determining what time zone a visitor to your site is in and manipulating a datetime variable appropriately? | As of PHP 5.1.0 you can use date_default_timezone_set() function to set the default timezone used by all date/time functions in a script. For MySql (quoted from MySQL Server Time Zone Support page) Before MySQL 4.1.3, the server operates only in the system time zone set at startup. Beginning with MySQL 4.1.3, the server maintains several time zone settings, some of which can be modified at runtime. Of interest to you is per-connection setting of the time zones, which you would use at the beginning of your scripts SET timezone = 'Europe/London'; As for detecting the client timezone setting, you could use a bit of JavaScript to get and save that information to a cookie, and use it on subsequent page reads, to calculate the proper timezone. //Returns the offset (time difference) between Greenwich Mean Time (GMT) //and local time of Date object, in minutes. var offset = new Date().getTimezoneOffset(); document.cookie = 'timezoneOffset=' + escape(offset); Or you could offer users the chioce to set their time zones themselves. | Dealing with PHP server and MySQL server in different time zones For those of us who use standard shared hosting packages, such as GoDaddy or Network Solutions, how do you handle datetime conversions when your hosting server (PHP) and MySQL server are in different time zones? Also, does anybody have some best practice advice for determining what time zone a visitor to your site is in and manipulating a datetime variable appropriately? | TITLE:
Dealing with PHP server and MySQL server in different time zones
QUESTION:
For those of us who use standard shared hosting packages, such as GoDaddy or Network Solutions, how do you handle datetime conversions when your hosting server (PHP) and MySQL server are in different time zones? Also, does anybody have some best practice advice for determining what time zone a visitor to your site is in and manipulating a datetime variable appropriately?
ANSWER:
As of PHP 5.1.0 you can use the date_default_timezone_set() function to set the default timezone used by all date/time functions in a script. For MySQL (quoted from the MySQL Server Time Zone Support page): Before MySQL 4.1.3, the server operates only in the system time zone set at startup. Beginning with MySQL 4.1.3, the server maintains several time zone settings, some of which can be modified at runtime. Of interest to you is the per-connection setting of the time zone, which you would use at the beginning of your scripts: SET time_zone = 'Europe/London'; As for detecting the client timezone setting, you could use a bit of JavaScript to get and save that information to a cookie, and use it on subsequent page reads to calculate the proper timezone. //Returns the offset (time difference) between Greenwich Mean Time (GMT) //and local time of Date object, in minutes. var offset = new Date().getTimezoneOffset(); document.cookie = 'timezoneOffset=' + escape(offset); Or you could offer users the choice to set their time zones themselves.
"php",
"mysql",
"datetime",
"date",
"timezone"
] | 12 | 18 | 7,921 | 5 | 0 | 2008-08-20T17:55:35.323000 | 2008-08-20T19:11:36.513000 |
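The JavaScript snippet in the answer above only captures the visitor's UTC offset; the server-side code still has to apply it. A minimal Python sketch of that arithmetic, purely for illustration since the thread is about PHP (the function name and sample values are invented here). Note that getTimezoneOffset() returns the minutes to add to local time to reach UTC, so the sign is subtracted:

```python
from datetime import datetime, timedelta, timezone

def to_client_local(utc_dt, js_offset_minutes):
    """Shift a UTC datetime to the client's wall-clock time.

    js_offset_minutes is Date.getTimezoneOffset() from the cookie:
    the minutes to ADD to local time to reach UTC, hence the subtraction.
    """
    return utc_dt - timedelta(minutes=js_offset_minutes)

# A visitor in UTC+2 reports an offset of -120 minutes:
server_utc = datetime(2008, 8, 20, 18, 0, tzinfo=timezone.utc)
local_wall = to_client_local(server_utc, -120)  # 20:00 on the visitor's clock
```

The same subtraction works for any offset, including zero and positive (west-of-Greenwich) values.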
18,450 | 93,952 | Is Mono ready for prime time? | Has anyone used Mono, the open source.NET implementation on a large or medium sized project? I'm wondering if it's ready for real world, production environments. Is it stable, fast, compatible,... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really, really compatible enough to just take of and run already written code for Microsoft's runtime? | There are a couple of scenarios to consider: (a) if you are porting an existing application and wondering if Mono is good enough for this task; (b) you are starting to write some new code, and you want to know if Mono is mature enough. For the first case, you can use the Mono Migration Analyzer tool (Moma) to evaluate how far your application is from running on Mono. If the evaluation comes back with flying colors, you should start on your testing and QA and get ready to ship. If your evaluation comes back with a report highlighting features that are missing or differ significantly in their semantics in Mono you will have to evaluate whether the code can be adapted, rewritten or in the worst case whether your application can work with reduced functionality. According to our Moma statistics based on user submissions (this is from memory) about 50% of the applications work out of the box, about 25% require about a week worth of work (refactoring, adapting) another 15% require a serious commitment to redo chunks of your code, and the rest is just not worth bothering porting since they are so incredibly tied to Win32. At that point, either you start from zero, or a business decision will drive the effort to make your code portable, but we are talking months worth of work (at least from the reports we have). If you are starting from scratch, the situation is a lot simpler, because you will only be using the APIs that are present in Mono. 
As long as you stay with the supported stack (which is pretty much.NET 2.0, plus all the core upgrades in 3.5 including LINQ and System.Core, plus any of the Mono cross-platform APIs) you will be fine. Every once in a while you might run into bugs in Mono or limitations, and you might have to work around them, but that is not different than any other system. As for portability: ASP.NET applications are the easier ones to port, as those have little to no dependencies on Win32 and you can even use SQL server or other popular databases (there are plenty of bundled database providers with Mono). Windows.Forms porting is sometimes trickier because developers like to escape the.NET sandbox and P/Invoke their brains out to configure things as useful as the changing the cursor blinking rate expressed as two bezier points encoded in BCD form in a wParam. Or some junk like that. | Is Mono ready for prime time? Has anyone used Mono, the open source.NET implementation on a large or medium sized project? I'm wondering if it's ready for real world, production environments. Is it stable, fast, compatible,... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really, really compatible enough to just take of and run already written code for Microsoft's runtime? | TITLE:
Is Mono ready for prime time?
QUESTION:
Has anyone used Mono, the open source .NET implementation, on a large or medium-sized project? I'm wondering if it's ready for real-world, production environments. Is it stable, fast, compatible... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really, really compatible enough to just take off and run already-written code for Microsoft's runtime?
ANSWER:
There are a couple of scenarios to consider: (a) if you are porting an existing application and wondering if Mono is good enough for this task; (b) you are starting to write some new code, and you want to know if Mono is mature enough. For the first case, you can use the Mono Migration Analyzer tool (Moma) to evaluate how far your application is from running on Mono. If the evaluation comes back with flying colors, you should start on your testing and QA and get ready to ship. If your evaluation comes back with a report highlighting features that are missing or differ significantly in their semantics in Mono, you will have to evaluate whether the code can be adapted, rewritten, or, in the worst case, whether your application can work with reduced functionality. According to our Moma statistics based on user submissions (this is from memory), about 50% of the applications work out of the box, about 25% require about a week's worth of work (refactoring, adapting), another 15% require a serious commitment to redo chunks of your code, and the rest is just not worth bothering to port since they are so incredibly tied to Win32. At that point, either you start from zero, or a business decision will drive the effort to make your code portable, but we are talking months' worth of work (at least from the reports we have). If you are starting from scratch, the situation is a lot simpler, because you will only be using the APIs that are present in Mono. As long as you stay with the supported stack (which is pretty much .NET 2.0, plus all the core upgrades in 3.5 including LINQ and System.Core, plus any of the Mono cross-platform APIs) you will be fine. Every once in a while you might run into bugs in Mono or limitations, and you might have to work around them, but that is not different than any other system.
As for portability: ASP.NET applications are the easier ones to port, as those have little to no dependencies on Win32, and you can even use SQL Server or other popular databases (there are plenty of bundled database providers with Mono). Windows.Forms porting is sometimes trickier because developers like to escape the .NET sandbox and P/Invoke their brains out to configure things as useful as changing the cursor blinking rate expressed as two bezier points encoded in BCD form in a wParam. Or some junk like that.
".net",
"mono"
] | 314 | 402 | 24,846 | 17 | 0 | 2008-08-20T17:57:44.977000 | 2008-09-18T16:05:56.313000 |
18,460 | 18,469 | Best practise to authorize all users for just one page | What is the best way to authorize all users to one single page in a asp.net website. For except the login page and one other page, I deny all users from viewing pages in the website. How do you make this page accessible to all users? | I've been using forms authentication and creating the necessary GenericIdentity and CustomPrincipal objects that allows me to leverage the User.IsInRole type functions you typically only get with Windows authentication. That way in my web.config file, I can do stuff like... | Best practise to authorize all users for just one page What is the best way to authorize all users to one single page in a asp.net website. For except the login page and one other page, I deny all users from viewing pages in the website. How do you make this page accessible to all users? | TITLE:
Best practise to authorize all users for just one page
QUESTION:
What is the best way to authorize all users to one single page in an ASP.NET website? Except for the login page and one other page, I deny all users from viewing pages in the website. How do you make this page accessible to all users?
ANSWER:
I've been using forms authentication and creating the necessary GenericIdentity and CustomPrincipal objects, which allow me to leverage the User.IsInRole type functions you typically only get with Windows authentication. That way, in my web.config file, I can do stuff like...
"asp.net",
"authorization"
] | 1 | 5 | 662 | 2 | 0 | 2008-08-20T18:04:58.233000 | 2008-08-20T18:09:05.400000 |
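The answer above trails off at "I can do stuff like..." before its web.config snippet. What it is gesturing at is almost certainly the standard deny-by-default authorization plus a <location> override; here is a hedged reconstruction of that pattern (the page name Public.aspx is a placeholder, not from the original post):

```xml
<configuration>
  <!-- Deny anonymous users everywhere by default -->
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>

  <!-- ...then open a single page to everyone, authenticated or not -->
  <location path="Public.aspx">
    <system.web>
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
  </location>
</configuration>
```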
18,465 | 18,523 | .Net Parse versus Convert | In.Net you can read a string value into another data type using either.parse or Convert.To. I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances? | The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while.Parse() and.TryParse() are specifically for strings: //o is actually a boxed int object o = 12345;
//unboxes it int castVal = (int) o;
//o is a boxed enum object o = MyEnum.ValueA;
//this will get the underlying int of ValueA int convVal = Convert.ToInt32( o );
//now we have a string string s = "12345";
//this will throw an exception if s can't be parsed int parseVal = int.Parse( s );
//alternatively: int tryVal; if( int.TryParse( s, out tryVal ) ) { //do something with tryVal } If you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker. | .Net Parse versus Convert In.Net you can read a string value into another data type using either.parse or Convert.To. I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances? | TITLE:
.Net Parse versus Convert
QUESTION:
In .NET you can read a string value into another data type using either .Parse or Convert.To. I'm not familiar with the fundamentals of Parse versus Convert, so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances?
ANSWER:
The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while .Parse() and .TryParse() are specifically for strings: //o is actually a boxed int object o = 12345;
//unboxes it int castVal = (int) o;
//o is a boxed enum object o = MyEnum.ValueA;
//this will get the underlying int of ValueA int convVal = Convert.ToInt32( o );
//now we have a string string s = "12345";
//this will throw an exception if s can't be parsed int parseVal = int.Parse( s );
//alternatively: int tryVal; if( int.TryParse( s, out tryVal ) ) { //do something with tryVal } If you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker. | [
".net",
"parsing"
] | 13 | 15 | 1,695 | 5 | 0 | 2008-08-20T18:06:12.880000 | 2008-08-20T18:28:41.983000 |
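The Parse/TryParse distinction in the answer above is .NET-specific, but the underlying pattern is portable: an exception-throwing parse versus a non-throwing guarded variant that reports success. A Python sketch for illustration (the helper name try_parse_int is invented here, not a standard API):

```python
def try_parse_int(s):
    """TryParse-style helper: (True, value) on success, (False, 0) on failure."""
    try:
        return True, int(s)
    except (TypeError, ValueError):
        # int() raises instead of returning a flag, so we catch and report
        return False, 0

ok, val = try_parse_int("12345")   # like int.TryParse: succeeds
bad, _ = try_parse_int("12x45")    # fails without raising
```

As in the .NET case, the guarded form is the one to reach for when bad input is expected rather than exceptional.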
18,505 | 18,756 | Sending a mouse click to a button in the taskbar using C# | In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Making Win32 API calls such as BringWindowToTop and SetForeground window do not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is send simulate a mouse click the window's button on the taskbar which I am hoping will bring the window to the front. Does anyone know how this is possible? | Check out the section "How to steal focus on 2K/XP" at http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there. | Sending a mouse click to a button in the taskbar using C# In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Making Win32 API calls such as BringWindowToTop and SetForeground window do not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is send simulate a mouse click the window's button on the taskbar which I am hoping will bring the window to the front. Does anyone know how this is possible? | TITLE:
Sending a mouse click to a button in the taskbar using C#
QUESTION:
In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Making Win32 API calls such as BringWindowToTop and SetForegroundWindow does not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is simulate a mouse click on the window's button on the taskbar, which I am hoping will bring the window to the front. Does anyone know how this is possible?
ANSWER:
Check out the section "How to steal focus on 2K/XP" at http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there. | [
"c#",
".net",
"windows",
"winapi"
] | 7 | 4 | 5,989 | 4 | 0 | 2008-08-20T18:22:55.320000 | 2008-08-20T20:33:53.900000 |
18,524 | 18,529 | Converting List<Integer> to List<String> | I have a list of integers, List and I'd like to convert all the integer objects into Strings, thus finishing up with a new List. Naturally, I could create a new List and loop through the list calling String.valueOf() for each integer, but I was wondering if there was a better (read: more automatic ) way of doing it? | As far as I know, iterate and instantiate is the only way to do this. Something like (for others potential help, since I'm sure you know how to do this): List oldList =... /* Specify the size of the list up front to prevent resizing. */ List newList = new ArrayList<>(oldList.size()); for (Integer myInt: oldList) { newList.add(String.valueOf(myInt)); } | Converting List<Integer> to List<String> I have a list of integers, List and I'd like to convert all the integer objects into Strings, thus finishing up with a new List. Naturally, I could create a new List and loop through the list calling String.valueOf() for each integer, but I was wondering if there was a better (read: more automatic ) way of doing it? | TITLE:
Converting List<Integer> to List<String>
QUESTION:
I have a list of integers, List<Integer>, and I'd like to convert all the integer objects into Strings, thus finishing up with a new List<String>. Naturally, I could create a new List<String> and loop through the list calling String.valueOf() for each integer, but I was wondering if there was a better (read: more automatic) way of doing it?
ANSWER:
As far as I know, iterate and instantiate is the only way to do this. Something like (for others' potential help, since I'm sure you know how to do this): List<Integer> oldList = ... /* Specify the size of the list up front to prevent resizing. */ List<String> newList = new ArrayList<>(oldList.size()); for (Integer myInt : oldList) { newList.add(String.valueOf(myInt)); }
"java",
"string",
"collections",
"integer"
] | 120 | 83 | 244,643 | 22 | 0 | 2008-08-20T18:29:48.410000 | 2008-08-20T18:34:07.197000 |
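For comparison, the iterate-and-instantiate conversion in the Java answer above is a one-liner in languages with comprehensions. A Python sketch, purely illustrative since the question is about Java:

```python
old_list = [1, 2, 3]

# Explicit loop, mirroring the Java answer above
new_list = []
for n in old_list:
    new_list.append(str(n))

# The "more automatic" version the question asks about:
# a comprehension builds the whole list in one expression.
auto_list = [str(n) for n in old_list]
```

(In Java itself, the equivalent "automatic" form arrived later with the Stream API.)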
18,533 | 18,539 | C#: What Else Do You Use Besides DataSet | I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in.Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in.Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple | Since.NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more. As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck. | C#: What Else Do You Use Besides DataSet I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in.Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in.Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple | TITLE:
C#: What Else Do You Use Besides DataSet
QUESTION:
I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in.Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in.Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple
ANSWER:
Since.NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more. As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck. | [
"c#",
".net",
"sql",
"dataset"
] | 21 | 21 | 10,038 | 13 | 0 | 2008-08-20T18:36:02.310000 | 2008-08-20T18:38:04.177000 |
18,538 | 5,645,413 | shortcut for creating a Map from a List in groovy? | I'd like some sorthand for this: Map rowToMap(row) { def rowMap = [:]; row.columns.each{ rowMap[it.name] = it.val } return rowMap; } given the way the GDK stuff is, I'd expect to be able to do something like: Map rowToMap(row) { row.columns.collectMap{ [it.name,it.val] } } but I haven't seen anything in the docs... am I missing something? or am I just way too lazy? | I've recently came across the need to do exactly that: converting a list into a map. This question was posted before Groovy version 1.7.9 came out, so the method collectEntries didn't exist yet. It works exactly as the collectMap method that was proposed: Map rowToMap(row) { row.columns.collectEntries{[it.name, it.val]} } If for some reason you are stuck with an older Groovy version, the inject method can also be used (as proposed here ). This is a slightly modified version that takes only one expression inside the closure (just for the sake of character saving!): Map rowToMap(row) { row.columns.inject([:]) {map, col -> map << [(col.name): col.val]} } The + operator can also be used instead of the <<. | shortcut for creating a Map from a List in groovy? I'd like some sorthand for this: Map rowToMap(row) { def rowMap = [:]; row.columns.each{ rowMap[it.name] = it.val } return rowMap; } given the way the GDK stuff is, I'd expect to be able to do something like: Map rowToMap(row) { row.columns.collectMap{ [it.name,it.val] } } but I haven't seen anything in the docs... am I missing something? or am I just way too lazy? | TITLE:
shortcut for creating a Map from a List in groovy?
QUESTION:
I'd like some shorthand for this: Map rowToMap(row) { def rowMap = [:]; row.columns.each{ rowMap[it.name] = it.val } return rowMap; } Given the way the GDK stuff is, I'd expect to be able to do something like: Map rowToMap(row) { row.columns.collectMap{ [it.name,it.val] } } but I haven't seen anything in the docs... am I missing something? Or am I just way too lazy?
ANSWER:
I recently came across the need to do exactly that: converting a list into a map. This question was posted before Groovy version 1.7.9 came out, so the method collectEntries didn't exist yet. It works exactly as the collectMap method that was proposed: Map rowToMap(row) { row.columns.collectEntries{[it.name, it.val]} } If for some reason you are stuck with an older Groovy version, the inject method can also be used (as proposed here). This is a slightly modified version that takes only one expression inside the closure (just for the sake of character saving!): Map rowToMap(row) { row.columns.inject([:]) {map, col -> map << [(col.name): col.val]} } The + operator can also be used instead of the <<.
"java",
"collections",
"groovy",
"expandometaclass"
] | 125 | 147 | 87,667 | 8 | 0 | 2008-08-20T18:37:51.370000 | 2011-04-13T06:47:13.700000 |
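Groovy's collectEntries has a direct analogue in Python's dict comprehension: one expression per element, producing a map. A sketch with a stand-in Column class (the class and sample values are invented for illustration; the thread itself is about Groovy):

```python
class Column:
    """Stand-in for the row.columns items in the question."""
    def __init__(self, name, val):
        self.name = name
        self.val = val

def row_to_map(columns):
    # One key/value expression per column, like Groovy's collectEntries
    return {c.name: c.val for c in columns}

row = [Column("id", 7), Column("title", "intro")]
mapped = row_to_map(row)
```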
18,584 | 18,718 | Exact age calculation | Possible Duplicate: How do I calculate someone's age in C#? Maybe this could be silly but and I don't have issues with my age but sometimes it is good to calculate the exact age of someone, I have introduced my birthdate in my profile (01/12/1975) "dd/mm/yyyy" and it calculated 33 and I'm 32 actually still, isn't it better to calculate the exact age? Maybe DateTime dt1 = DateTime.Now; TimeSpan dt2; dt2 = dt1.Subtract(new DateTime(1975, 12, 01)); double year = dt2.TotalDays / 365; The result of year is 32.77405678074 Could this code be OK? | If you were born on January 12th 1975, you would be 33 years old today. If you were born on December 1st 1975, you would be 32 years old today. If you read the note by the birthday field when editing your profile you'll see it says "YYYY/MM/DD", I'm sure it will try to interpret dates of other formats but it looks like it interprets MM/DD/YYYY (US standard dates) in preference to DD/MM/YYYY (European standard dates). The easy fix is to enter the date of your birthday according to the suggested input style. | Exact age calculation Possible Duplicate: How do I calculate someone's age in C#? Maybe this could be silly but and I don't have issues with my age but sometimes it is good to calculate the exact age of someone, I have introduced my birthdate in my profile (01/12/1975) "dd/mm/yyyy" and it calculated 33 and I'm 32 actually still, isn't it better to calculate the exact age? Maybe DateTime dt1 = DateTime.Now; TimeSpan dt2; dt2 = dt1.Subtract(new DateTime(1975, 12, 01)); double year = dt2.TotalDays / 365; The result of year is 32.77405678074 Could this code be OK? | TITLE:
Exact age calculation
QUESTION:
Possible Duplicate: How do I calculate someone's age in C#? Maybe this is silly, and I don't have issues with my age, but sometimes it is good to calculate the exact age of someone. I have entered my birthdate in my profile (01/12/1975, "dd/mm/yyyy") and it calculated 33, but I'm actually still 32. Isn't it better to calculate the exact age? Maybe DateTime dt1 = DateTime.Now; TimeSpan dt2; dt2 = dt1.Subtract(new DateTime(1975, 12, 01)); double year = dt2.TotalDays / 365; The result of year is 32.77405678074 Could this code be OK?
ANSWER:
If you were born on January 12th 1975, you would be 33 years old today. If you were born on December 1st 1975, you would be 32 years old today. If you read the note by the birthday field when editing your profile you'll see it says "YYYY/MM/DD", I'm sure it will try to interpret dates of other formats but it looks like it interprets MM/DD/YYYY (US standard dates) in preference to DD/MM/YYYY (European standard dates). The easy fix is to enter the date of your birthday according to the suggested input style. | [
"c#"
] | 1 | 0 | 7,595 | 4 | 0 | 2008-08-20T18:59:03.377000 | 2008-08-20T20:18:23.547000 |
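The question's TotalDays / 365 approach drifts because of leap years; a calendar age is computed by comparing month/day, not by dividing days. A Python sketch of the exact calculation, applied to the two readings of 01/12/1975 that the answer above contrasts (the question itself is about C#):

```python
from datetime import date

def exact_age(born, today):
    """Whole years between born and today, the way birthdays work."""
    years = today.year - born.year
    if (today.month, today.day) < (born.month, born.day):
        years -= 1  # this year's birthday hasn't happened yet
    return years

on_date = date(2008, 8, 20)
# dd/mm/yyyy reading of 01/12/1975: born 1 December 1975 -> 32
dd_mm = exact_age(date(1975, 12, 1), on_date)
# mm/dd/yyyy reading: born 12 January 1975 -> 33
mm_dd = exact_age(date(1975, 1, 12), on_date)
```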
18,585 | 19,056 | Why can't you bind the Size of a windows form to ApplicationSettings? | Update: Solved, with code I got it working, see my answer below for the code... Original Post As Tundey pointed out in his answer to my last question, you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? This tutorial says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like: public Size RestoreSize { get { if (this.WindowState == FormWindowState.Normal) { return this.Size; } else { return this.RestoreBounds.Size; } } set {... } } But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list). | I finally came up with a Form subclass that solves this, once and for all. To use it: Inherit from RestorableForm instead of Form. Add a binding in (ApplicationSettings) -> (PropertyBinding) to WindowRestoreState. Call Properties.Settings.Default.Save() when the window is about to close. Now window position and state will be remembered between sessions. Following the suggestions from other posters below, I included a function ConstrainToScreen that makes sure the window fits nicely on the available displays when restoring itself. Code // Consider this code public domain. If you want, you can even tell // your boss, attractive women, or the other guy in your cube that // you wrote it. Enjoy!
using System; using System.Windows.Forms; using System.ComponentModel; using System.Drawing;
namespace Utilities { public class RestorableForm: Form, INotifyPropertyChanged { // We invoke this event when the binding needs to be updated. public event PropertyChangedEventHandler PropertyChanged;
// This stores the last window position and state private WindowRestoreStateInfo windowRestoreState;
// Now we define the property that we will bind to our settings. [Browsable(false)] // Don't show it in the Properties list [SettingsBindable(true)] // But do enable binding to settings public WindowRestoreStateInfo WindowRestoreState { get { return windowRestoreState; } set { windowRestoreState = value; if (PropertyChanged != null) { // If anybody's listening, let them know the // binding needs to be updated: PropertyChanged(this, new PropertyChangedEventArgs("WindowRestoreState")); } } }
protected override void OnClosing(CancelEventArgs e) { WindowRestoreState = new WindowRestoreStateInfo(); WindowRestoreState.Bounds = WindowState == FormWindowState.Normal ? Bounds : RestoreBounds; WindowRestoreState.WindowState = WindowState;
base.OnClosing(e); }
protected override void OnLoad(EventArgs e) { base.OnLoad(e);
if (WindowRestoreState != null) { Bounds = ConstrainToScreen(WindowRestoreState.Bounds); WindowState = WindowRestoreState.WindowState; } }
// This helper class stores both position and state. // That way, we only have to set one binding. public class WindowRestoreStateInfo { Rectangle bounds; public Rectangle Bounds { get { return bounds; } set { bounds = value; } }
FormWindowState windowState; public FormWindowState WindowState { get { return windowState; } set { windowState = value; } } }
private Rectangle ConstrainToScreen(Rectangle bounds) { Screen screen = Screen.FromRectangle(WindowRestoreState.Bounds); Rectangle workingArea = screen.WorkingArea;
int width = Math.Min(bounds.Width, workingArea.Width); int height = Math.Min(bounds.Height, workingArea.Height);
// mmm....minimax int left = Math.Min(workingArea.Right - width, Math.Max(bounds.Left, workingArea.Left)); int top = Math.Min(workingArea.Bottom - height, Math.Max(bounds.Top, workingArea.Top));
return new Rectangle(left, top, width, height); } } } Settings Bindings References SettingsBindableAttribute INotifyPropertyChanged | Why can't you bind the Size of a windows form to ApplicationSettings? Update: Solved, with code I got it working, see my answer below for the code... Original Post As Tundey pointed out in his answer to my last question, you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? This tutorial says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like: public Size RestoreSize { get { if (this.WindowState == FormWindowState.Normal) { return this.Size; } else { return this.RestoreBounds.Size; } } set {... } } But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list). | TITLE:
Why can't you bind the Size of a windows form to ApplicationSettings?
QUESTION:
Update: Solved, with code I got it working, see my answer below for the code... Original Post As Tundey pointed out in his answer to my last question, you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? This tutorial says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like: public Size RestoreSize { get { if (this.WindowState == FormWindowState.Normal) { return this.Size; } else { return this.RestoreBounds.Size; } } set {... } } But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list).
ANSWER:
I finally came up with a Form subclass that solves this, once and for all. To use it: Inherit from RestorableForm instead of Form. Add a binding in (ApplicationSettings) -> (PropertyBinding) to WindowRestoreState. Call Properties.Settings.Default.Save() when the window is about to close. Now window position and state will be remembered between sessions. Following the suggestions from other posters below, I included a function ConstrainToScreen that makes sure the window fits nicely on the available displays when restoring itself. Code // Consider this code public domain. If you want, you can even tell // your boss, attractive women, or the other guy in your cube that // you wrote it. Enjoy!
using System; using System.Windows.Forms; using System.ComponentModel; using System.Drawing;
namespace Utilities { public class RestorableForm: Form, INotifyPropertyChanged { // We invoke this event when the binding needs to be updated. public event PropertyChangedEventHandler PropertyChanged;
// This stores the last window position and state private WindowRestoreStateInfo windowRestoreState;
// Now we define the property that we will bind to our settings. [Browsable(false)] // Don't show it in the Properties list [SettingsBindable(true)] // But do enable binding to settings public WindowRestoreStateInfo WindowRestoreState { get { return windowRestoreState; } set { windowRestoreState = value; if (PropertyChanged != null) { // If anybody's listening, let them know the // binding needs to be updated: PropertyChanged(this, new PropertyChangedEventArgs("WindowRestoreState")); } } }
protected override void OnClosing(CancelEventArgs e) { WindowRestoreState = new WindowRestoreStateInfo(); WindowRestoreState.Bounds = WindowState == FormWindowState.Normal ? Bounds : RestoreBounds; WindowRestoreState.WindowState = WindowState;
base.OnClosing(e); }
protected override void OnLoad(EventArgs e) { base.OnLoad(e);
if (WindowRestoreState != null) { Bounds = ConstrainToScreen(WindowRestoreState.Bounds); WindowState = WindowRestoreState.WindowState; } }
// This helper class stores both position and state. // That way, we only have to set one binding. public class WindowRestoreStateInfo { Rectangle bounds; public Rectangle Bounds { get { return bounds; } set { bounds = value; } }
FormWindowState windowState; public FormWindowState WindowState { get { return windowState; } set { windowState = value; } } }
private Rectangle ConstrainToScreen(Rectangle bounds) { Screen screen = Screen.FromRectangle(WindowRestoreState.Bounds); Rectangle workingArea = screen.WorkingArea;
int width = Math.Min(bounds.Width, workingArea.Width); int height = Math.Min(bounds.Height, workingArea.Height);
// mmm....minimax int left = Math.Min(workingArea.Right - width, Math.Max(bounds.Left, workingArea.Left)); int top = Math.Min(workingArea.Bottom - height, Math.Max(bounds.Top, workingArea.Top));
return new Rectangle(left, top, width, height); } } } Settings Bindings References SettingsBindableAttribute INotifyPropertyChanged | [
"c#",
"visual-studio",
"data-binding",
".net-2.0"
] | 4 | 12 | 5,084 | 5 | 0 | 2008-08-20T18:59:35.187000 | 2008-08-20T23:06:42.563000 |
18,601 | 18,771 | Best practice for integrating TDD with web application development? | Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests. The cause of this pain point generally revolves around the hassle of writing UI automation mid-development. How do you or your organization integrate best TDD practices with web application development? | Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. All other stuff (logic and persistence layers) should be kept in separate classes and then you can test those individually. To test the GUI some people like Selenium. Others complain that it is a pain to set up. | Best practice for integrating TDD with web application development? Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests. The cause of this pain point generally revolves around the hassle of writing UI automation mid-development. How do you or your organization integrate best TDD practices with web application development? | TITLE:
Best practice for integrating TDD with web application development?
QUESTION:
Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests. The cause of this pain point generally revolves around the hassle of writing UI automation mid-development. How do you or your organization integrate best TDD practices with web application development?
ANSWER:
Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. All other stuff (logic and persistence layers) should be kept in separate classes and then you can test those individually. To test the GUI some people like Selenium. Others complain that it is a pain to set up. | [
"unit-testing",
"tdd"
] | 26 | 17 | 6,388 | 7 | 0 | 2008-08-20T19:07:51.593000 | 2008-08-20T20:41:31.337000 |
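The layering advice in the answer above can be sketched in code. Everything here is invented for illustration (class, page and control names), and the code-behind assumes the usual designer-generated partial class that declares WeightBox and QuoteLabel:

```csharp
using System;

// Plain class holding the business rule: no dependency on System.Web,
// so a unit test can instantiate and exercise it directly.
public class ShippingCalculator
{
    public decimal QuoteFor(decimal weightKg)
    {
        if (weightKg <= 0m)
            throw new ArgumentOutOfRangeException("weightKg");
        return 4.99m + weightKg * 1.25m; // made-up rule: base rate plus per-kg charge
    }
}

// The WebForm code-behind shrinks to presentation glue; nothing here
// needs UI automation, because the rule above is tested separately.
public partial class QuotePage : System.Web.UI.Page
{
    protected void QuoteButton_Click(object sender, EventArgs e)
    {
        var calculator = new ShippingCalculator();
        QuoteLabel.Text = calculator.QuoteFor(decimal.Parse(WeightBox.Text)).ToString("C");
    }
}
```

An ordinary NUnit or MSTest fixture can then assert against ShippingCalculator directly, with no web server running.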
18,606 | 18,647 | Accessing an Exchange Server without Outlook | Is there a method of accessing an Exchange server that does not have IMAP or POP3 enabled without Outlook? It does not appear that Outlook Express supports Exchange (only IMAP and POP3 ). | The only way I can think of is if the Exchange server has Outlook Web Access (OWA) turned on. You can test this by trying the server name in your browser like so: http://server/exchange. If you mean programmatically then the recommended way is to use WebDAV (which is what OWA uses). @Jon I think the method you linked to uses IMAP. Edit: @Pat: SimpleMAPI is the protocol that allows applications such as Word etc to talk to your email client, not your email client to the server - ExtendedMAPI is needed for that, which Thunderbird doesn't support. | Accessing an Exchange Server without Outlook Is there a method of accessing an Exchange server that does not have IMAP or POP3 enabled without Outlook? It does not appear that Outlook Express supports Exchange (only IMAP and POP3 ). | TITLE:
Accessing an Exchange Server without Outlook
QUESTION:
Is there a method of accessing an Exchange server that does not have IMAP or POP3 enabled without Outlook? It does not appear that Outlook Express supports Exchange (only IMAP and POP3 ).
ANSWER:
The only way I can think of is if the Exchange server has Outlook Web Access (OWA) turned on. You can test this by trying the server name in your browser like so: http://server/exchange. If you mean programmatically then the recommended way is to use WebDAV (which is what OWA uses). @Jon I think the method you linked to uses IMAP. Edit: @Pat: SimpleMAPI is the protocol that allows applications such as Word etc to talk to your email client, not your email client to the server - ExtendedMAPI is needed for that, which Thunderbird doesn't support. | [
"email",
"outlook",
"exchange-server",
"email-client"
] | 9 | 5 | 17,972 | 7 | 0 | 2008-08-20T19:11:32.723000 | 2008-08-20T19:32:05.420000 |
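The programmatic route the answer points at (WebDAV through the same virtual directory OWA uses) can be sketched roughly as below. The server name, mailbox and credentials are all made up, and the exact request body Exchange expects varies; treat this as a shape, not a recipe:

```csharp
using System;
using System.IO;
using System.Net;

class WebDavSketch
{
    static void Main()
    {
        // Hypothetical OWA/WebDAV URL for one user's inbox.
        var request = (HttpWebRequest)WebRequest.Create("http://server/exchange/jdoe/Inbox/");
        request.Method = "SEARCH"; // WebDAV verb understood by Exchange 2000/2003
        request.Credentials = new NetworkCredential("jdoe", "password", "DOMAIN");
        request.ContentType = "text/xml";

        // Minimal SQL-style WebDAV SEARCH body (illustrative only).
        string body =
            "<?xml version=\"1.0\"?>" +
            "<g:searchrequest xmlns:g=\"DAV:\">" +
            "<g:sql>SELECT \"DAV:displayname\" FROM \"http://server/exchange/jdoe/Inbox/\"</g:sql>" +
            "</g:searchrequest>";
        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(body);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw WebDAV XML comes back
        }
    }
}
```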
18,608 | 18,616 | Is it a bad idea to expose inheritance hierarchy in namespace structure? | I've got a group of inter-related classes that are all overridden together to create a particular implementation. I'm wondering if it is a good idea to enclose the interrelated subclasses in a namespace. For example purposes, consider the following namespaces and classes: namespace Protocol { public abstract class Message { } public abstract class Driver { } } namespace Protocol.Tcp { public class TcpMessage: Message { } public class TcpDriver: Driver { } } namespace Protocol.Ftp { public class FtpMessage: Message { } public class FtpDriver: Driver { } } What is the best way to structure the namespaces? It seems unavoidable to expose the inheritance in the namespace since the base classes don't really belong in either the Protocol.Tcp namespace or the Protocol.Ftp namespace. | I think you are perhaps worrying too much! Does it make sense logically? Do you know where to find your code within the namespaces? I would much rather see a codebase like the above with a small number of classes, relevant to the name with a hierarchy, than one large namespace where everything is interrelated. Remember, namespacing is there for precisely this, to organise your codebase logically. What you have seems logical :) EDIT: As an example: using System.Data; using System.Data.Sql; ;) | Is it a bad idea to expose inheritance hierarchy in namespace structure? I've got a group of inter-related classes that are all overridden together to create a particular implementation. I'm wondering if it is a good idea to enclose the interrelated subclasses in a namespace. 
For example purposes, consider the following namespaces and classes: namespace Protocol { public abstract class Message { } public abstract class Driver { } } namespace Protocol.Tcp { public class TcpMessage: Message { } public class TcpDriver: Driver { } } namespace Protocol.Ftp { public class FtpMessage: Message { } public class FtpDriver: Driver { } } What is the best way to structure the namespaces? It seems unavoidable to expose the inheritance in the namespace since the base classes don't really belong in either the Protocol.Tcp namespace or the Protocol.Ftp namespace. | TITLE:
Is it a bad idea to expose inheritance hierarchy in namespace structure?
QUESTION:
I've got a group of inter-related classes that are all overridden together to create a particular implementation. I'm wondering if it is a good idea to enclose the interrelated subclasses in a namespace. For example purposes, consider the following namespaces and classes: namespace Protocol { public abstract class Message { } public abstract class Driver { } } namespace Protocol.Tcp { public class TcpMessage: Message { } public class TcpDriver: Driver { } } namespace Protocol.Ftp { public class FtpMessage: Message { } public class FtpDriver: Driver { } } What is the best way to structure the namespaces? It seems unavoidable to expose the inheritance in the namespace since the base classes don't really belong in either the Protocol.Tcp namespace or the Protocol.Ftp namespace.
ANSWER:
I think you are perhaps worrying too much! Does it make sense logically? Do you know where to find your code within the namespaces? I would much rather see a codebase like the above with a small number of classes, relevant to the name with a hierarchy, than one large namespace where everything is interrelated. Remember, namespacing is there for precisely this, to organise your codebase logically. What you have seems logical :) EDIT: As an example: using System.Data; using System.Data.Sql; ;) | [
"c#",
"oop",
"inheritance",
"naming",
"convention"
] | 4 | 6 | 865 | 3 | 0 | 2008-08-20T19:12:05.077000 | 2008-08-20T19:16:07.907000 |
18,617 | 18,623 | How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? | How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? | Several connectors are configured, and each connector has an optional "address" attribute where you can set the IP address. Edit tomcat/conf/server.xml. Specify a bind address for that connector: | How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? | TITLE:
How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses?
QUESTION:
How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses?
ANSWER:
Several connectors are configured, and each connector has an optional "address" attribute where you can set the IP address. Edit tomcat/conf/server.xml. Specify a bind address for that connector: | [
"tomcat"
] | 84 | 147 | 218,635 | 3 | 0 | 2008-08-20T19:16:20.810000 | 2008-08-20T19:19:46.370000 |
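The connector snippet appears to have been stripped out of the answer above when the HTML was flattened. For a Tomcat 5.x/6.x server.xml it would have looked roughly like this (port and connectionTimeout are illustrative; address is the attribute that restricts binding to localhost):

```xml
<!-- In tomcat/conf/server.xml: bind the HTTP connector to 127.0.0.1 only -->
<Connector port="8080"
           address="127.0.0.1"
           connectionTimeout="20000" />
```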
18,632 | 18,635 | How to monitor a text file in realtime | For debugging purposes in a somewhat closed system, I have to output text to a file. Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time? | Tail for Win32. Apache Chainsaw - used this with log4net logs, may require file to be in a certain format | How to monitor a text file in realtime For debugging purposes in a somewhat closed system, I have to output text to a file. Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time? | TITLE:
How to monitor a text file in realtime
QUESTION:
For debugging purposes in a somewhat closed system, I have to output text to a file. Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time?
ANSWER:
Tail for Win32. Apache Chainsaw - used this with log4net logs, may require file to be in a certain format | [
"monitoring",
"text-files"
] | 78 | 36 | 115,150 | 23 | 0 | 2008-08-20T19:25:07.153000 | 2008-08-20T19:26:25.013000 |
18,671 | 6,000,355 | Quick easy way to migrate SQLite3 to MySQL? | Anyone know a quick easy way to migrate a SQLite3 database to MySQL? | Here is a list of converters (not updated since 2011): https://www2.sqlite.org/cvstrac/wiki?p=ConverterTools (or snapshot at archive.org ) An alternative method that would work nicely but is rarely mentioned is: use an ORM class that abstracts specific database differences away for you. e.g. you get these in PHP ( RedBean ), Python (Django's ORM layer, Storm, SqlAlchemy ), Ruby on Rails ( ActiveRecord ), Cocoa ( CoreData ) i.e. you could do this: Load data from source database using the ORM class. Store data in memory or serialize to disk. Store data into destination database using the ORM class. | Quick easy way to migrate SQLite3 to MySQL? Anyone know a quick easy way to migrate a SQLite3 database to MySQL? | TITLE:
Quick easy way to migrate SQLite3 to MySQL?
QUESTION:
Anyone know a quick easy way to migrate a SQLite3 database to MySQL?
ANSWER:
Here is a list of converters (not updated since 2011): https://www2.sqlite.org/cvstrac/wiki?p=ConverterTools (or snapshot at archive.org ) An alternative method that would work nicely but is rarely mentioned is: use an ORM class that abstracts specific database differences away for you. e.g. you get these in PHP ( RedBean ), Python (Django's ORM layer, Storm, SqlAlchemy ), Ruby on Rails ( ActiveRecord ), Cocoa ( CoreData ) i.e. you could do this: Load data from source database using the ORM class. Store data in memory or serialize to disk. Store data into destination database using the ORM class. | [
"mysql",
"sqlite",
"migration"
] | 261 | 68 | 364,678 | 17 | 0 | 2008-08-20T19:49:13.337000 | 2011-05-14T07:11:45.367000 |
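The ORM-free core of the answer's load-then-store idea can be sketched with just the Python standard library, purely as an illustration: read every row out of SQLite and emit equivalent INSERT statements that could be piped into MySQL. This is a simplified sketch; a real migration must also translate the schema, handle BLOBs, and use MySQL's own escaping:

```python
import sqlite3

def sqlite_table_to_inserts(conn, table):
    """Yield one INSERT statement per row of `table`.

    Simplified sketch: assumes the target table already exists in MySQL
    and that values are NULL, numbers or text.
    """
    cur = conn.execute("SELECT * FROM %s" % table)
    cols = ", ".join(d[0] for d in cur.description)
    for row in cur:
        vals = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"
            for v in row)
        yield "INSERT INTO %s (%s) VALUES (%s);" % (table, cols, vals)

# Tiny demo against a throwaway in-memory database:
demo = sqlite3.connect(":memory:")
demo.execute("CREATE TABLE users (id INTEGER, name TEXT)")
demo.execute("INSERT INTO users VALUES (?, ?)", (1, "O'Brien"))
statements = list(sqlite_table_to_inserts(demo, "users"))
```

Against the demo table this yields INSERT INTO users (id, name) VALUES (1, 'O''Brien');.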
18,675 | 21,743 | I/O permission settings using .net installer | I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some active directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem? | Writing to the Program Files folder is a really bad idea, you should assume that this location is "read only" once installed. Saving user settings in Program Files causes problems if more than two people use the computer at once (eg. Terminal Services): whose settings should be saved? Do you want other users to know 'your' settings? What happens if your program writes settings to the file as user A, but user B can't edit the file? User B may have access to the directory, but not read/delete the preference file as this is owned by user A. Legacy win9x programs often write to the program files folder, Windows Vista actually does some neat trickery to let these programs work. When your program writes a file, vista actually puts it someplace else that is only accessible to that user. The same is done for registry writes to HKLM (or so I discovered after hours of debugging...) and Server 2008 does the same thing. If you need to save user settings, the best alternative would be to save the settings to the Application Data folder (Environment Variable %APPDATA%). If the settings are system wide, then the administrative user should set these after install or on first run and they should not be able to be overwritten by limited users. So to answer your question - YES there is a way to do what you've asked. But it's a bad idea, it's insecure and will probably cause problems in the long run. 
| I/O permission settings using .net installer I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some active directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem? | TITLE:
I/O permission settings using .net installer
QUESTION:
I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some active directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem?
ANSWER:
Writing to the Program Files folder is a really bad idea, you should assume that this location is "read only" once installed. Saving user settings in Program Files causes problems if more than two people use the computer at once (eg. Terminal Services): whose settings should be saved? Do you want other users to know 'your' settings? What happens if your program writes settings to the file as user A, but user B can't edit the file? User B may have access to the directory, but not read/delete the preference file as this is owned by user A. Legacy win9x programs often write to the program files folder, Windows Vista actually does some neat trickery to let these programs work. When your program writes a file, vista actually puts it someplace else that is only accessible to that user. The same is done for registry writes to HKLM (or so I discovered after hours of debugging...) and Server 2008 does the same thing. If you need to save user settings, the best alternative would be to save the settings to the Application Data folder (Environment Variable %APPDATA%). If the settings are system wide, then the administrative user should set these after install or on first run and they should not be able to be overwritten by limited users. So to answer your question - YES there is a way to do what you've asked. But it's a bad idea, it's insecure and will probably cause problems in the long run. | [
".net",
"installation",
"active-directory",
"io"
] | 1 | 2 | 836 | 2 | 0 | 2008-08-20T19:52:41.127000 | 2008-08-22T03:47:43.130000 |
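The %APPDATA% suggestion from the answer looks like this in .NET; the vendor, application and file names are invented:

```csharp
using System;
using System.IO;

class UserSettings
{
    static void Main()
    {
        // Per-user and writable even for limited accounts, unlike Program Files.
        string dir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            Path.Combine("MyCompany", "MyApp")); // invented names
        Directory.CreateDirectory(dir);          // no-op if it already exists
        string file = Path.Combine(dir, "settings.xml");
        File.WriteAllText(file, "<settings />");
        Console.WriteLine("Settings written to " + file);
    }
}
```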
18,676 | 18,680 | Random integer in VB.NET | I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number. How would I do that? | To get a random integer value between 1 and N (inclusive) you can use the following. CInt(Math.Floor(Rnd() * n)) + 1 | Random integer in VB.NET I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number. How would I do that? | TITLE:
Random integer in VB.NET
QUESTION:
I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number. How would I do that?
ANSWER:
To get a random integer value between 1 and N (inclusive) you can use the following. CInt(Math.Floor(Rnd() * n)) + 1 | [
"vb.net",
"random"
] | 65 | 67 | 382,058 | 12 | 0 | 2008-08-20T19:54:00.407000 | 2008-08-20T19:55:53.693000 |
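For completeness, a sketch of the System.Random alternative, which sidesteps Rnd()'s seeding and rounding pitfalls; note that the upper bound of Next is exclusive, hence the n + 1:

```vbnet
' Assumed to run where n (the inclusive upper bound) is already defined.
' Reuse one Random instance; constructing many in quick succession can
' seed them identically from the system clock.
Dim rng As New Random()
Dim value As Integer = rng.Next(1, n + 1) ' uniform over 1..n inclusive
```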
18,685 | 18,693 | How to display "12 minutes ago" etc in a PHP webpage? | Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page? | Here is the php code for the same: function time_since($since) { $chunks = array( array(60 * 60 * 24 * 365, 'year'), array(60 * 60 * 24 * 30, 'month'), array(60 * 60 * 24 * 7, 'week'), array(60 * 60 * 24, 'day'), array(60 * 60, 'hour'), array(60, 'minute'), array(1, 'second') );
for ($i = 0, $j = count($chunks); $i < $j; $i++) { $seconds = $chunks[$i][0]; $name = $chunks[$i][1]; if (($count = floor($since / $seconds)) != 0) { break; } }
$print = ($count == 1) ? '1 ' . $name : "$count {$name}s"; return $print; } The function takes the number of seconds as input and outputs text such as: 10 seconds 1 minute etc | How to display "12 minutes ago" etc in a PHP webpage? Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page? | TITLE:
How to display "12 minutes ago" etc in a PHP webpage?
QUESTION:
Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page?
ANSWER:
Here is the php code for the same: function time_since($since) { $chunks = array( array(60 * 60 * 24 * 365, 'year'), array(60 * 60 * 24 * 30, 'month'), array(60 * 60 * 24 * 7, 'week'), array(60 * 60 * 24, 'day'), array(60 * 60, 'hour'), array(60, 'minute'), array(1, 'second') );
for ($i = 0, $j = count($chunks); $i < $j; $i++) { $seconds = $chunks[$i][0]; $name = $chunks[$i][1]; if (($count = floor($since / $seconds)) != 0) { break; } }
$print = ($count == 1) ? '1 ' . $name : "$count {$name}s"; return $print; } The function takes the number of seconds as input and outputs text such as: 10 seconds 1 minute etc | [
"php"
] | 34 | 73 | 32,140 | 3 | 0 | 2008-08-20T19:59:57.490000 | 2008-08-20T20:02:48.933000 |
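Typical usage of the time_since() function from the answer above, assuming it is in scope and that $created holds the Unix timestamp stored when the row was created (720 is just a stand-in age):

```php
<?php
// Hypothetical row age: pretend the record was made 12 minutes ago.
$created = time() - 720;
echo time_since(time() - $created); // prints "12 minutes"
```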
18,696 | 18,714 | GUI Automation testing - Window handle questions | Our company is currently writing a GUI automation testing tool for compact framework applications. We have initially searched many tools but none of them was right for us. By using the tool you can record test-cases and group them together to test-suites. For every test-suite there is generated an application, which launches the application-under-test and simulates user-input. In general the tool works fine, but as we are using window handles for simulating user input, you can't do very many things. For example it is impossible for us to get the name of a control (we just get the caption). Another problem using window handles is checking for a change. At the moment we simulate a click on a control and depending on the result we know if the application has gone to the next step. Is there any other (simpler) way for doing such things (for example the message queue or anything else)? | If the Automated GUI testing tool has knowledge about the framework the application is written in it could use that information to make better or more advanced scripts. TestComplete for example knows about Borland's VCL and WinForms. If you test applications built using Windows Presentation Foundation, it has advanced support for this built in. | GUI Automation testing - Window handle questions Our company is currently writing a GUI automation testing tool for compact framework applications. We have initially searched many tools but none of them was right for us. By using the tool you can record test-cases and group them together to test-suites. For every test-suite there is generated an application, which launches the application-under-test and simulates user-input. In general the tool works fine, but as we are using window handles for simulating user input, you can't do very many things. For example it is impossible for us to get the name of a control (we just get the caption). 
Another problem using window handles is checking for a change. At the moment we simulate a click on a control and depending on the result we know if the application has gone to the next step. Is there any other (simpler) way for doing such things (for example the message queue or anything else)? | TITLE:
GUI Automation testing - Window handle questions
QUESTION:
Our company is currently writing a GUI automation testing tool for compact framework applications. We have initially searched many tools but none of them was right for us. By using the tool you can record test-cases and group them together to test-suites. For every test-suite there is generated an application, which launches the application-under-test and simulates user-input. In general the tool works fine, but as we are using window handles for simulating user input, you can't do very many things. For example it is impossible for us to get the name of a control (we just get the caption). Another problem using window handles is checking for a change. At the moment we simulate a click on a control and depending on the result we know if the application has gone to the next step. Is there any other (simpler) way for doing such things (for example the message queue or anything else)?
ANSWER:
If the Automated GUI testing tool has knowledge about the framework the application is written in it could use that information to make better or more advanced scripts. TestComplete for example knows about Borland's VCL and WinForms. If you test applications built using Windows Presentation Foundation, it has advanced support for this built in. | [
"ui-automation",
"window-handles",
"gui-testing"
] | 3 | 1 | 2,523 | 5 | 0 | 2008-08-20T20:03:28.840000 | 2008-08-20T20:17:00.303000 |
18,702 | 18,750 | Webservice alive forever | I often use webservice this way public void CallWebservice()
{
mywebservice web = new mywebservice();
web.call();
} but sometimes I do this private mywebservice web;
public Constructor() {
web = new mywebservice();
}
public void CallWebservice() {
web.call();
} I like the second approach very much, but sometimes it times out and I have to restart the application; the first one, I think, brings overhead and is not very efficient. In fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why). I found an article ( Web Service Woes (A light at the end of the tunnel?) ) that gets past the time out by turning the KeepAlive property to false in the overridden function GetWebRequest, here is the code: Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest
Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest)
webRequest.KeepAlive = False
Return webRequest
End Function The question is, is it possible to extend forever the webservice time out and finally, how do you implement your webservices to handle this issue? | The classes generated by Visual Studio for webservices are just proxies with little state so creating them is pretty cheap. I wouldn't worry about memory consumption for them. If what you are looking for is a way to call the webmethod in one line you can simply do this: new mywebservice().call() Cheers | Webservice alive forever I often use webservice this way public void CallWebservice()
{
mywebservice web = new mywebservice();
web.call();
} but sometimes I do this private mywebservice web;
public Constructor() {
web = new mywebservice();
}
public void CallWebservice() {
web.call();
} I like the second approach very much, but sometimes it times out and I have to restart the application; the first one, I think, brings overhead and is not very efficient. In fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why). I found an article ( Web Service Woes (A light at the end of the tunnel?) ) that gets past the time out by turning the KeepAlive property to false in the overridden function GetWebRequest, here is the code: Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest
Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest)
webRequest.KeepAlive = False
Return webRequest
End Function The question is, is it possible to extend forever the webservice time out and finally, how do you implement your webservices to handle this issue? | TITLE:
Webservice alive forever
QUESTION:
I often use webservice this way public void CallWebservice()
{
mywebservice web = new mywebservice();
web.call();
} but sometimes I do this private mywebservice web;
public Constructor() {
web = new mywebservice();
}
public void CallWebservice() {
web.call();
} I like the second approach very much, but sometimes it times out and I have to restart the application; the first one, I think, brings overhead and is not very efficient. In fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why). I found an article ( Web Service Woes (A light at the end of the tunnel?) ) that gets past the time out by turning the KeepAlive property to false in the overridden function GetWebRequest, here is the code: Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest
Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest)
webRequest.KeepAlive = False
Return webRequest
End Function The question is, is it possible to extend forever the webservice time out and finally, how do you implement your webservices to handle this issue?
ANSWER:
The classes generated by Visual Studio for webservices are just proxies with little state so creating them is pretty cheap. I wouldn't worry about memory consumption for them. If what you are looking for is a way to call the webmethod in one line you can simply do this: new mywebservice().call() Cheers | [
"web-services"
] | 1 | 1 | 296 | 1 | 0 | 2008-08-20T20:07:34.057000 | 2008-08-20T20:32:34.777000 |
18,705 | 19,071 | How to create Projects/Tasks for Project Server 2003 via C#? | I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items? | Developing against Project Server 2003 isn't the friendliest experience around, but I have worked a little bit with the PDS (Project Data Services) which is SOAP based http://msdn.microsoft.com/en-us/library/aa204408(office.11).aspx It contains .NET samples there | How to create Projects/Tasks for Project Server 2003 via C#? I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items? | TITLE:
How to create Projects/Tasks for Project Server 2003 via C#?
QUESTION:
I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items?
ANSWER:
Developing against Project Server 2003 isn't the friendliest experience around, but I have worked a little bit with the PDS (Project Data Services) which is SOAP based http://msdn.microsoft.com/en-us/library/aa204408(office.11).aspx It contains .NET samples there | [
"c#",
"project-server"
] | 0 | 1 | 333 | 2 | 0 | 2008-08-20T20:09:45.393000 | 2008-08-20T23:18:43.237000 |
18,717 | 18,723 | Are foreign keys really necessary in a database design? | As far as I know, foreign keys (FK) are used to aid the programmer to manipulate data in the correct way. Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys? Are there any other uses for foreign keys? Am I missing something here? | Foreign keys help enforce referential integrity at the data level. They also improve performance because they're normally indexed by default. | Are foreign keys really necessary in a database design? As far as I know, foreign keys (FK) are used to aid the programmer to manipulate data in the correct way. Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys? Are there any other uses for foreign keys? Am I missing something here? | TITLE:
Are foreign keys really necessary in a database design?
QUESTION:
As far as I know, foreign keys (FK) are used to aid the programmer to manipulate data in the correct way. Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys? Are there any other uses for foreign keys? Am I missing something here?
ANSWER:
Foreign keys help enforce referential integrity at the data level. They can also improve join performance when the key columns are indexed (note that many database engines do not index foreign keys automatically).
"database",
"oracle",
"foreign-keys"
] | 138 | 111 | 48,013 | 24 | 0 | 2008-08-20T20:18:08.757000 | 2008-08-20T20:19:33.997000 |
18,754 | 18,831 | How can I convert Markdown documents to HTML en masse? | I'm writing some documentation in Markdown, and creating a separate file for each section of the doc. I would like to be able to convert all the files to HTML in one go, but I can't find anyone else who has tried the same thing. I'm on a Mac, so I would think a simple bash script should be able to handle it, but I've never done anything in bash and haven't had any luck. It seems like it should be simple to write something so I could just run: markdown-batch./*.markdown Any ideas? | This is how you would do it in Bash. for i in./*.markdown; do perl markdown.pl --html4tags $i > ${i%.*}.html; done; Of course, you need the Markdown script. | How can I convert Markdown documents to HTML en masse? I'm writing some documentation in Markdown, and creating a separate file for each section of the doc. I would like to be able to convert all the files to HTML in one go, but I can't find anyone else who has tried the same thing. I'm on a Mac, so I would think a simple bash script should be able to handle it, but I've never done anything in bash and haven't had any luck. It seems like it should be simple to write something so I could just run: markdown-batch./*.markdown Any ideas? | TITLE:
How can I convert Markdown documents to HTML en masse?
QUESTION:
I'm writing some documentation in Markdown, and creating a separate file for each section of the doc. I would like to be able to convert all the files to HTML in one go, but I can't find anyone else who has tried the same thing. I'm on a Mac, so I would think a simple bash script should be able to handle it, but I've never done anything in bash and haven't had any luck. It seems like it should be simple to write something so I could just run: markdown-batch ./*.markdown Any ideas?
ANSWER:
This is how you would do it in Bash. for i in ./*.markdown; do perl markdown.pl --html4tags "$i" > "${i%.*}.html"; done; Of course, you need the Markdown script.
"bash",
"markdown"
] | 49 | 25 | 39,179 | 6 | 0 | 2008-08-20T20:33:28.423000 | 2008-08-20T21:03:20.397000 |
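The one-liner above can be fleshed out into a small runnable script. Two details matter in practice: the loop variable must be quoted so filenames with spaces survive, and `${f%.markdown}.html` rewrites the extension. In this sketch the converter command is a stand-in (`cat`) so it runs anywhere; with a real Markdown implementation installed you would swap in something like `perl markdown.pl --html4tags` (path assumed, adjust to your install).

```shell
#!/bin/sh
# Batch-convert every .markdown file in a directory to .html.
# CONVERT is a stand-in converter so this sketch is runnable as-is;
# replace it with your real one, e.g. CONVERT="perl markdown.pl --html4tags".
set -eu
dir=/tmp/md_batch_demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"
printf '# Hello\n' > 'a.markdown'
printf '# World\n' > 'b c.markdown'      # filename with a space

CONVERT=cat                              # stand-in converter (assumption)
for f in ./*.markdown; do
  # Quote "$f" so names with spaces survive; ${f%.markdown} strips the extension.
  $CONVERT "$f" > "${f%.markdown}.html"
done
ls ./*.html
```

Swapping `CONVERT` for a real Markdown processor is the only change needed to use this for real conversions.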
18,764 | 18,906 | What's the difference between a Table Scan and a Clustered Index Scan? | Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?: declare @temp table( SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table( RowID int not null identity(1,1) primary key, SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp | In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map. A clustered table, however, has it's data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM. If your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100 ), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering. And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan. So: For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows. For a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table. For a query that is not satisified by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal. For INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent. Microsoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). 
Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable): INSERT performance: clustered index wins by about 3% due to the second write needed for a heap. UPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap. DELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap. single SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap. range SELECT performance: clustered index wins by about 29% due to the random ordering for a heap. concurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index. | What's the difference between a Table Scan and a Clustered Index Scan? Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?: declare @temp table( SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table( RowID int not null identity(1,1) primary key, SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp | TITLE:
What's the difference between a Table Scan and a Clustered Index Scan?
QUESTION:
Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?: declare @temp table( SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table( RowID int not null identity(1,1) primary key, SomeColumn varchar(50) )
insert into @temp select 'SomeVal'
select * from @temp
ANSWER:
In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map. A clustered table, however, has its data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM. If your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100 ), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering. And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan. So: For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows. For a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table. For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal. For INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent. Microsoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). 
Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable): INSERT performance: clustered index wins by about 3% due to the second write needed for a heap. UPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap. DELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap. single SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap. range SELECT performance: clustered index wins by about 29% due to the random ordering for a heap. concurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index. | [
"sql",
"sql-server",
"indexing"
] | 76 | 88 | 45,204 | 3 | 0 | 2008-08-20T20:38:50.283000 | 2008-08-20T21:32:33.960000 |
18,765 | 18,786 | Importing C++ enumerations into C# | I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app. I don't want to just re declare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together. All that being said my question is this: Is there a way for me to taken an enumeration declared like so: typedef enum { eDEVICEINT_ERR_FATAL = 0x10001... } eDeviceIntErrCodes; and use it in a C# program like so: eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL | Check out the PInvoke Interop Assistant tool http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120. Its a useful tool for generating PInvoke signatures for native methods. If I feed it your enum it generates this code. There is a command line version of the tool included so you could potentially build an automated process to keep the C# definition of the enum up to date whenever the C++ version changes. public enum eDeviceIntErrCodes { /// eDEVICEINT_ERR_FATAL -> 0x10001 eDEVICEINT_ERR_FATAL = 65537, } | Importing C++ enumerations into C# I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app. I don't want to just re declare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together. All that being said my question is this: Is there a way for me to taken an enumeration declared like so: typedef enum { eDEVICEINT_ERR_FATAL = 0x10001... } eDeviceIntErrCodes; and use it in a C# program like so: eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL | TITLE:
Importing C++ enumerations into C#
QUESTION:
I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app. I don't want to just re-declare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together. All that being said my question is this: Is there a way for me to take an enumeration declared like so: typedef enum { eDEVICEINT_ERR_FATAL = 0x10001... } eDeviceIntErrCodes; and use it in a C# program like so: eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL
ANSWER:
Check out the PInvoke Interop Assistant tool http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120. It's a useful tool for generating PInvoke signatures for native methods. If I feed it your enum it generates this code. There is a command line version of the tool included so you could potentially build an automated process to keep the C# definition of the enum up to date whenever the C++ version changes. public enum eDeviceIntErrCodes { /// eDEVICEINT_ERR_FATAL -> 0x10001 eDEVICEINT_ERR_FATAL = 65537, }
"c#",
"c++"
] | 17 | 12 | 16,400 | 5 | 0 | 2008-08-20T20:39:36.833000 | 2008-08-20T20:45:28.380000 |
18,772 | 18,796 | How to create a new instance of Sql Server 2005 | I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or is there a better solution exists? | Assuming you are a member of the Windows Admininstrator group, you can put the server in Single User mode, you could try this - http://blogs.msdn.com/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx | How to create a new instance of Sql Server 2005 I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or is there a better solution exists? | TITLE:
How to create a new instance of Sql Server 2005
QUESTION:
I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or is there a better solution exists?
ANSWER:
Assuming you are a member of the Windows Administrators group, you can put the server in single-user mode and reset the sa password; try this: http://blogs.msdn.com/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx
"sql-server",
"sql-server-2005"
] | 0 | 2 | 11,073 | 3 | 0 | 2008-08-20T20:42:04.643000 | 2008-08-20T20:49:25.703000 |
18,787 | 31,726 | Asp.net MVC User Control ViewData | When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following: return View("Message", new { DisplayMessage = "This is a test" }); and on my Message control I had this: <%= ViewData["DisplayMessage"] %> I would think this would render the DisplayMessage correctly, however, null is being returned. After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control: public class MessageControl: ViewUserControl and now this call works: return View("Message", new MessageData() { DisplayMessage = "This is a test" }); and can be displayed like this: <%= ViewData.Model.DisplayMessage %> Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"? | The method ViewData.Eval("DisplayMessage") should work for you. | Asp.net MVC User Control ViewData When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following: return View("Message", new { DisplayMessage = "This is a test" }); and on my Message control I had this: <%= ViewData["DisplayMessage"] %> I would think this would render the DisplayMessage correctly, however, null is being returned. 
After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control: public class MessageControl: ViewUserControl and now this call works: return View("Message", new MessageData() { DisplayMessage = "This is a test" }); and can be displayed like this: <%= ViewData.Model.DisplayMessage %> Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"? | TITLE:
Asp.net MVC User Control ViewData
QUESTION:
When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following: return View("Message", new { DisplayMessage = "This is a test" }); and on my Message control I had this: <%= ViewData["DisplayMessage"] %> I would think this would render the DisplayMessage correctly, however, null is being returned. After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control: public class MessageControl: ViewUserControl and now this call works: return View("Message", new MessageData() { DisplayMessage = "This is a test" }); and can be displayed like this: <%= ViewData.Model.DisplayMessage %> Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"?
ANSWER:
The method ViewData.Eval("DisplayMessage") should work for you. | [
"asp.net",
"asp.net-mvc",
"viewdata",
"viewusercontrol"
] | 7 | 6 | 5,252 | 2 | 0 | 2008-08-20T20:46:23.573000 | 2008-08-28T06:41:08.170000 |
18,803 | 18,839 | Is UML practical? | In college I've had numerous design and UML oriented courses, and I recognize that UML can be used to benefit a software project, especially use-case mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful? Edit: My experience is limited to small, under 10 developer projects. Edit: Many good answers, and though not the most verbose, I belive the one selected is the most balanced. | In a sufficiently complex system there are some places where some UML is considered useful. The useful diagrams for a system, vary by applicability. But the most widely used ones are: Class Diagrams State Diagrams Activity Diagrams Sequence Diagrams There are many enterprises who swear by them and many who outright reject them as an utter waste of time and effort. It's best not to go overboard and think what's best for the project you are on and pick the stuff that is applicable and makes sense. | Is UML practical? In college I've had numerous design and UML oriented courses, and I recognize that UML can be used to benefit a software project, especially use-case mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful? Edit: My experience is limited to small, under 10 developer projects. Edit: Many good answers, and though not the most verbose, I belive the one selected is the most balanced. | TITLE:
Is UML practical?
QUESTION:
In college I've had numerous design and UML oriented courses, and I recognize that UML can be used to benefit a software project, especially use-case mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful? Edit: My experience is limited to small, under 10 developer projects. Edit: Many good answers, and though not the most verbose, I believe the one selected is the most balanced.
ANSWER:
In a sufficiently complex system there are some places where some UML is considered useful. The useful diagrams for a system vary by applicability. But the most widely used ones are: Class Diagrams State Diagrams Activity Diagrams Sequence Diagrams There are many enterprises who swear by them and many who outright reject them as an utter waste of time and effort. It's best not to go overboard: think about what's best for the project you are on and pick the stuff that is applicable and makes sense.
"uml",
"class-design",
"diagram"
] | 127 | 50 | 34,868 | 31 | 0 | 2008-08-20T20:53:05.040000 | 2008-08-20T21:04:43.560000 |
18,836 | 18,837 | Why doesn't **find** find anything? | I'm looking for shell scripts files installed on my system, but find doesn't work: $ find /usr -name *.sh But I know there are a ton of scripts out there. For instance: $ ls /usr/local/lib/*.sh /usr/local/lib/tclConfig.sh /usr/local/lib/tkConfig.sh Why doesn't find work? | Try quoting the wildcard: $ find /usr -name \*.sh or: $ find /usr -name '*.sh' If you happen to have a file that matches *.sh in the current working directory, the wildcard will be expanded before find sees it. If you happen to have a file named tkConfig.sh in your working directory, the find command would expand to: $ find /usr -name tkConfig.sh which would only find files named tkConfig.sh. If you had more than one file that matches *.sh, you'd get a syntax error from find: $ cd /usr/local/lib $ find /usr -name *.sh find: bad option tkConfig.sh find: path-list predicate-list Again, the reason is that the wildcard expands to both files: $ find /usr -name tclConfig.sh tkConfig.sh Quoting the wildcard prevents it from being prematurely expanded. Another possibility is that /usr or one of its subdirectories is a symlink. find doesn't normally follow links, so you might need the -follow option: $ find /usr -follow -name '*.sh' | Why doesn't **find** find anything? I'm looking for shell scripts files installed on my system, but find doesn't work: $ find /usr -name *.sh But I know there are a ton of scripts out there. For instance: $ ls /usr/local/lib/*.sh /usr/local/lib/tclConfig.sh /usr/local/lib/tkConfig.sh Why doesn't find work? | TITLE:
Why doesn't **find** find anything?
QUESTION:
I'm looking for shell scripts files installed on my system, but find doesn't work: $ find /usr -name *.sh But I know there are a ton of scripts out there. For instance: $ ls /usr/local/lib/*.sh /usr/local/lib/tclConfig.sh /usr/local/lib/tkConfig.sh Why doesn't find work?
ANSWER:
Try quoting the wildcard: $ find /usr -name \*.sh or: $ find /usr -name '*.sh' If you happen to have a file that matches *.sh in the current working directory, the wildcard will be expanded before find sees it. If you happen to have a file named tkConfig.sh in your working directory, the find command would expand to: $ find /usr -name tkConfig.sh which would only find files named tkConfig.sh. If you had more than one file that matches *.sh, you'd get a syntax error from find: $ cd /usr/local/lib $ find /usr -name *.sh find: bad option tkConfig.sh find: path-list predicate-list Again, the reason is that the wildcard expands to both files: $ find /usr -name tclConfig.sh tkConfig.sh Quoting the wildcard prevents it from being prematurely expanded. Another possibility is that /usr or one of its subdirectories is a symlink. find doesn't normally follow links, so you might need the -follow option: $ find /usr -follow -name '*.sh' | [
"bash",
"unix",
"shell",
"ksh"
] | 17 | 54 | 3,664 | 3 | 0 | 2008-08-20T21:03:54.543000 | 2008-08-20T21:04:01.003000 |
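The quoting pitfall described in the answer can be reproduced end to end. This is a minimal sketch (the /tmp paths and filenames are illustrative) that counts matches with and without quoting the wildcard:

```shell
#!/bin/sh
# Why `find -name *.sh` misbehaves: the SHELL expands the unquoted glob
# against the current directory before find ever runs.
set -eu
demo=/tmp/find_quote_demo
rm -rf "$demo" && mkdir -p "$demo/sub" && cd "$demo"
touch local.sh sub/one.sh sub/two.sh

# Unquoted: the shell rewrites the command to `find . -name local.sh`,
# so only files named exactly local.sh are found anywhere in the tree.
unquoted=$(find . -name *.sh | wc -l | tr -d ' ')

# Quoted: find itself receives the literal pattern '*.sh' and matches all three.
quoted=$(find . -name '*.sh' | wc -l | tr -d ' ')

echo "unquoted=$unquoted quoted=$quoted"   # unquoted=1 quoted=3
```

The same mechanism explains the "bad option" error in the answer: with two matching files in the working directory, the expanded glob hands find an extra filename it cannot parse.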
18,861 | 18,884 | What did I do wrong here? [Javascript Regex] | So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work. | Your regex /^([a-zA-Z0-9_])+/ Looks for Start of string(check), followed by 1 or more letters, numbers, or underscore (check) And then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore If you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string. /^([a-zA-Z0-9_])+$/ Secondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do' | What did I do wrong here? [Javascript Regex] So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work. | TITLE:
What did I do wrong here? [Javascript Regex]
QUESTION:
So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work.
ANSWER:
Your regex /^([a-zA-Z0-9_])+/ Looks for Start of string (check), followed by 1 or more letters, numbers, or underscore (check) And then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore. If you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string. /^([a-zA-Z0-9_])+$/ Secondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do'
"javascript",
"regex"
] | 4 | 14 | 684 | 8 | 0 | 2008-08-20T21:11:34.853000 | 2008-08-20T21:22:48.820000 |
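The anchoring behavior the answer describes is not specific to JavaScript; grep's extended regexes (a close analogue, though not identical to JS syntax) show the same prefix-match pitfall. The sample inputs below are hypothetical display names:

```shell
#!/bin/sh
# Without a trailing $, the pattern only needs to match a PREFIX of the
# input, so 'user 1!' slips through; anchoring both ends rejects it.
set -eu
matches() { printf '%s' "$1" | grep -Eq "$2" && echo yes || echo no; }

echo "unanchored ^[A-Za-z0-9_]+ :"
matches 'user_1'  '^[A-Za-z0-9_]+'    # yes
matches 'user 1!' '^[A-Za-z0-9_]+'    # yes: 'user' matches, the rest is ignored

echo "anchored ^[A-Za-z0-9_]+$ :"
matches 'user_1'  '^[A-Za-z0-9_]+$'   # yes
matches 'user 1!' '^[A-Za-z0-9_]+$'   # no: the space and ! are now rejected
```

In the original JavaScript, the equivalent fix is exactly the /^([a-zA-Z0-9_])+$/ form given in the answer.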
18,889 | 18,902 | SQL 2008 Dialect Support for NHibernate | Is anyone working on or know if there exists a SQL 2k8 Dialect for NHibernate? | This was asked on the NHibernate Google Group recently - apparently the SQL 2005 dialect should work against SQL 2008. Definitive location of the dialects - source control is here, binary downloads are here. | SQL 2008 Dialect Support for NHibernate Is anyone working on or know if there exists a SQL 2k8 Dialect for NHibernate? | TITLE:
SQL 2008 Dialect Support for NHibernate
QUESTION:
Is anyone working on or know if there exists a SQL 2k8 Dialect for NHibernate?
ANSWER:
This was asked on the NHibernate Google Group recently - apparently the SQL 2005 dialect should work against SQL 2008. Definitive location of the dialects - source control is here, binary downloads are here. | [
"nhibernate",
"sql-server-2008"
] | 7 | 8 | 15,680 | 2 | 0 | 2008-08-20T21:25:07.940000 | 2008-08-20T21:29:21.690000 |
18,920 | 18,924 | Can you set, or where is, the local document root? | When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it? | It depends on what browser you use, but Internet Explorer, for example, would take you to the root directory of your harddrive (eg. C:/ ), while browsers such as Firefox does nothing. | Can you set, or where is, the local document root? When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it? | TITLE:
Can you set, or where is, the local document root?
QUESTION:
When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it?
ANSWER:
It depends on what browser you use, but Internet Explorer, for example, would take you to the root directory of your hard drive (e.g. C:/), while browsers such as Firefox do nothing.
"html",
"css",
"directory"
] | 10 | 2 | 13,453 | 6 | 0 | 2008-08-20T21:43:23.607000 | 2008-08-20T21:46:13.583000 |
18,932 | 18,949 | How can I remove duplicate rows? | I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows). The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field. MyTable RowID int not null identity(1,1) primary key, Col1 varchar(20) not null, Col2 varchar(2048) not null, Col3 tinyint not null How can I do this? | Assuming no nulls, you GROUP BY the unique columns, and SELECT the MIN (or MAX) RowId as the row to keep. Then, just delete everything that didn't have a row id: DELETE FROM MyTable LEFT OUTER JOIN ( SELECT MIN(RowId) as RowId, Col1, Col2, Col3 FROM MyTable GROUP BY Col1, Col2, Col3 ) as KeepRows ON MyTable.RowId = KeepRows.RowId WHERE KeepRows.RowId IS NULL In case you have a GUID instead of an integer, you can replace MIN(RowId) with CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn))) | How can I remove duplicate rows? I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows). The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field. MyTable RowID int not null identity(1,1) primary key, Col1 varchar(20) not null, Col2 varchar(2048) not null, Col3 tinyint not null How can I do this? | TITLE:
How can I remove duplicate rows?
QUESTION:
I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows). The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field. MyTable RowID int not null identity(1,1) primary key, Col1 varchar(20) not null, Col2 varchar(2048) not null, Col3 tinyint not null How can I do this?
ANSWER:
Assuming no nulls, you GROUP BY the unique columns, and SELECT the MIN (or MAX) RowId as the row to keep. Then, just delete everything that didn't have a row id: DELETE MyTable FROM MyTable LEFT OUTER JOIN ( SELECT MIN(RowId) as RowId, Col1, Col2, Col3 FROM MyTable GROUP BY Col1, Col2, Col3 ) as KeepRows ON MyTable.RowId = KeepRows.RowId WHERE KeepRows.RowId IS NULL In case you have a GUID instead of an integer, you can replace MIN(RowId) with CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn)))
"sql-server",
"t-sql",
"duplicates"
] | 1,375 | 1,193 | 1,358,534 | 43 | 0 | 2008-08-20T21:51:29.780000 | 2008-08-20T22:00:00.667000 |
18,952 | 18,977 | What is your reporting tool of choice? | Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blow BI platform. To get the job done what tools, widgets, platforms has the group used with success, frustration and failure? | For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive. For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users. I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters. Don't accept when a user says they want "ad-hoc" reports without specifying what goals and targets their looking for. They are just fishing and they need to actually spend as much time on THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution. I've spent too much time building the "the system that can report everything" and for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quick as possible and then spend time "systemising" the most important reports. | What is your reporting tool of choice? Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blow BI platform. To get the job done what tools, widgets, platforms has the group used with success, frustration and failure? | TITLE:
What is your reporting tool of choice?
QUESTION:
Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blown BI platform. To get the job done what tools, widgets, platforms has the group used with success, frustration and failure?
ANSWER:
For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive. For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users. I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters. Don't accept when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They are just fishing and they need to actually spend as much time on THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution. I've spent too much time building "the system that can report everything" only for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quickly as possible and then spend time "systemising" the most important reports. | [
"sql",
"reporting",
"business-intelligence"
] | 19 | 29 | 24,688 | 23 | 0 | 2008-08-20T22:01:16.513000 | 2008-08-20T22:21:45.777000 |
18,955 | 20,255 | Disabling multi-line fields in MS Access | Is there a way to disable entering multi-line entries in a Text Box (i.e., I'd like to stop my users from doing ctrl-enter to get a newline)? | I was able to do it on using KeyPress event. Here's the code example: Private Sub SingleLineTextBox_ KeyPress(ByRef KeyAscii As Integer) If KeyAscii = 10 _ or KeyAscii = 13 Then '10 -> Ctrl-Enter. AKA ^J or ctrl-j '13 -> Enter. AKA ^M or ctrl-m KeyAscii = 0 'clear the the KeyPress End If End Sub | Disabling multi-line fields in MS Access Is there a way to disable entering multi-line entries in a Text Box (i.e., I'd like to stop my users from doing ctrl-enter to get a newline)? | TITLE:
Disabling multi-line fields in MS Access
QUESTION:
Is there a way to disable entering multi-line entries in a Text Box (i.e., I'd like to stop my users from doing ctrl-enter to get a newline)?
ANSWER:
I was able to do it using the KeyPress event. Here's the code example: Private Sub SingleLineTextBox_KeyPress(ByRef KeyAscii As Integer) If KeyAscii = 10 _ Or KeyAscii = 13 Then '10 -> Ctrl-Enter. AKA ^J or ctrl-j '13 -> Enter. AKA ^M or ctrl-m KeyAscii = 0 'clear the KeyPress End If End Sub | [
"ms-access"
] | 3 | 5 | 7,522 | 7 | 0 | 2008-08-20T22:02:38.880000 | 2008-08-21T15:32:53.083000 |
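For readers outside Access/VBA, the idea in the answer — zero out key codes 10 (Ctrl-Enter / LF) and 13 (Enter / CR) — can be sketched in Python. The function names below are hypothetical; this only mirrors the filtering rule, not the Access event model.

```python
# Key codes the VBA handler swallows: 10 = Ctrl-Enter (LF), 13 = Enter (CR).
BLOCKED_KEY_CODES = {10, 13}

def filter_keypress(key_ascii: int) -> int:
    """Return 0 (swallow the key) for newline codes, else pass it through."""
    return 0 if key_ascii in BLOCKED_KEY_CODES else key_ascii

def sanitize_field(text: str) -> str:
    """Apply the same rule to pasted text, keeping a field single-line."""
    return "".join(ch for ch in text if ord(ch) not in BLOCKED_KEY_CODES)

print(filter_keypress(13))           # 0 -> Enter is swallowed
print(sanitize_field("one\r\ntwo"))  # "onetwo"
```

Note the paste case: the VBA KeyPress handler alone does not catch pasted newlines, which is why a sanitize step on the final value is a useful companion.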
18,959 | 18,975 | How to match linux device path to windows drive name? | I'm writing an application that on some stage performs low-level disk operations in Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly: store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition). read a list of partitions from /proc/partitions. Get only those partitions that has major number for SCSI or IDE hard drives and minor number that identifies them as real partitions and not the whole disks. Try to mount each of them with either ntfs or vfat file systems. Check whether the mounted partition contains the information stored by Windows app. Upon finding the required information written by the Windows app make the actual match. For each partition found in /proc/partitions acquire drive serial number (via HDIO_GET_IDENTITY syscall), number of blocks (from /proc/partitions) and drive offset (/sys/blocks/drive_path/partition_name/start), compare this to the Windows information and if this matches - store a Windows drive letter along with a linux device name. There are a couple of problems in this scheme: This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare. linux device major number is compared only with IDE or SCSI devices. This would probably fail, i.e. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only known subset of possible devices seems to be rather bad idea. 
looks like HDIO_GET_IDENTITY works only on IDE and SATA drives. /sys/block hack may not work on other than IDE or SATA drives. Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app? P.S. The language of the app is C++. I can't change this. | Partitions have UUIDs associated with them. I don't know how to find these in Windows but in linux you can find the UUID for each partition with: sudo vol_id -u device (e.g. /dev/sda1) If there is an equivilent function in Windows you could simply store the UUIDs for whatever partition they pick then iterate through all known partitions in linux and match the UUIDs. Edit: This may be a linux-only thing, and it may speficially be the volid util that generates these from something (instead of reading off meta-data for the drive). Having said that, there is nothing stopping you getting the source for volid and checking out what it does. | How to match linux device path to windows drive name? I'm writing an application that on some stage performs low-level disk operations in Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly: store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition). read a list of partitions from /proc/partitions. Get only those partitions that has major number for SCSI or IDE hard drives and minor number that identifies them as real partitions and not the whole disks. Try to mount each of them with either ntfs or vfat file systems. 
Check whether the mounted partition contains the information stored by Windows app. Upon finding the required information written by the Windows app make the actual match. For each partition found in /proc/partitions acquire drive serial number (via HDIO_GET_IDENTITY syscall), number of blocks (from /proc/partitions) and drive offset (/sys/blocks/drive_path/partition_name/start), compare this to the Windows information and if this matches - store a Windows drive letter along with a linux device name. There are a couple of problems in this scheme: This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare. linux device major number is compared only with IDE or SCSI devices. This would probably fail, i.e. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only known subset of possible devices seems to be rather bad idea. looks like HDIO_GET_IDENTITY works only on IDE and SATA drives. /sys/block hack may not work on other than IDE or SATA drives. Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app? P.S. The language of the app is C++. I can't change this. | TITLE:
How to match linux device path to windows drive name?
QUESTION:
I'm writing an application that at some stage performs low-level disk operations in a Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly: store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition). read a list of partitions from /proc/partitions. Get only those partitions that have a major number for SCSI or IDE hard drives and a minor number that identifies them as real partitions and not the whole disks. Try to mount each of them with either ntfs or vfat file systems. Check whether the mounted partition contains the information stored by Windows app. Upon finding the required information written by the Windows app make the actual match. For each partition found in /proc/partitions acquire drive serial number (via HDIO_GET_IDENTITY syscall), number of blocks (from /proc/partitions) and drive offset (/sys/blocks/drive_path/partition_name/start), compare this to the Windows information and if this matches - store a Windows drive letter along with a linux device name. There are a couple of problems in this scheme: This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare. linux device major number is compared only with IDE or SCSI devices. This would probably fail, e.g. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only a known subset of possible devices seems to be a rather bad idea. looks like HDIO_GET_IDENTITY works only on IDE and SATA drives. /sys/block hack may not work on other than IDE or SATA drives.
Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app? P.S. The language of the app is C++. I can't change this.
ANSWER:
Partitions have UUIDs associated with them. I don't know how to find these in Windows but in linux you can find the UUID for each partition with: sudo vol_id -u device (e.g. /dev/sda1) If there is an equivalent function in Windows you could simply store the UUIDs for whatever partition they pick then iterate through all known partitions in linux and match the UUIDs. Edit: This may be a linux-only thing, and it may specifically be the vol_id util that generates these from something (instead of reading off meta-data for the drive). Having said that, there is nothing stopping you getting the source for vol_id and checking out what it does. | [
"c++",
"linux",
"drives"
] | 3 | 1 | 5,418 | 6 | 0 | 2008-08-20T22:06:11.120000 | 2008-08-20T22:20:05.093000 |
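The UUID-matching approach in the answer reduces to a simple join between two mappings. Everything below is mock data — a real implementation would read the /dev/disk/by-uuid/ symlinks (or vol_id/blkid output) on the Linux side and the stored Windows records on the other — so it only illustrates the matching step itself.

```python
# Hypothetical data: the Windows half records drive letter -> volume UUID,
# the Linux half records device node -> UUID. The match is a dict join.
windows_side = {"C:": "1234-ABCD", "D:": "5678-EF01"}
linux_side = {"/dev/sda1": "1234-ABCD",
              "/dev/sda2": "5678-EF01",
              "/dev/sdb1": "9999-0000"}

# Invert the Linux mapping so we can look devices up by UUID.
uuid_to_device = {uuid: dev for dev, uuid in linux_side.items()}

matches = {letter: uuid_to_device[uuid]
           for letter, uuid in windows_side.items()
           if uuid in uuid_to_device}
print(matches)  # {'C:': '/dev/sda1', 'D:': '/dev/sda2'}
```

Because UUIDs are stable across reboots and bus types, this sidesteps the major/minor-number filtering and the HDIO_GET_IDENTITY limitations the question describes.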
18,984 | 19,179 | What do you think of developing for the command line first? | What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods? eg. W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00" loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database. Eventually you add in a screen for this: ============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================ When you click submit, it calls the same AddTask function. Is this considered: a good way to code just for the newbies horrendous!. Addendum: I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves? Why not just call the same executable in different ways: "todo /G" when you want the full-on graphical interface "todo /I" for an interactive prompt within todo.exe (scripting, etc) plain old "todo " when you just want to do one thing and be done with it. Addendum 2: It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something." Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library. Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience. | I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS. Also, with a library you can easily do unit tests for the functionality. But even as long as your functional code is separate from your command line interpreter, then you can just re-use the source for a GUI without having the two kinds at once to perform an operation. | What do you think of developing for the command line first? 
What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods? eg. W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00" loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database. Eventually you add in a screen for this: ============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================ When you click submit, it calls the same AddTask function. Is this considered: a good way to code just for the newbies horrendous!. Addendum: I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves? Why not just call the same executable in different ways: "todo /G" when you want the full-on graphical interface "todo /I" for an interactive prompt within todo.exe (scripting, etc) plain old "todo " when you just want to do one thing and be done with it. Addendum 2: It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something." Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library. Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience. | TITLE:
What do you think of developing for the command line first?
QUESTION:
What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods? eg. W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00" loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database. Eventually you add in a screen for this: ============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================ When you click submit, it calls the same AddTask function. Is this considered: a good way to code just for the newbies horrendous!. Addendum: I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves? Why not just call the same executable in different ways: "todo /G" when you want the full-on graphical interface "todo /I" for an interactive prompt within todo.exe (scripting, etc) plain old "todo " when you just want to do one thing and be done with it. Addendum 2: It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something." Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library. Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience.
ANSWER:
I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS. Also, with a library you can easily do unit tests for the functionality. But as long as your functional code is separate from your command line interpreter, you can just re-use the source for a GUI without needing both front ends involved to perform an operation. | [
"language-agnostic",
"command-line"
] | 22 | 16 | 1,677 | 21 | 0 | 2008-08-20T22:27:56.120000 | 2008-08-21T01:08:37 |
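The "shared core, two thin front ends" layout the answer recommends can be sketched in a few lines of Python. All names here (add_task, cli_main, gui_submit, the in-memory TASKS list) are hypothetical stand-ins for the todo.exe example; the point is that both front ends call the same function in-process, with no external process spawned.

```python
import argparse

TASKS = []  # stand-in for the database from the question


def add_task(event: str, location: str, date: str, time: str) -> dict:
    """Core logic shared by every front end: validate, then store."""
    if not event:
        raise ValueError("event is required")
    task = {"event": event, "location": location, "date": date, "time": time}
    TASKS.append(task)
    return task


def cli_main(argv: list) -> dict:
    """Command-line front end: parse argv, call the shared function."""
    parser = argparse.ArgumentParser(prog="todo")
    sub = parser.add_subparsers(dest="command", required=True)
    add = sub.add_parser("AddTask")
    for name in ("event", "location", "date", "time"):
        add.add_argument(name)
    args = parser.parse_args(argv)
    return add_task(args.event, args.location, args.date, args.time)


def gui_submit(form: dict) -> dict:
    """A GUI's Submit handler would call the very same function."""
    return add_task(form["event"], form["location"], form["date"], form["time"])


cli_main(["AddTask", "meeting with John", "John's office",
          "2008-08-22", "14:00"])
gui_submit({"event": "peer review", "location": "room 2",
            "date": "2008-08-23", "time": "10:00"})
print(len(TASKS))  # 2
```

This also answers the addendum: nothing forces separate binaries — a single executable can dispatch to the CLI parser or the GUI loop from main() while both share add_task.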
18,985 | 27,343 | How can I beautify JavaScript code using Command Line? | I am writing a batch script in order to beautify JavaScript code. It needs to work on both Windows and Linux. How can I beautify JavaScript code using the command line tools? | First, pick your favorite Javascript based Pretty Print/Beautifier. I prefer the one at http://jsbeautifier.org/, because it's what I found first. Downloads its file https://github.com/beautify-web/js-beautify/blob/master/js/lib/beautify.js Second, download and install The Mozilla group's Java based Javascript engine, Rhino. "Install" is a little bit misleading; Download the zip file, extract everything, place js.jar in your Java classpath (or Library/Java/Extensions on OS X). You can then run scripts with an invocation similar to this java -cp js.jar org.mozilla.javascript.tools.shell.Main name-of-script.js Use the Pretty Print/Beautifier from step 1 to write a small shell script that will read in your javascript file and run it through the Pretty Print/Beautifier from step one. For example //original code (function() {... js_beautify code... }());
//new code print(global.js_beautify(readFile(arguments[0]))); Rhino gives javascript a few extra useful functions that don't necessarily make sense in a browser context, but do in a console context. The function print does what you'd expect, and prints out a string. The function readFile accepts a file path string as an argument and returns the contents of that file. You'd invoke the above something like java -cp js.jar org.mozilla.javascript.tools.shell.Main beautify.js file-to-pp.js You can mix and match Java and Javascript in your Rhino run scripts, so if you know a little Java it shouldn't be too hard to get this running with text-streams as well. | How can I beautify JavaScript code using Command Line? I am writing a batch script in order to beautify JavaScript code. It needs to work on both Windows and Linux. How can I beautify JavaScript code using the command line tools? | TITLE:
How can I beautify JavaScript code using Command Line?
QUESTION:
I am writing a batch script in order to beautify JavaScript code. It needs to work on both Windows and Linux. How can I beautify JavaScript code using the command line tools?
ANSWER:
First, pick your favorite Javascript based Pretty Print/Beautifier. I prefer the one at http://jsbeautifier.org/, because it's what I found first. Download its file https://github.com/beautify-web/js-beautify/blob/master/js/lib/beautify.js Second, download and install The Mozilla group's Java based Javascript engine, Rhino. "Install" is a little bit misleading; download the zip file, extract everything, place js.jar in your Java classpath (or Library/Java/Extensions on OS X). You can then run scripts with an invocation similar to this java -cp js.jar org.mozilla.javascript.tools.shell.Main name-of-script.js Use the Pretty Print/Beautifier from step 1 to write a small shell script that will read in your javascript file and run it through the Pretty Print/Beautifier from step one. For example //original code (function() {... js_beautify code... }());
//new code print(global.js_beautify(readFile(arguments[0]))); Rhino gives javascript a few extra useful functions that don't necessarily make sense in a browser context, but do in a console context. The function print does what you'd expect, and prints out a string. The function readFile accepts a file path string as an argument and returns the contents of that file. You'd invoke the above something like java -cp js.jar org.mozilla.javascript.tools.shell.Main beautify.js file-to-pp.js You can mix and match Java and Javascript in your Rhino run scripts, so if you know a little Java it shouldn't be too hard to get this running with text-streams as well. | [
"javascript",
"command-line-interface",
"pretty-print",
"multiplatform"
] | 136 | 72 | 107,561 | 10 | 0 | 2008-08-20T22:29:22.203000 | 2008-08-26T03:22:40.553000 |
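To make the read-file → transform → print pipeline concrete, here is a toy re-indenter in Python. This is NOT js-beautify (which also handles strings, regexes, and operators correctly); it is only a hypothetical sketch of the kind of transformation the Rhino invocation wires together.

```python
def naive_beautify(source: str, indent: str = "  ") -> str:
    """Toy beautifier: re-indent by brace depth. Illustration only --
    it will mangle braces inside string literals or regexes."""
    # Split so every "{" ends a line and every "}" starts one.
    pieces = source.replace("{", "{\n").replace("}", "\n}\n").splitlines()
    out, depth = [], 0
    for raw in pieces:
        line = raw.strip()
        if not line:
            continue
        if line.startswith("}"):
            depth = max(depth - 1, 0)
        out.append(indent * depth + line)
        if line.endswith("{"):
            depth += 1
    return "\n".join(out)


print(naive_beautify("function f(){return 1;}"))
```

A real cross-platform batch script would delegate to js-beautify itself (via Rhino as above, or its Node/Python ports) rather than anything this naive.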