column                   dtype          min                   max
question_id              int64          4                     6.31M
answer_id                int64          7                     6.31M
title                    stringlengths  9                     150
question_body            stringlengths  0                     28.8k
answer_body              stringlengths  60                    27.2k
question_text            stringlengths  40                    28.9k
combined_text            stringlengths  124                   39.6k
tags                     listlengths    1                     6
question_score           int64          0                     26.3k
answer_score             int64          0                     28.8k
view_count               int64          15                    14M
answer_count             int64          0                     182
favorite_count           int64          0                     32
question_creation_date   stringdate     2008-07-31 21:42:52   2011-06-10 18:12:18
answer_creation_date     stringdate     2008-07-31 22:17:57   2011-06-10 18:14:17
question_id: 53,609   answer_id: 566,486
What does 'foo' really mean?
I hope this qualifies as a programming question, as in any programming tutorial you eventually come across 'foo' in the code examples. (Yeah, right?) What does 'foo' really mean? If it is meant to mean nothing, when did it begin to be used that way?
See: RFC 3092: Etymology of "Foo", D. Eastlake 3rd et al. Quoting only the relevant definitions from that RFC for brevity: Used very generally as a sample name for absolutely anything, esp. programs and files (esp. scratch files). First on the standard list of metasyntactic variables used in syntax examples (bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy, thud). [JARGON]
[ "language-agnostic", "coding-style" ]
question_score: 235   answer_score: 139   view_count: 213,673   answer_count: 9   favorite_count: 0
question_creation_date: 2008-09-10T08:21:02.223000   answer_creation_date: 2009-02-19T18:09:28.287000
question_id: 53,610   answer_id: 53,711
How to properly link a custom CSS file in SharePoint
I've created a custom list, and made some changes to the way the CQWP renders it on a page by modifying ItemStyle.xsl. However, I'd like to use some custom CSS classes, and therefore I'd like to link to my own custom.css file from the head tag of the pages containing this CQWP. So my question is: where do I put my .css file, and how do I link it properly to a page containing the CQWPs? Please bear in mind that I'm making a solution that should be deployed on multiple SharePoint installations. Thanks.
The official Microsoft way is just to copy them into the relevant folders (as seen by downloading their template packs). However, you could also create your own site definition and add the items to the correct libraries and lists in the same way that the master pages are added. If you are going to deploy CSS and master pages through features, remember you will have to activate the publishing infrastructure on the site collection and the publishing feature on the site. To deploy a master page/page layout as a feature you should follow the steps at the site below; you can use the "fileurl" element to specify your CSS and place it into the correct folder (the Style Library, for example): http://www.sharepointnutsandbolts.com/2007/04/deploying-master-pages-and-page-layouts.html
[ "css", "sharepoint" ]
question_score: 2   answer_score: 3   view_count: 8,475   answer_count: 2   favorite_count: 0
question_creation_date: 2008-09-10T08:21:16.980000   answer_creation_date: 2008-09-10T09:44:59.977000
question_id: 53,618   answer_id: 53,799
How to control IIS 5.1 from command line?
I found some information about controlling IIS 5.1 from the command line via adsutil.vbs ( http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d3df4bc9-0954-459a-b5e6-7a8bc462960c.mspx?mfr=true ). The utility is available at c:\InetPub\AdminScripts. The utility throws only errors like the following:

ErrNumber: -2147463164 (0x80005004)
Error Trying To GET the Schema of the property: IIS://localhost/Schema/ROOT

Can you tell me how to check whether a virtual directory exists, and how to create it if it does not?
Hope this helps you. http://www.codeproject.com/KB/system/commandlineweb.aspx
[ "iis", "windows-xp" ]
question_score: 0   answer_score: 0   view_count: 3,007   answer_count: 2   favorite_count: 0
question_creation_date: 2008-09-10T08:26:32.070000   answer_creation_date: 2008-09-10T11:31:32.890000
question_id: 53,621   answer_id: 176,457
Why is the 'Resource Pool' not supported on the current platform?
What could be the possible causes for the following exception?

System.PlatformNotSupportedException: 'ResourcePool' is not supported on the current platform.
   at System.EnterpriseServices.Platform.Assert(Boolean fSuccess, String function)
   at System.EnterpriseServices.Platform.Assert(Version platform, String function)
   at System.EnterpriseServices.ResourcePool..ctor(TransactionEndDelegate cb)
   at System.Data.SqlClient.ConnectionPool..ctor(DefaultPoolControl ctrl)
   at System.Data.SqlClient.PoolManager.FindOrCreatePool(DefaultPoolControl ctrl)
   at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnection.Open()

The platform is Windows 2003 Server SP2. The same code has been tested on Windows XP SP2 without any problems. However, it would be interesting to know what reasons cause this exception regardless of the platform.
I've poked at the sources using Reflector and I can't seem to find any call to Platform.Assert in the static constructor of ResourcePool. Is the Windows 2003 server 64-bit? That may be the problem.
[ ".net", "sql-server", "connection-pooling" ]
question_score: 1   answer_score: 1   view_count: 745   answer_count: 1   favorite_count: 0
question_creation_date: 2008-09-10T08:27:14.843000   answer_creation_date: 2008-10-06T22:27:17.093000
question_id: 53,623   answer_id: 1,067,587
How to get whois information of a domain name in my program?
I want to get whois information for a domain name from my C#/Java programs. Is there a simple way to do this?
I found a perfect C# example on dotnet-snippets.com (which doesn't exist anymore). It's 11 lines of code to copy and paste straight into your own application.

/// <summary>
/// Gets the whois information.
/// </summary>
/// <param name="whoisServer">The whois server.</param>
/// <param name="url">The URL.</param>
/// <returns></returns>
private string GetWhoisInformation(string whoisServer, string url)
{
    StringBuilder stringBuilderResult = new StringBuilder();
    TcpClient tcpClientWhois = new TcpClient(whoisServer, 43);
    NetworkStream networkStreamWhois = tcpClientWhois.GetStream();
    BufferedStream bufferedStreamWhois = new BufferedStream(networkStreamWhois);
    StreamWriter streamWriter = new StreamWriter(bufferedStreamWhois);
    streamWriter.WriteLine(url);
    streamWriter.Flush();
    StreamReader streamReaderReceive = new StreamReader(bufferedStreamWhois);
    while (!streamReaderReceive.EndOfStream)
        stringBuilderResult.AppendLine(streamReaderReceive.ReadLine());
    return stringBuilderResult.ToString();
}
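The whois protocol itself is trivial (send the domain name terminated by CRLF to TCP port 43 and read until the server closes the connection), so the same idea ports to any language. Here is a rough sketch of the identical logic in Python; the default server name is an assumption for illustration, not something from the answer:

```python
import socket


def build_query(domain: str) -> bytes:
    # A whois query is just the domain name terminated by CRLF.
    return domain.encode("ascii") + b"\r\n"


def whois(domain: str, server: str = "whois.iana.org", port: int = 43) -> str:
    """Send a whois query and read the response until the server closes."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(build_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Usage (performs a network call): print(whois("example.com"))
```

As in the C# version, there is no error handling or referral-following here; real whois lookups often require querying the registry server named in the first response.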
[ "c#", "java", "whois", "domain-name" ]
question_score: 3   answer_score: 10   view_count: 15,216   answer_count: 7   favorite_count: 0
question_creation_date: 2008-09-10T08:27:45.920000   answer_creation_date: 2009-07-01T06:37:04.757000
question_id: 53,629   answer_id: 53,644
History of changes to a particular line of code in Subversion
Is it possible to see the history of changes to a particular line of code in a Subversion repository? I'd like, for instance, to be able to see when a particular statement was added or when that statement was changed, even if its line number is not the same any more.
I don't know a method for tracking statements through time in Subversion. It is simple, however, to see when any particular line in a file was last changed, using svn blame. Check the SVNBook's svn blame reference:

Synopsis
    svn blame TARGET[@REV]...

Description
    Show author and revision information in-line for the specified files or URLs. Each line of text is annotated at the beginning with the author (username) and the revision number for the last change to that line.
[ "svn" ]
question_score: 80   answer_score: 59   view_count: 45,446   answer_count: 11   favorite_count: 0
question_creation_date: 2008-09-10T08:32:20.870000   answer_creation_date: 2008-09-10T08:40:18.683000
question_id: 53,652   answer_id: 53,656
DataTable to readable text string
This might be a bit on the silly side of things, but I need to send the contents of a DataTable (unknown columns, unknown contents) via a text e-mail. The basic idea is to loop over rows and columns and output all cell contents into a StringBuilder using .ToString(). Formatting is a big issue, though. Any tips/ideas on how to make this look "readable" in a text format? I'm thinking of "padding" each cell with empty spaces, but I also need to split some cells into multiple lines, and this makes the StringBuilder approach a bit messy (because the second line of text from the first column comes after the first line of text in the last column, etc.).
Would converting the DataTable to an HTML table and sending HTML mail be an alternative? That would make it much nicer on the receiving end, if their client supports it.
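If it has to stay plain text, the padding-plus-wrapping approach from the question is workable; the bookkeeping gets much easier if each row is first expanded into a rectangular block of wrapped lines before anything is emitted, so that the "second line of the first column" lands under the first line rather than after the last column. A sketch of that idea (in Python rather than C#, purely to illustrate the layout logic; the names and widths are made up):

```python
import textwrap


def render_table(headers, rows, width=20):
    """Render rows as padded, fixed-width text, wrapping long cells
    onto extra lines so columns stay aligned."""

    def wrap_row(cells):
        # Wrap every cell, then pad the shorter cells with empty lines
        # so the whole row becomes a rectangular block of lines.
        wrapped = [textwrap.wrap(str(c), width) or [""] for c in cells]
        height = max(len(w) for w in wrapped)
        for w in wrapped:
            w.extend([""] * (height - len(w)))
        return [" | ".join(col[i].ljust(width) for col in wrapped)
                for i in range(height)]

    lines = wrap_row(headers)
    lines.append("-+-".join("-" * width for _ in headers))
    for row in rows:
        lines.extend(wrap_row(row))
    return "\n".join(lines)


print(render_table(["Name", "Comment"],
                   [["Ann", "A fairly long comment that wraps"]],
                   width=12))
```

The same per-row "wrap, then pad to a rectangle, then emit line by line" loop translates directly to a StringBuilder in C#.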
[ "c#", "formatting", "datatable" ]
question_score: 2   answer_score: 1   view_count: 1,633   answer_count: 7   favorite_count: 0
question_creation_date: 2008-09-10T08:56:27.130000   answer_creation_date: 2008-09-10T08:58:10.637000
question_id: 53,664   answer_id: 53,668
How to effectively work with multiple files in Vim
I've started using Vim to develop Perl scripts and am starting to find it very powerful. One thing I like is being able to open multiple files at once with:

vi main.pl maintenance.pl

and then hop between them with :n and :prev, and see which files are open with :args. To add a file, I can say :n test.pl, which I expect would then be added to my list of files; but instead it wipes out my current file list, and when I type :args I only have test.pl open. So how can I add and remove files in my args list?
Why not use tabs (introduced in Vim 7)? You can switch between tabs with :tabn and :tabp. With :tabe you can add a new tab, and with a regular :q or :wq you close a tab. If you map :tabn and :tabp to your F7/F8 keys you can easily switch between files. If there are not that many files, or you don't have Vim 7, you can also split your screen into multiple files with :sp. Then you can switch between split screens with Ctrl + W and then an arrow key in the direction you want to move (or, instead of arrow keys, w for the next and W for the previous split screen).
[ "vim", "vi" ]
question_score: 1,234   answer_score: 1,295   view_count: 698,570   answer_count: 29   favorite_count: 0
question_creation_date: 2008-09-10T09:03:49.733000   answer_creation_date: 2008-09-10T09:09:32.840000
question_id: 53,676   answer_id: 53,684
How to resolve ORA-011033: ORACLE initialization or shutdown in progress
When trying to connect to an ORACLE user via TOAD (Quest Software) or any other means (Oracle Enterprise Manager), I get this error: ORA-011033: ORACLE initialization or shutdown in progress
After some googling, I found the advice to do the following, and it worked:

SQL> startup mount
ORACLE Instance started
SQL> recover database
Media recovery complete
SQL> alter database open;
Database altered
[ "database", "oracle" ]
question_score: 52   answer_score: 103   view_count: 452,463   answer_count: 9   favorite_count: 0
question_creation_date: 2008-09-10T09:14:47.617000   answer_creation_date: 2008-09-10T09:22:14.043000
question_id: 53,693   answer_id: 54,009
Amazon SimpleDB
Has anyone considered using something along the lines of the Amazon SimpleDB data store as their backend database? SQL Server hosting (at least in the UK) is expensive, so could something like this, along with cloud file storage (S3), be used for building apps that could grow with your application? Great in theory, but would anyone consider using it? In fact, is anyone actually using it now for real production software? I would love to read your comments.
This is a good analysis of Amazon services from Dare. S3 handles what I've typically heard described as "blob storage". A typical Web application has media files and other resources (images, CSS stylesheets, scripts, video files, etc.) that are simply accessed by name/path. However, a lot of these resources also have metadata (e.g. a video file on YouTube has metadata about its rating, who uploaded it, number of views, etc.) which needs to be stored as well. This need for queryable, schematized storage is where SimpleDB comes in. EC2 provides a virtual server that can be used for computation, complete with a local file system instance which isn't persistent if the virtual server goes down for any reason. With SimpleDB and S3 you have the building blocks to build a large class of "Web 2.0"-style applications when you throw in the computational capabilities provided by EC2. However, neither S3 nor SimpleDB provides a solution for a developer who simply wants the typical LAMP or WISC developer experience of building a database-driven Web application, or for applications that may have custom storage needs that don't fit neatly into the buckets of blob storage or schematized storage. Without access to a persistent filesystem, developers on Amazon's cloud computing platform have had to come up with sophisticated solutions involving backing data up manually from EC2 to S3 to get the desired experience.
[ "amazon-s3", "amazon-web-services", "amazon-simpledb" ]
question_score: 13   answer_score: 14   view_count: 7,066   answer_count: 6   favorite_count: 0
question_creation_date: 2008-09-10T09:26:58.680000   answer_creation_date: 2008-09-10T13:15:03.310000
question_id: 53,705   answer_id: 55,038
Creating MP4/M4A files with Chapter marks
I am trying to join together several audio files into one mp4/m4a file containing chapter metadata. I am currently using QTKit to do this but unfortunately when QTKit exports to m4a format the metadata is all stripped out (this has been confirmed as a bug by Apple) see sample code. I think this rules QTKit out for this job, but would be happy to be proven wrong as it is a really neat API for it if it worked. So, I am looking for a way to concatenate audio files (input format does not really matter as I can do conversion) into an m4a file with chapters metadata. As an alternative to code, I am open to the idea of using an existing command line tool to accomplish this as long as it is redistributable as part of another application. Any ideas?
Audiobook Maker does something like this, and I believe it uses ffmpeg under the hood. It's open source, so maybe it's worth a look?
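If a redistributable command-line tool is acceptable, one route worth evaluating is ffmpeg itself: concatenate the audio first, then attach chapters from a metadata file. The sketch below uses ffmpeg's FFMETADATA chapter format; the titles and timestamps are illustrative, and whether chapters are written into MP4/M4A output depends on the ffmpeg build and version:

```ini
;FFMETADATA1
title=My Audiobook

[CHAPTER]
TIMEBASE=1/1000
START=0
END=180000
title=Part One

[CHAPTER]
TIMEBASE=1/1000
START=180000
END=360000
title=Part Two
```

This would be applied with something along the lines of ffmpeg -i joined.m4a -i chapters.txt -map_metadata 1 -codec copy out.m4a (file names hypothetical).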
[ "objective-c", "cocoa", "macos", "audio", "quicktime" ]
question_score: 7   answer_score: 3   view_count: 8,956   answer_count: 4   favorite_count: 0
question_creation_date: 2008-09-10T09:39:22.160000   answer_creation_date: 2008-09-10T18:58:58.913000
question_id: 53,715   answer_id: 53,721
Does Delphi call inherited on overridden procedures if there is no explicit call
Does Delphi call inherited on overridden procedures if there is no explicit call in the code, i.e. inherited;? I have the following structure (from super to sub class): TForm >> TBaseForm >> TAnyOtherForm. All the forms in the project will be derived from TBaseForm, as this will have all the standard setup and teardown parts that are used for every form (security, validation etc.). TBaseForm has OnCreate and OnDestroy procedures with the code to do this, but if someone (i.e. me) forgot to add inherited to the OnCreate on TAnyOtherForm, would Delphi call it for me? I have found references on the web that say it is not required, but nowhere says whether it gets called if it is omitted from the code. Also, if it does call inherited for me, when will it call it?
No, if you leave out the call to inherited, it will not be called. Otherwise it would not be possible to override a method and totally omit the parent version of it.
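The same semantics hold in most OO languages: an override replaces the parent method entirely unless it chains up explicitly. A small illustration in Python (standing in for Delphi, with hypothetical form classes), where the forgetful subclass silently skips the base class's setup:

```python
class BaseForm:
    def __init__(self):
        self.security_initialized = False

    def on_create(self):
        # Standard setup that every form is supposed to run.
        self.security_initialized = True


class GoodForm(BaseForm):
    def on_create(self):
        super().on_create()  # like Delphi's `inherited;` -- must be explicit
        self.extra = True


class ForgetfulForm(BaseForm):
    def on_create(self):
        self.extra = True    # no super() call: the base setup never runs


good, bad = GoodForm(), ForgetfulForm()
good.on_create()
bad.on_create()
print(good.security_initialized, bad.security_initialized)  # True False
```

This is also why the call site matters: the parent version runs exactly where you place the inherited call, before or after your own code, not at some implicit point chosen by the compiler.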
[ "delphi", "oop", "inheritance" ]
8
18
4,917
7
0
2008-09-10T09:48:34.650000
2008-09-10T09:53:15.123000
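The Delphi behaviour in the answer above, where the parent method runs only when chained explicitly, has a direct analogue in Python's super(), which is likewise never called for you. A minimal sketch (the class and method names here are made up for illustration, not from the thread):

```python
class BaseForm:
    def __init__(self):
        self.log = []

    def on_create(self):
        self.log.append("base setup")


class GoodForm(BaseForm):
    def on_create(self):
        super().on_create()          # explicit chain to the parent version
        self.log.append("good setup")


class ForgetfulForm(BaseForm):
    def on_create(self):
        self.log.append("forgot")    # no super() call: base setup is skipped


good = GoodForm()
good.on_create()
bad = ForgetfulForm()
bad.on_create()
print(good.log)   # ['base setup', 'good setup']
print(bad.log)    # ['forgot']
```

As in Delphi, omitting the explicit chain silently skips the parent's setup code, which is exactly the risk the question worries about.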
53,719
53,723
Hiding data points in Excel line charts
It is obviously possible to hide individual data points in an Excel line chart. Select a data point. Right click -> Format Data Point... Select Patterns Tab Set Line to None How do you accomplish the same thing in VBA? Intuition tells me there should be a property on the Point object, Chart.SeriesCollection().Points(), which deals with this...
"Describe it to the teddy bear" works almost every time... You have to go to the Border child object of the Point object and set its LineStyle to xlNone.
Hiding data points in Excel line charts It is obviously possible to hide individual data points in an Excel line chart. Select a data point. Right click -> Format Data Point... Select Patterns Tab Set Line to None How do you accomplish the same thing in VBA? Intuition tells me there should be a property on the Point object, Chart.SeriesCollection().Points(), which deals with this...
TITLE: Hiding data points in Excel line charts QUESTION: It is obviously possible to hide individual data points in an Excel line chart. Select a data point. Right click -> Format Data Point... Select Patterns Tab Set Line to None How do you accomplish the same thing in VBA? Intuition tells me there should be a property on the Point object, Chart.SeriesCollection().Points(), which deals with this... ANSWER: "Describe it to the teddy bear" works almost every time... You have to go to the Border child object of the Point object and set its LineStyle to xlNone.
[ "excel", "charts", "vba" ]
6
6
103,175
7
0
2008-09-10T09:50:49.620000
2008-09-10T09:56:59.217000
53,728
70,222
Will HTML Encoding prevent all kinds of XSS attacks?
I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks. Is there some way to do an XSS attack even if HTML Encode is used?
No. Putting aside the subject of allowing some tags (not really the point of the question), HtmlEncode simply does NOT cover all XSS attacks. For instance, consider server-generated client-side javascript - the server dynamically outputs htmlencoded values directly into the client-side javascript; htmlencode will not stop injected script from executing. Next, consider the following pseudocode: <input value=<%=HtmlEncode(somevar)%> id=textbox> Now, in case it's not immediately obvious, if somevar (sent by the user, of course) is set for example to a onclick=alert(document.cookie) the resulting output is <input value=a onclick=alert(document.cookie) id=textbox> which would clearly work. Obviously, this can be (almost) any other script... and HtmlEncode would not help much. There are a few additional vectors to be considered... including the third flavor of XSS, called DOM-based XSS (wherein the malicious script is generated dynamically on the client, e.g. based on # values). Also don't forget about UTF-7 type attacks - where the attack looks like +ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4- Nothing much to encode there... The solution, of course (in addition to proper and restrictive white-list input validation), is to perform context-sensitive encoding: HtmlEncoding is great IF your output context IS HTML, or maybe you need JavaScriptEncoding, or VBScriptEncoding, or AttributeValueEncoding, or... etc. If you're using MS ASP.NET, you can use their Anti-XSS Library, which provides all of the necessary context-encoding methods. Note that all encoding should not be restricted to user input, but should also cover stored values from the database, text files, etc. Oh, and don't forget to explicitly set the charset, both in the HTTP header AND the META tag, otherwise you'll still have UTF-7 vulnerabilities... For some more information, and a pretty definitive list (constantly updated), check out RSnake's Cheat Sheet: http://ha.ckers.org/xss.html
Will HTML Encoding prevent all kinds of XSS attacks? I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks. Is there some way to do an XSS attack even if HTML Encode is used?
TITLE: Will HTML Encoding prevent all kinds of XSS attacks? QUESTION: I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks. Is there some way to do an XSS attack even if HTML Encode is used? ANSWER: No. Putting aside the subject of allowing some tags (not really the point of the question), HtmlEncode simply does NOT cover all XSS attacks. For instance, consider server-generated client-side javascript - the server dynamically outputs htmlencoded values directly into the client-side javascript; htmlencode will not stop injected script from executing. Next, consider the following pseudocode: <input value=<%=HtmlEncode(somevar)%> id=textbox> Now, in case it's not immediately obvious, if somevar (sent by the user, of course) is set for example to a onclick=alert(document.cookie) the resulting output is <input value=a onclick=alert(document.cookie) id=textbox> which would clearly work. Obviously, this can be (almost) any other script... and HtmlEncode would not help much. There are a few additional vectors to be considered... including the third flavor of XSS, called DOM-based XSS (wherein the malicious script is generated dynamically on the client, e.g. based on # values). Also don't forget about UTF-7 type attacks - where the attack looks like +ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4- Nothing much to encode there... The solution, of course (in addition to proper and restrictive white-list input validation), is to perform context-sensitive encoding: HtmlEncoding is great IF your output context IS HTML, or maybe you need JavaScriptEncoding, or VBScriptEncoding, or AttributeValueEncoding, or... etc. If you're using MS ASP.NET, you can use their Anti-XSS Library, which provides all of the necessary context-encoding methods. Note that all encoding should not be restricted to user input, but should also cover stored values from the database, text files, etc.
Oh, and don't forget to explicitly set the charset, both in the HTTP header AND the META tag, otherwise you'll still have UTF-7 vulnerabilities... For some more information, and a pretty definitive list (constantly updated), check out RSnake's Cheat Sheet: http://ha.ckers.org/xss.html
[ "security", "xss", "html-encode" ]
70
96
67,580
9
0
2008-09-10T10:03:08.547000
2008-09-16T07:57:39.170000
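The unquoted-attribute hole from the answer above is easy to reproduce with Python's standard html module, used here purely as a stand-in for HtmlEncode; the payload and template strings are illustrative:

```python
import html

# A payload aimed at an unquoted attribute context: it contains no <, >, &,
# or quote characters, so html.escape() has nothing to encode.
payload = "a onclick=alert(document.cookie)"
encoded = html.escape(payload)
assert encoded == payload  # passes through completely untouched

# A server-side template that forgets to quote the attribute value:
page = "<input value={} id=textbox>".format(encoded)
print(page)
# <input value=a onclick=alert(document.cookie) id=textbox>
```

The browser parses the space in the payload as the start of a new attribute, so the injected onclick handler lands on the element despite the encoding. Quoting the attribute (value="{}") or using an attribute-value encoder closes this particular hole.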
53,734
153,680
Best use of indices on temporary tables in T-SQL
If you're creating a temporary table within a stored procedure and want to add an index or two on it, to improve the performance of any additional statements made against it, what is the best approach? Sybase says this: "the table must contain data when the index is created. If you create the temporary table and create the index on an empty table, Adaptive Server does not create column statistics such as histograms and densities. If you insert data rows after creating the index, the optimizer has incomplete statistics." but recently a colleague mentioned that if I create the temp table and indices in a different stored procedure to the one which actually uses the temporary table, then Adaptive Server optimiser will be able to make use of them. On the whole, I'm not a big fan of wrapper procedures that add little value, so I've not actually got around to testing this, but I thought I'd put the question out there, to see if anyone had any other approaches or advice?
A few thoughts: If your temporary table is so big that you have to index it, then is there a better way to solve the problem? You can force it to use the index (if you are sure that the index is the correct way to access the table) by giving an optimiser hint, of the form: SELECT * FROM #table (index idIndex) WHERE id = @id If you are interested in performance tips in general, I've answered a couple of other questions about that at some length here: Favourite performance tuning tricks How do you optimize tables for specific queries?
Best use of indices on temporary tables in T-SQL If you're creating a temporary table within a stored procedure and want to add an index or two on it, to improve the performance of any additional statements made against it, what is the best approach? Sybase says this: "the table must contain data when the index is created. If you create the temporary table and create the index on an empty table, Adaptive Server does not create column statistics such as histograms and densities. If you insert data rows after creating the index, the optimizer has incomplete statistics." but recently a colleague mentioned that if I create the temp table and indices in a different stored procedure to the one which actually uses the temporary table, then Adaptive Server optimiser will be able to make use of them. On the whole, I'm not a big fan of wrapper procedures that add little value, so I've not actually got around to testing this, but I thought I'd put the question out there, to see if anyone had any other approaches or advice?
TITLE: Best use of indices on temporary tables in T-SQL QUESTION: If you're creating a temporary table within a stored procedure and want to add an index or two on it, to improve the performance of any additional statements made against it, what is the best approach? Sybase says this: "the table must contain data when the index is created. If you create the temporary table and create the index on an empty table, Adaptive Server does not create column statistics such as histograms and densities. If you insert data rows after creating the index, the optimizer has incomplete statistics." but recently a colleague mentioned that if I create the temp table and indices in a different stored procedure to the one which actually uses the temporary table, then Adaptive Server optimiser will be able to make use of them. On the whole, I'm not a big fan of wrapper procedures that add little value, so I've not actually got around to testing this, but I thought I'd put the question out there, to see if anyone had any other approaches or advice? ANSWER: A few thoughts: If your temporary table is so big that you have to index it, then is there a better way to solve the problem? You can force it to use the index (if you are sure that the index is the correct way to access the table) by giving an optimiser hint, of the form: SELECT * FROM #table (index idIndex) WHERE id = @id If you are interested in performance tips in general, I've answered a couple of other questions about that at some length here: Favourite performance tuning tricks How do you optimize tables for specific queries?
[ "t-sql", "indexing", "sap-ase", "temp-tables" ]
13
7
22,920
3
0
2008-09-10T10:08:05.493000
2008-09-30T15:53:12.073000
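The populate-then-index ordering from the Sybase quote above can be sketched with SQLite's in-memory engine, standing in for ASE here; the syntax and statistics machinery differ, so treat this as illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create the temp table and load it BEFORE indexing, so the optimizer has
# real data to work with when statistics are gathered.
cur.execute("CREATE TEMP TABLE t (id INTEGER, payload TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(1000)])
cur.execute("CREATE INDEX idx_t_id ON t (id)")
cur.execute("ANALYZE")  # SQLite's rough analogue of "update statistics"

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 42").fetchall()
print(plan)  # the plan detail should mention idx_t_id
```

The EXPLAIN QUERY PLAN output confirms the index is actually chosen for the lookup, which is the cheap way to verify the temp-table index is paying its way before reaching for wrapper-procedure tricks.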
53,757
53,759
Which compiles to faster code: "n * 3" or "n+(n*2)"?
Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assuming that n is either an int or a long, and it is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved, that is, which of these would be faster? long a; long *pn; long ans; ... *pn = some_number; ans = *pn * 3; Or ans = *pn+(*pn*2); Or is it something one need not worry about, as optimizing compilers are likely to account for this in any case?
IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability first.
Which compiles to faster code: "n * 3" or "n+(n*2)"? Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assuming that n is either an int or a long, and it is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved, that is, which of these would be faster? long a; long *pn; long ans; ... *pn = some_number; ans = *pn * 3; Or ans = *pn+(*pn*2); Or is it something one need not worry about, as optimizing compilers are likely to account for this in any case?
TITLE: Which compiles to faster code: "n * 3" or "n+(n*2)"? QUESTION: Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assuming that n is either an int or a long, and it is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved, that is, which of these would be faster? long a; long *pn; long ans; ... *pn = some_number; ans = *pn * 3; Or ans = *pn+(*pn*2); Or is it something one need not worry about, as optimizing compilers are likely to account for this in any case? ANSWER: IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability first.
[ "c++", "c", "optimization", "compiler-construction", "performance" ]
7
56
1,885
11
0
2008-09-10T10:44:33.453000
2008-09-10T10:48:08.657000
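The answer's point rests on the two expressions being algebraically identical, so the only open question is code generation, which the compiler owns. A throwaway check of the identity, including the pointer-style indirection from the question:

```python
# n*3 and n+(n*2) compute the same value for every integer, so any speed
# difference is purely a compiler code-generation concern.
for n in range(-1000, 1000):
    assert n * 3 == n + (n * 2)

# The identity survives an indirection too (the *pn case in the question):
box = [7]  # stand-in for a pointer dereference
assert box[0] * 3 == box[0] + (box[0] * 2)
```

Since the results are provably equal, strength-reduction tricks like this belong to the optimizer, and the source should use whichever form reads best.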
53,766
53,774
Best Way of Automating Daily Build
OK, so we all know the daily build is the heartbeat of a project, but what's the single best way of automating it? We have perl scripts wrapping our pipeline, which includes ClearCase, VS2005 (C++), Intel FORTRAN, Inno Setup. We use cron jobs on UNIX to schedule the build, and host a simple Apache web server to view and monitor the build. All in all it's rather complex, so I would like to know what's the best off-the-shelf solution that people use? And yes, I did say FORTRAN; no escaping it sometimes. It works, and there's no point doing a huge re-implementation project for some tried and tested FEA code that just works.
We're in the process of implementing CC.Net. So far it seems like it would fit your model pretty well. Out of the box it offers automated building, results tracking and notification. I'm not sure how detailed the build-in-progress monitoring is though.
Best Way of Automating Daily Build OK, so we all know the daily build is the heartbeat of a project, but what's the single best way of automating it? We have perl scripts wrapping our pipeline, which includes ClearCase, VS2005 (C++), Intel FORTRAN, Inno Setup. We use cron jobs on UNIX to schedule the build, and host a simple Apache web server to view and monitor the build. All in all it's rather complex, so I would like to know what's the best off-the-shelf solution that people use? And yes, I did say FORTRAN; no escaping it sometimes. It works, and there's no point doing a huge re-implementation project for some tried and tested FEA code that just works.
TITLE: Best Way of Automating Daily Build QUESTION: OK, so we all know the daily build is the heartbeat of a project, but what's the single best way of automating it? We have perl scripts wrapping our pipeline, which includes ClearCase, VS2005 (C++), Intel FORTRAN, Inno Setup. We use cron jobs on UNIX to schedule the build, and host a simple Apache web server to view and monitor the build. All in all it's rather complex, so I would like to know what's the best off-the-shelf solution that people use? And yes, I did say FORTRAN; no escaping it sometimes. It works, and there's no point doing a huge re-implementation project for some tried and tested FEA code that just works. ANSWER: We're in the process of implementing CC.Net. So far it seems like it would fit your model pretty well. Out of the box it offers automated building, results tracking and notification. I'm not sure how detailed the build-in-progress monitoring is though.
[ "continuous-integration", "build" ]
7
3
1,143
9
0
2008-09-10T10:51:57.197000
2008-09-10T10:56:09.683000
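For comparison with the perl-wrapped pipeline described above, the skeleton of such a build driver is small in any scripting language. A hedged Python sketch; the step names and commands are placeholders, not the actual ClearCase/VS2005/FORTRAN toolchain:

```python
import datetime
import subprocess
import sys


def run_steps(steps):
    """Run each named build step in order, stopping at the first failure."""
    report = []
    for name, cmd in steps:
        started = datetime.datetime.now().isoformat()
        result = subprocess.run(cmd, capture_output=True, text=True)
        report.append({"step": name, "ok": result.returncode == 0,
                       "started": started})
        if result.returncode != 0:
            break  # fail fast, as a nightly build usually should
    return report


# Placeholder commands; a real pipeline would invoke checkout, compilers,
# test suites, and the installer builder here.
steps = [
    ("checkout", [sys.executable, "-c", "print('checked out')"]),
    ("compile",  [sys.executable, "-c", "print('compiled')"]),
]
report = run_steps(steps)
print(report)
```

The report list is what a CI front end (CC.Net, a cron-driven web page, etc.) would render; the value of the off-the-shelf tools is everything around this loop: scheduling, history, notification.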
53,786
56,548
How do I generate a histogram for a given probability distribution (for functional testing a server)?
I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation) I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly) I use this distribution to generate the number of requests that should be sent each day between a given start and end date I've hacked together an algorithm in Python that sort of works but it feels kludgy:

how_many_days = (end_date - start_date).days
freqs = defaultdict(int)
for x in xrange(how_many_responses):
    freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1
timeline = []
day = start_date
for i, freq in sorted(freqs.iteritems()):
    timeline.append((day, freq))
    day += timedelta(days=1)
return timeline

What better ways are there to do this?
This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Gaussian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy.

import math
from datetime import datetime, timedelta, date
from random import gauss

how_many_responses = 1000
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)

num_days = (end_date - start_date).days + 1
timeline = [start_date + timedelta(i) for i in xrange(num_days)]

def weibull(x, k, l):
    return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)

dev = 0.1
samples = [i * 1.25/(num_days-1) for i in range(num_days)]
probs = [weibull(i, 2, 0.5) for i in samples]
noise = [gauss(0, dev) for i in samples]
simdata = [max(0., e + n) for (e, n) in zip(probs, noise)]
events = [int(p * (how_many_responses / sum(probs))) for p in simdata]

histogram = zip(timeline, events)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram)
How do I generate a histogram for a given probability distribution (for functional testing a server)? I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation) I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly) I use this distribution to generate the number of requests that should be sent each day between a given start and end date I've hacked together an algorithm in Python that sort of works but it feels kludgy: how_many_days = (end_date - start_date).days freqs = defaultdict(int) for x in xrange(how_many_responses): freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1 timeline = [] day = start_date for i,freq in sorted(freqs.iteritems()): timeline.append((day, freq)) day += timedelta(days=1) return timeline What better ways are there to do this?
TITLE: How do I generate a histogram for a given probability distribution (for functional testing a server)? QUESTION: I am trying to automate functional testing of a server using a realistic frequency distribution of requests. (sort of load testing, sort of simulation) I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly) I use this distribution to generate the number of requests that should be sent each day between a given start and end date I've hacked together an algorithm in Python that sort of works but it feels kludgy: how_many_days = (end_date - start_date).days freqs = defaultdict(int) for x in xrange(how_many_responses): freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1 timeline = [] day = start_date for i,freq in sorted(freqs.iteritems()): timeline.append((day, freq)) day += timedelta(days=1) return timeline What better ways are there to do this? ANSWER: This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Gaussian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy. 
import math
from datetime import datetime, timedelta, date
from random import gauss

how_many_responses = 1000
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)

num_days = (end_date - start_date).days + 1
timeline = [start_date + timedelta(i) for i in xrange(num_days)]

def weibull(x, k, l):
    return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)

dev = 0.1
samples = [i * 1.25/(num_days-1) for i in range(num_days)]
probs = [weibull(i, 2, 0.5) for i in samples]
noise = [gauss(0, dev) for i in samples]
simdata = [max(0., e + n) for (e, n) in zip(probs, noise)]
events = [int(p * (how_many_responses / sum(probs))) for p in simdata]

histogram = zip(timeline, events)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d,c in histogram)
[ "python", "statistics", "simulation", "stress-testing" ]
4
1
1,862
6
0
2008-09-10T11:16:30.040000
2008-09-11T13:40:31.730000
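The questioner's bucketing loop above can also be tidied with collections.Counter, keeping the same weibullvariate(0.5, 2) sampling idea; the min() clamp handles the occasional draw that lands past the end date. The function shape and names here are mine, not from the thread:

```python
from collections import Counter
from datetime import date, timedelta
from random import weibullvariate

def sample_timeline(start, end, how_many, scale=0.5, shape=2.0):
    """Bucket Weibull-distributed samples into per-day request counts."""
    days = (end - start).days
    counts = Counter(
        min(int(days * weibullvariate(scale, shape)), days)  # clamp overflow
        for _ in range(how_many))
    return [(start + timedelta(days=d), counts.get(d, 0))
            for d in range(days + 1)]

timeline = sample_timeline(date(2008, 5, 1), date(2008, 6, 1), 1000)
print(timeline[0])
```

Unlike the original loop, every one of the how_many samples is guaranteed to land on some day in the range, and days with zero requests still appear in the timeline.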
53,796
53,870
Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)?
A GUI driven application needs to host some prebuilt WinForms based components. These components provide high performance interactive views using a mixture of GDI+ and DirectX. The views handle control input and display custom graphical renderings. The components are tested in a WinForms harness by the supplier. Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or do you have experience of technical glitches, e.g. input lags or update issues, that would make you cautious?
We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though: The first is the air-space restrictions. Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be "trimmed" if they hit up against a WinForms region in your app. The second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view. The last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment.
Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)? A GUI driven application needs to host some prebuilt WinForms based components. These components provide high performance interactive views using a mixture of GDI+ and DirectX. The views handle control input and display custom graphical renderings. The components are tested in a WinForms harness by the supplier. Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or do you have experience of technical glitches, e.g. input lags or update issues, that would make you cautious?
TITLE: Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)? QUESTION: A GUI driven application needs to host some prebuilt WinForms based components. These components provide high performance interactive views using a mixture of GDI+ and DirectX. The views handle control input and display custom graphical renderings. The components are tested in a WinForms harness by the supplier. Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or do you have experience of technical glitches, e.g. input lags or update issues, that would make you cautious? ANSWER: We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though: The first is the air-space restrictions. Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be "trimmed" if they hit up against a WinForms region in your app. The second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view. The last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment.
[ ".net", "wpf", "winforms" ]
10
21
10,613
6
0
2008-09-10T11:29:13.020000
2008-09-10T12:25:02.293000
53,802
54,121
What is the best tool to benchmark my JavaScript?
I'm currently working on a JavaScript tool that, during the course of its execution, will ultimately traverse each node in the DOM. Because this has potential to be a very expensive task, I'd like to benchmark the performance of this script. What's the best, free tool for benchmarking a script such as this across the major browsers? Ideally, I'd like the tool (or set of tools, even): **To generate some form of report based on the results of the test.** It can be as simple as a table showing execution times, or as complex as generating some form of a chart. Either way is fine. **To be free.** It's not that I don't believe in paying for software, it's just that I don't have a major need for a tool like this in my typical day-to-day tasks. If possible, I'd also like the tool to generate varying levels of complex pages so that I can stress test a set of DOMs. This isn't a necessity - if I need to do so, I can write one myself; however, I figured I'd poll the community first to see if something already exists.
Firebug does include JS profiling, and it is probably the best out there. While I've had problems with Firebug's debugger, its profiler is currently top-of-the-line. Venkman is also an older JS debugger/profiler for Firefox, just in case you run into Firebug issues. Using these tools should get you just about all the profiling you need across all browsers even though you'll only be monitoring Firefox. If you truly need to get down to dirty details of IE profiling and the like, there are a number of tools online that inject profiling calls into your javascript to help monitor all profiler-lacking browsers....but even to a JS performance nazi like me, this seems unnecessary. Note: A new, very promising IE8 JS profiler has recently been announced: http://blogs.msdn.com/ie/archive/2008/09/11/introducing-the-ie8-developer-tools-jscript-profiler.aspx.
What is the best tool to benchmark my JavaScript? I'm currently working on a JavaScript tool that, during the course of its execution, will ultimately traverse each node in the DOM. Because this has potential to be a very expensive task, I'd like to benchmark the performance of this script. What's the best, free tool for benchmarking a script such as this across the major browsers? Ideally, I'd like the tool (or set of tools, even): **To generate some form of report based on the results of the test.** It can be as simple as a table showing execution times, or as complex as generating some form of a chart. Either way is fine. **To be free.** It's not that I don't believe in paying for software, it's just that I don't have a major need for a tool like this in my typical day-to-day tasks. If possible, I'd also like the tool to generate varying levels of complex pages so that I can stress test a set of DOMs. This isn't a necessity - if I need to do so, I can write one myself; however, I figured I'd poll the community first to see if something already exists.
TITLE: What is the best tool to benchmark my JavaScript? QUESTION: I'm currently working on a JavaScript tool that, during the course of its execution, will ultimately traverse each node in the DOM. Because this has potential to be a very expensive task, I'd like to benchmark the performance of this script. What's the best, free tool for benchmarking a script such as this across the major browsers? Ideally, I'd like the tool (or set of tools, even): **To generate some form of report based on the results of the test.** It can be as simple as a table showing execution times, or as complex as generating some form of a chart. Either way is fine. **To be free.** It's not that I don't believe in paying for software, it's just that I don't have a major need for a tool like this in my typical day-to-day tasks. If possible, I'd also like the tool to generate varying levels of complex pages so that I can stress test a set of DOMs. This isn't a necessity - if I need to do so, I can write one myself; however, I figured I'd poll the community first to see if something already exists. ANSWER: Firebug does include JS profiling, and it is probably the best out there. While I've had problems with Firebug's debugger, its profiler is currently top-of-the-line. Venkman is also an older JS debugger/profiler for Firefox, just in case you run into Firebug issues. Using these tools should get you just about all the profiling you need across all browsers even though you'll only be monitoring Firefox. If you truly need to get down to dirty details of IE profiling and the like, there are a number of tools online that inject profiling calls into your javascript to help monitor all profiler-lacking browsers....but even to a JS performance nazi like me, this seems unnecessary. Note: A new, very promising IE8 JS profiler has recently been announced: http://blogs.msdn.com/ie/archive/2008/09/11/introducing-the-ie8-developer-tools-jscript-profiler.aspx.
[ "javascript", "dom" ]
3
6
2,125
5
0
2008-09-10T11:33:53.727000
2008-09-10T14:07:08.477000
53,803
53,846
Can I turn off automatic merging in Subversion?
We're looking at moving from a check-out/edit/check-in style of version control system to Subversion, and during the evaluation we discovered that when you perform an Update action in TortoiseSVN (and presumably in any Subversion client?), if changes in the repository that need to be applied to files that you've been editing don't cause any conflicts then they'll be automatically/silently merged. This scares us a little, as it's possible that this merge, while not producing any compile errors, could at least introduce some logic errors that may not be easily detected. Very simple example: I'm working within a C# method changing some logic in the latter-part of the method, and somebody else changes the value that a variable gets initialised to at the start of the method. The other person's change isn't in the lines of code that I'm working on so there won't be a conflict; but it's possible to dramatically change the output of the method. What we were hoping the situation would be is that if a merge needs to occur, then the two files would be shown and at least a simple accept/reject change option be presented, so that at least we're aware that something has changed and are given the option to see if it impacts our code. Is there a way to do this with Subversion/TortoiseSVN? Or are we stuck in our present working ways too much and should just let it do its thing...
The best way around this is to educate the developers. After you do an update in TortoiseSVN it shows you a list of affected files. Simply double clicking each file will give you the diff between them. Then you'll be able to see what changed between your version and the latest repository version.
Can I turn off automatic merging in Subversion? We're looking at moving from a check-out/edit/check-in style of version control system to Subversion, and during the evaluation we discovered that when you perform an Update action in TortoiseSVN (and presumably in any Subversion client?), if changes in the repository that need to be applied to files that you've been editing don't cause any conflicts then they'll be automatically/silently merged. This scares us a little, as it's possible that this merge, while not producing any compile errors, could at least introduce some logic errors that may not be easily detected. Very simple example: I'm working within a C# method changing some logic in the latter-part of the method, and somebody else changes the value that a variable gets initialised to at the start of the method. The other person's change isn't in the lines of code that I'm working on so there won't be a conflict; but it's possible to dramatically change the output of the method. What we were hoping the situation would be is that if a merge needs to occur, then the two files would be shown and at least a simple accept/reject change option be presented, so that at least we're aware that something has changed and are given the option to see if it impacts our code. Is there a way to do this with Subversion/TortoiseSVN? Or are we stuck in our present working ways too much and should just let it do its thing...
TITLE: Can I turn off automatic merging in Subversion? QUESTION: We're looking at moving from a check-out/edit/check-in style of version control system to Subversion, and during the evaluation we discovered that when you perform an Update action in TortoiseSVN (and presumably in any Subversion client?), if changes in the repository that need to be applied to files that you've been editing don't cause any conflicts then they'll be automatically/silently merged. This scares us a little, as it's possible that this merge, while not producing any compile errors, could at least introduce some logic errors that may not be easily detected. Very simple example: I'm working within a C# method changing some logic in the latter-part of the method, and somebody else changes the value that a variable gets initialised to at the start of the method. The other person's change isn't in the lines of code that I'm working on so there won't be a conflict; but it's possible to dramatically change the output of the method. What we were hoping the situation would be is that if a merge needs to occur, then the two files would be shown and at least a simple accept/reject change option be presented, so that at least we're aware that something has changed and are given the option to see if it impacts our code. Is there a way to do this with Subversion/TortoiseSVN? Or are we stuck in our present working ways too much and should just let it do its thing... ANSWER: The best way around this is to educate the developers. After you do an update in TortoiseSVN it shows you a list of affected files. Simply double clicking each file will give you the diff between them. Then you'll be able to see what changed between your version and the latest repository version.
[ "svn", "tortoisesvn" ]
19
6
13,705
5
0
2008-09-10T11:34:45.800000
2008-09-10T12:09:28.883000
53,807
53,835
Can you do "builds" with PHP scripts or an interpreted language?
Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques?
Hmm... I'd define "building" as something like "preparing, packaging and deploying all artifacts of a software system". The compilation to machine code is only one of many steps in the build. Others might be checking out the latest version of the code from scm-system, getting external dependencies, setting configuration values depending on the target the software gets deployed to and running some kind of test suite to ensure you've got a "working/running build" before you actually deploy. "Building" software can/must be done for any software, independent of your programming language. Interpreted languages have the "disadvantage" that syntactic or structural (meaning e.g. calling a method with wrong parameters etc.) errors normally will only be detected at runtime (if you don't have a separate step in your build which checks for such errors e.g. with PHPLint ). Thus (automated) Testcases (like Unit-Tests - see PHPUnit or SimpleTest - and Frontend-Tests - see Selenium ) are all the more important for big PHP projects to ensure the good health of the code. There's a great Build-Tool (like Ant for Java or Rake for Ruby) for PHP too: Phing CI-Systems like Xinc or Hudson are simply used to automagically (like anytime a change is checked into scm) package your code, check it for obvious errors, run your tests (in short: run your build) and report the results back to your development team.
Can you do "builds" with PHP scripts or an interpreted language? Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques?
TITLE: Can you do "builds" with PHP scripts or an interpreted language? QUESTION: Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques? ANSWER: Hmm... I'd define "building" as something like "preparing, packaging and deploying all artifacts of a software system". The compilation to machine code is only one of many steps in the build. Others might be checking out the latest version of the code from scm-system, getting external dependencies, setting configuration values depending on the target the software gets deployed to and running some kind of test suite to ensure you've got a "working/running build" before you actually deploy. "Building" software can/must be done for any software, independent of your programming language. Interpreted languages have the "disadvantage" that syntactic or structural (meaning e.g. calling a method with wrong parameters etc.) errors normally will only be detected at runtime (if you don't have a separate step in your build which checks for such errors e.g. with PHPLint ). Thus (automated) Testcases (like Unit-Tests - see PHPUnit or SimpleTest - and Frontend-Tests - see Selenium ) are all the more important for big PHP projects to ensure the good health of the code. There's a great Build-Tool (like Ant for Java or Rake for Ruby) for PHP too: Phing CI-Systems like Xinc or Hudson are simply used to automagically (like anytime a change is checked into scm) package your code, check it for obvious errors, run your tests (in short: run your build) and report the results back to your development team.
[ "build-process", "continuous-integration", "interpreted-language" ]
1
6
367
2
0
2008-09-10T11:41:01.087000
2008-09-10T12:01:40.960000
53,808
53,830
Good 15 minute Java question to ask recent college graduate
When interviewing college co-ops/interns or recent graduates it helps to have a Java programming question that they can do on a white board in 15 minutes. Does anyone have examples of good questions like this? A C++ question I was once asked in an interview was to write a string to integer function, which is along the lines of the level of question I am looking for examples of.
Some stuff that has shown up on SO: IsPalindrome(string s) ReverseWordsInString(string s): "I know java" --> "java know I" Other stuff that springs to mind: multiply a Vector with a Matrix (can this be done OO-Style?) echo (yes, a simple clone of the unix tool) cat (15 min should be enough, should weed out the clueless) a simple container for ints. Like ArrayList. Bonus question: Generic?
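For reference, minimal sketches of the first two warm-ups might look like this (class and method names are invented here; the answer only names the problems):

```java
// Hypothetical solutions to two of the warm-up questions above:
// IsPalindrome(string) and ReverseWordsInString(string).
public class InterviewWarmups {

    // True if s reads the same forwards and backwards (case-insensitive).
    static boolean isPalindrome(String s) {
        int i = 0, j = s.length() - 1;
        while (i < j) {
            if (Character.toLowerCase(s.charAt(i)) != Character.toLowerCase(s.charAt(j))) {
                return false;
            }
            i++;
            j--;
        }
        return true;
    }

    // "I know java" -> "java know I"
    static String reverseWords(String s) {
        String[] words = s.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int k = words.length - 1; k >= 0; k--) {
            sb.append(words[k]);
            if (k > 0) {
                sb.append(' ');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar"));
        System.out.println(reverseWords("I know java"));
    }
}
```

Both fit comfortably on a whiteboard in 15 minutes, which is roughly the bar the question sets.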
Good 15 minute Java question to ask recent college graduate When interviewing college co-ops/interns or recent graduates it helps to have a Java programming question that they can do on a white board in 15 minutes. Does anyone have examples of good questions like this? A C++ question I was once asked in an interview was to write a string to integer function, which is along the lines of the level of question I am looking for examples of.
TITLE: Good 15 minute Java question to ask recent college graduate QUESTION: When interviewing college co-ops/interns or recent graduates it helps to have a Java programming question that they can do on a white board in 15 minutes. Does anyone have examples of good questions like this? A C++ question I was once asked in an interview was to write a string to integer function, which is along the lines of the level of question I am looking for examples of. ANSWER: Some stuff that has shown up on SO: IsPalindrome(string s) ReverseWordsInString(string s): "I know java" --> "java know I" Other stuff that springs to mind: multiply a Vector with a Matrix (can this be done OO-Style?) echo (yes, a simple clone of the unix tool) cat (15 min should be enough, should weed out the clueless) a simple container for ints. Like ArrayList. Bonus question: Generic?
[ "java" ]
6
6
8,375
8
0
2008-09-10T11:41:06.237000
2008-09-10T11:55:54.407000
53,811
53,826
How do you normally set up your compiler's optimization settings?
Do you normally set your compiler to optimize for maximum speed or smallest code size? Or do you manually configure individual optimization settings? Why? I notice most of the time people tend to just leave compiler optimization settings to their default state, which with visual c++ means max speed. I've always felt that the default settings had more to do with looking good on benchmarks, which tend to be small programs that will fit entirely within the L2 cache, than with what's best for overall performance, so I normally set it to optimize for smallest size.
As a Gentoo user I have tried quite a few optimizations on the complete OS and there have been endless discussions on the Gentoo forums about it. Some good flags for GCC can be found in the wiki. In short, optimizing for size worked best on an old Pentium3 laptop with limited ram, but on my main desktop machine with a Core2Duo, -O2 gave better results over all. There's also a small script if you are interested in the x86 (32 bit) specific flags that are the most optimized. If you use gcc and really want to optimize a specific application, try ACOVEA. It runs a set of benchmarks, then recompiles them with all possible combinations of compile flags. There's an example using Huffman encoding on the site (lower is better): A relative graph of fitnesses: Acovea Best-of-the-Best: ************************************** (2.55366) Acovea Common Options: ******************************************* (2.86788) -O1: ********************************************** (3.0752) -O2: *********************************************** (3.12343) -O3: *********************************************** (3.1277) -O3 -ffast-math: ************************************************** (3.31539) -Os: ************************************************* (3.30573) (Note that it found -Os to be the slowest on this Opteron system.)
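As a quick sanity check of the speed-vs-size trade-off, you can compile the same translation unit twice and compare the binary sizes (a rough sketch assuming gcc is on the PATH; the file names are invented for the demo):

```shell
# Write a tiny C program, then build it for speed (-O2) and for size (-Os).
cat > /tmp/demo_opt.c <<'EOF'
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 1000; i++) sum += i * i;
    printf("%ld\n", sum);
    return 0;
}
EOF
gcc -O2 -o /tmp/demo_speed /tmp/demo_opt.c
gcc -Os -o /tmp/demo_size /tmp/demo_opt.c

# Compare the resulting binary sizes in bytes; on a trivial program the
# difference is small, but it grows with real code.
wc -c < /tmp/demo_speed
wc -c < /tmp/demo_size
```

Both binaries compute the same result, so this only shows the footprint difference, not the runtime difference a benchmark would measure.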
How do you normally set up your compiler's optimization settings? Do you normally set your compiler to optimize for maximum speed or smallest code size? Or do you manually configure individual optimization settings? Why? I notice most of the time people tend to just leave compiler optimization settings to their default state, which with visual c++ means max speed. I've always felt that the default settings had more to do with looking good on benchmarks, which tend to be small programs that will fit entirely within the L2 cache, than with what's best for overall performance, so I normally set it to optimize for smallest size.
TITLE: How do you normally set up your compiler's optimization settings? QUESTION: Do you normally set your compiler to optimize for maximum speed or smallest code size? Or do you manually configure individual optimization settings? Why? I notice most of the time people tend to just leave compiler optimization settings to their default state, which with visual c++ means max speed. I've always felt that the default settings had more to do with looking good on benchmarks, which tend to be small programs that will fit entirely within the L2 cache, than with what's best for overall performance, so I normally set it to optimize for smallest size. ANSWER: As a Gentoo user I have tried quite a few optimizations on the complete OS and there have been endless discussions on the Gentoo forums about it. Some good flags for GCC can be found in the wiki. In short, optimizing for size worked best on an old Pentium3 laptop with limited ram, but on my main desktop machine with a Core2Duo, -O2 gave better results over all. There's also a small script if you are interested in the x86 (32 bit) specific flags that are the most optimized. If you use gcc and really want to optimize a specific application, try ACOVEA. It runs a set of benchmarks, then recompiles them with all possible combinations of compile flags. There's an example using Huffman encoding on the site (lower is better): A relative graph of fitnesses: Acovea Best-of-the-Best: ************************************** (2.55366) Acovea Common Options: ******************************************* (2.86788) -O1: ********************************************** (3.0752) -O2: *********************************************** (3.12343) -O3: *********************************************** (3.1277) -O3 -ffast-math: ************************************************** (3.31539) -Os: ************************************************* (3.30573) (Note that it found -Os to be the slowest on this Opteron system.)
[ "c++", "c", "optimization", "compiler-optimization" ]
6
6
916
11
0
2008-09-10T11:41:42.630000
2008-09-10T11:52:50.093000
53,820
53,831
Why does windows XP minimize my swing full screen window on my second screen?
In the application I'm developing (in Java/swing), I have to show a full screen window on the second screen of the user. I did this using a code similar to the one you'll find below... But, as soon as I click in a window opened by windows explorer, or as soon as I open windows explorer (I'm using Windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help, import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JWindow; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Window; import javax.swing.JButton; import javax.swing.JToggleButton; import java.awt.Rectangle; import java.awt.GridBagLayout; import javax.swing.JLabel; public class FullScreenTest { private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35" private JPanel jContentPane = null; private JToggleButton jToggleButton = null; private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37" private JLabel jLabel = null; private Window window; /** * This method initializes jFrame * * @return javax.swing.JFrame */ private JFrame getJFrame() { if (jFrame == null) { jFrame = new JFrame(); jFrame.setSize(new Dimension(474, 105)); jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jFrame.setContentPane(getJContentPane()); } return jFrame; } /** * This method initializes jContentPane * * @return javax.swing.JPanel */ private JPanel getJContentPane() { if (jContentPane == null) { jContentPane = new JPanel(); jContentPane.setLayout(null); jContentPane.add(getJToggleButton(), null); } return jContentPane; } /** * This method initializes jToggleButton * * @return javax.swing.JToggleButton */ private JToggleButton getJToggleButton() { if (jToggleButton == null) { jToggleButton = new JToggleButton(); jToggleButton.setBounds(new Rectangle(50, 23, 360, 28)); jToggleButton.setText("Show Full Screen Window on 2nd screen"); jToggleButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent e) { showFullScreenWindow(jToggleButton.isSelected()); } }); } return jToggleButton; } protected void showFullScreenWindow(boolean b) { if(window==null){ window = initFullScreenWindow(); } window.setVisible(b); } private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); gd.setFullScreenWindow(window); return window; } /** * This method initializes jFSPanel * * @return javax.swing.JPanel */ private JPanel getJFSPanel() { if (jFSPanel == null) { jLabel = new JLabel(); jLabel.setBounds(new Rectangle(18, 19, 500, 66)); jLabel.setText("Hello! Now, juste open windows explorer and see what happens..."); jFSPanel = new JPanel(); jFSPanel.setLayout(null); jFSPanel.setSize(new Dimension(500, 107)); jFSPanel.add(jLabel, null); } return jFSPanel; } /** * @param args */ public static void main(String[] args) { FullScreenTest me = new FullScreenTest(); me.getJFrame().setVisible(true); } }
Usually when an application is in "full screen" mode it will take over the entire desktop. For a user to get to another window they would have to alt-tab to it. At that point windows would minimize the full screen app so that the other application could come to the front. This sounds like it may be a bug (undocumented feature...) in windows. It should probably not be doing this for a dual screen setup. One option to fix this is rather than setting it to be "full screen" just make the window the same size as the screen with location (0,0). You can get screen information from the GraphicsConfigurations on the GraphicsDevice.
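A sketch of that workaround (the class and helper names are hypothetical; the key point is sizing a plain window to the screen's GraphicsConfiguration bounds instead of using full-screen exclusive mode):

```java
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import javax.swing.JWindow;

public class ScreenSizedWindow {

    // Bounds of screen number `index` in the virtual desktop,
    // or null if that screen (or any display at all) is unavailable.
    static Rectangle screenBounds(int index) {
        if (GraphicsEnvironment.isHeadless()) {
            return null;
        }
        GraphicsDevice[] gds =
                GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        if (index < 0 || index >= gds.length) {
            return null;
        }
        return gds[index].getDefaultConfiguration().getBounds();
    }

    public static void main(String[] args) {
        Rectangle r = screenBounds(1); // the second screen, as in the question
        if (r == null) {
            System.out.println("second screen not available");
            return;
        }
        JWindow w = new JWindow();
        w.setBounds(r);     // same size and position as the screen...
        w.setVisible(true); // ...but not full-screen exclusive, so no auto-minimize
    }
}
```

Note that a secondary screen's bounds usually carry a non-zero x offset in the virtual desktop, so taking them from getBounds() is safer than hard-coding (0,0).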
Why does windows XP minimize my swing full screen window on my second screen? In the application I'm developing (in Java/swing), I have to show a full screen window on the second screen of the user. I did this using a code similar to the one you'll find below... But, as soon as I click in a window opened by windows explorer, or as soon as I open windows explorer (I'm using Windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help, import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JWindow; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Window; import javax.swing.JButton; import javax.swing.JToggleButton; import java.awt.Rectangle; import java.awt.GridBagLayout; import javax.swing.JLabel; public class FullScreenTest { private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35" private JPanel jContentPane = null; private JToggleButton jToggleButton = null; private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37" private JLabel jLabel = null; private Window window; /** * This method initializes jFrame * * @return javax.swing.JFrame */ private JFrame getJFrame() { if (jFrame == null) { jFrame = new JFrame(); jFrame.setSize(new Dimension(474, 105)); jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jFrame.setContentPane(getJContentPane()); } return jFrame; } /** * This method initializes jContentPane * * @return javax.swing.JPanel */ private JPanel getJContentPane() { if (jContentPane == null) { jContentPane = new JPanel(); jContentPane.setLayout(null); jContentPane.add(getJToggleButton(), null); } return jContentPane; } /** * This method initializes jToggleButton * * @return javax.swing.JToggleButton */ private JToggleButton getJToggleButton() { if (jToggleButton == null) { jToggleButton = new JToggleButton(); jToggleButton.setBounds(new Rectangle(50, 23, 360, 28)); jToggleButton.setText("Show Full Screen Window on 2nd screen"); jToggleButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent e) { showFullScreenWindow(jToggleButton.isSelected()); } }); } return jToggleButton; } protected void showFullScreenWindow(boolean b) { if(window==null){ window = initFullScreenWindow(); } window.setVisible(b); } private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); gd.setFullScreenWindow(window); return window; } /** * This method initializes jFSPanel * * @return javax.swing.JPanel */ private JPanel getJFSPanel() { if (jFSPanel == null) { jLabel = new JLabel(); jLabel.setBounds(new Rectangle(18, 19, 500, 66)); jLabel.setText("Hello! Now, juste open windows explorer and see what happens..."); jFSPanel = new JPanel(); jFSPanel.setLayout(null); jFSPanel.setSize(new Dimension(500, 107)); jFSPanel.add(jLabel, null); } return jFSPanel; } /** * @param args */ public static void main(String[] args) { FullScreenTest me = new FullScreenTest(); me.getJFrame().setVisible(true); } }
TITLE: Why does windows XP minimize my swing full screen window on my second screen? QUESTION: In the application I'm developing (in Java/swing), I have to show a full screen window on the second screen of the user. I did this using a code similar to the one you'll find below... But, as soon as I click in a window opened by windows explorer, or as soon as I open windows explorer (I'm using Windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help, import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JWindow; import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import java.awt.Window; import javax.swing.JButton; import javax.swing.JToggleButton; import java.awt.Rectangle; import java.awt.GridBagLayout; import javax.swing.JLabel; public class FullScreenTest { private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35" private JPanel jContentPane = null; private JToggleButton jToggleButton = null; private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37" private JLabel jLabel = null; private Window window; /** * This method initializes jFrame * * @return javax.swing.JFrame */ private JFrame getJFrame() { if (jFrame == null) { jFrame = new JFrame(); jFrame.setSize(new Dimension(474, 105)); jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); jFrame.setContentPane(getJContentPane()); } return jFrame; } /** * This method initializes jContentPane * * @return javax.swing.JPanel */ private JPanel getJContentPane() { if (jContentPane == null) { jContentPane = new JPanel(); jContentPane.setLayout(null); jContentPane.add(getJToggleButton(), null); } return jContentPane; } /** * This method initializes jToggleButton * * @return javax.swing.JToggleButton */ private JToggleButton getJToggleButton() { if (jToggleButton == null) { jToggleButton = new JToggleButton(); jToggleButton.setBounds(new Rectangle(50, 23, 360, 28)); jToggleButton.setText("Show Full Screen Window on 2nd screen"); jToggleButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent e) { showFullScreenWindow(jToggleButton.isSelected()); } }); } return jToggleButton; } protected void showFullScreenWindow(boolean b) { if(window==null){ window = initFullScreenWindow(); } window.setVisible(b); } private Window initFullScreenWindow() { GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice[] gds = ge.getScreenDevices(); GraphicsDevice gd = gds[1]; JWindow window = new JWindow(gd.getDefaultConfiguration()); window.setContentPane(getJFSPanel()); gd.setFullScreenWindow(window); return window; } /** * This method initializes jFSPanel * * @return javax.swing.JPanel */ private JPanel getJFSPanel() { if (jFSPanel == null) { jLabel = new JLabel(); jLabel.setBounds(new Rectangle(18, 19, 500, 66)); jLabel.setText("Hello! Now, juste open windows explorer and see what happens..."); jFSPanel = new JPanel(); jFSPanel.setLayout(null); jFSPanel.setSize(new Dimension(500, 107)); jFSPanel.add(jLabel, null); } return jFSPanel; } /** * @param args */ public static void main(String[] args) { FullScreenTest me = new FullScreenTest(); me.getJFrame().setVisible(true); } } ANSWER: Usually when an application is in "full screen" mode it will take over the entire desktop. For a user to get to another window they would have to alt-tab to it. At that point windows would minimize the full screen app so that the other application could come to the front. This sounds like it may be a bug (undocumented feature...) in windows. It should probably not be doing this for a dual screen setup. One option to fix this is rather than setting it to be "full screen" just make the window the same size as the screen with location (0,0). You can get screen information from the GraphicsConfigurations on the GraphicsDevice.
[ "java", "windows", "swing" ]
4
1
4,566
2
0
2008-09-10T11:47:55.860000
2008-09-10T11:57:02.483000
53,824
62,274
How to specify accepted certificates for Client Authentication in .NET SslStream
I am attempting to use the .NET System.Security.SslStream class to process the server side of an SSL/TLS stream with client authentication. To perform the handshake, I am using this code: SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback); sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false); Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subject names of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas?
It does not look like this is currently possible using the .NET libraries. I solved it by using the Mono class library implementation of System.Security.SslStream, which gives better access to overriding the server's behavior during the handshake.
How to specify accepted certificates for Client Authentication in .NET SslStream I am attempting to use the .NET System.Security.SslStream class to process the server side of an SSL/TLS stream with client authentication. To perform the handshake, I am using this code: SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback); sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false); Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subject names of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas?
TITLE: How to specify accepted certificates for Client Authentication in .NET SslStream QUESTION: I am attempting to use the .NET System.Security.SslStream class to process the server side of an SSL/TLS stream with client authentication. To perform the handshake, I am using this code: SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback); sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false); Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subject names of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas? ANSWER: It does not look like this is currently possible using the .NET libraries. I solved it by using the Mono class library implementation of System.Security.SslStream, which gives better access to overriding the server's behavior during the handshake.
[ "c#", ".net", "ssl", "sspi" ]
3
3
3,006
3
0
2008-09-10T11:49:44.593000
2008-09-15T12:10:32.083000
53,827
53,900
Checking available stack size in C
I'm using MinGW with GCC 3.4.5 (mingw-special vista r3). My C application uses a lot of stack so I was wondering is there any way I can tell programmatically how much stack is remaining so I can cleanly handle the situation if I find that I'm about to run out. If not what other ways would you work around the problem of potentially running out of stack space? I've no idea what size of stack I'll start with so would need to identify that programmatically also.
Raymond Chen ( The Old New Thing ) has a good answer to this sort of question: If you have to ask, you're probably doing something wrong. Here's some Win32 details on stack allocation: MSDN. If you think you might be limited by stack space, you will almost certainly be limited by available virtual memory, in which case, you will need to find a different solution. What exactly are you trying to do?
Checking available stack size in C I'm using MinGW with GCC 3.4.5 (mingw-special vista r3). My C application uses a lot of stack so I was wondering is there any way I can tell programmatically how much stack is remaining so I can cleanly handle the situation if I find that I'm about to run out. If not what other ways would you work around the problem of potentially running out of stack space? I've no idea what size of stack I'll start with so would need to identify that programmatically also.
TITLE: Checking available stack size in C QUESTION: I'm using MinGW with GCC 3.4.5 (mingw-special vista r3). My C application uses a lot of stack so I was wondering is there any way I can tell programmatically how much stack is remaining so I can cleanly handle the situation if I find that I'm about to run out. If not what other ways would you work around the problem of potentially running out of stack space? I've no idea what size of stack I'll start with so would need to identify that programmatically also. ANSWER: Raymond Chen ( The Old New Thing ) has a good answer to this sort of question: If you have to ask, you're probably doing something wrong. Here's some Win32 details on stack allocation: MSDN. If you think you might be limited by stack space, you will almost certainly be limited by available virtual memory, in which case, you will need to find a different solution. What exactly are you trying to do?
[ "c", "stack", "mingw" ]
46
2
86,092
9
0
2008-09-10T11:53:16.160000
2008-09-10T12:38:18.407000
53,841
53,875
Can XpsDocuments be serialized to XML for storage in a database?
And, if not, is the only other alternative a blob?
XPS documents are zip files that contain XML. You could extract the contents of the zip file and store that in the database, but then you would need to unzip and re-zip every time data came in or out of the database. Edit: In other words, not in any practical manner.
Can XpsDocuments be serialized to XML for storage in a database? And, if not, is the only other alternative a blob?
TITLE: Can XpsDocuments be serialized to XML for storage in a database? QUESTION: And, if not, is the only other alternative a blob? ANSWER: XPS documents are zip files that contain XML. You could extract the contents of the zip file and store that in the database, but then you would need to unzip and re-zip every time data came in or out of the database. Edit: In other words, not in any practical manner.
[ "sql-server", "xpsdocument" ]
0
1
212
1
0
2008-09-10T12:04:29.767000
2008-09-10T12:27:04.050000
53,845
53,856
Java Compiler Options to produce .exe files
What compiler (I'm using gcj 4.x) options should I use to generate an "exe" file for my java application to run in windows?
To compile the Java program MyJavaProg.java, type: gcj -c -g -O MyJavaProg.java To link it into an executable named MyJavaProg.exe, use the command: gcj --main=MyJavaProg -o MyJavaProg.exe MyJavaProg.o
[ "java", "windows", "compiler-construction", "gnu" ]
7
12
10,699
1
0
2008-09-10T12:09:26.650000
2008-09-10T12:13:06.767000
53,849
53,863
How do I tokenize a string in C++?
Java has a convenient split method: String str = "The quick brown fox"; String[] results = str.split(" "); Is there an easy way to do this in C++?
C++ standard library algorithms are pretty universally based around iterators rather than concrete containers. Unfortunately this makes it hard to provide a Java-like split function in the C++ standard library, even though nobody argues that this would be convenient. But what would its return type be? std::vector<std::string>? Maybe, but then we’re forced to perform (potentially redundant and costly) allocations. Instead, C++ offers a plethora of ways to split strings based on arbitrarily complex delimiters, but none of them is encapsulated as nicely as in other languages. The numerous ways fill whole blog posts. At its simplest, you could iterate using std::string::find until you hit std::string::npos, and extract the contents using std::string::substr. A more fluid (and idiomatic, but basic) version for splitting on whitespace would use a std::istringstream: auto iss = std::istringstream{"The quick brown fox"}; auto str = std::string{}; while (iss >> str) { process(str); } Using std::istream_iterator<std::string>, the contents of the string stream could also be copied into a vector using its iterator range constructor. Multiple libraries (such as Boost.Tokenizer ) offer specific tokenisers. More advanced splitting requires regular expressions. C++ provides the std::regex_token_iterator for this purpose in particular: auto const str = "The quick brown fox"s; auto const re = std::regex{R"(\s+)"}; auto const vec = std::vector<std::string>( std::sregex_token_iterator{begin(str), end(str), re, -1}, std::sregex_token_iterator{} );
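The std::istringstream idiom from the answer can be wrapped into a small reusable helper (the function name here is ours, not part of the standard library):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split on any run of whitespace, the way Java's str.split("\\s+") would.
std::vector<std::string> split_on_whitespace(const std::string& input) {
    std::istringstream iss(input);
    std::vector<std::string> tokens;
    std::string token;
    while (iss >> token) {  // operator>> skips whitespace, reads one word
        tokens.push_back(token);
    }
    return tokens;
}
```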
[ "c++", "string", "split", "tokenize" ]
478
177
658,924
37
0
2008-09-10T12:10:25.527000
2008-09-10T12:18:25.383000
53,857
53,864
How to create Virtual COM ports
I am trying to implement a software Null Modem. Any suggestion how to create virtual COM Ports? Looking for tips, strategy, known techniques, etc..
I have used the open-source com0com on windows for this before, and it worked well. The related com2tcp project was more challenging to get working reliably.
[ "serial-port", "virtual-serial-port" ]
4
4
3,164
1
0
2008-09-10T12:14:05.750000
2008-09-10T12:18:50.467000
53,873
53,938
Can a Linq query retrieve BLOBs from a Sql Database?
Can a Linq query retrieve BLOBs from a Sql Database? And how do they come out?
LINQ-To-SQL classes create properties of type System.Data.Linq.Binary for all binary and varbinary fields in SQL-Server. The Binary type has a .ToArray() method that returns a byte[], and its constructor can take a byte[]. Older versions of SQLMetal generated properties of type byte[], but the problem with these was that they failed in any joins. I think that's the main reason they replaced it with the Binary type, which implements IEquatable<Binary>.
[ "sql-server", "linq-to-sql", "blob" ]
5
5
3,301
2
0
2008-09-10T12:26:12.197000
2008-09-10T12:51:58.483000
53,911
53,929
In vim, how do I go back to where I was before a search?
Programming in vim I often go search for something, yank it, then go back to where I was, insert it, modify it. The problem is that after I search and find, I need to MANUALLY find my way back to where I was. Is there an automatic way to go back to where I was when I initiated my last search?
Ctrl + O takes me to the previous location. Don't know about location before the search. Edit: Also, `. will take you to the last change you made.
[ "vim", "vi" ]
286
449
91,518
7
0
2008-09-10T12:42:43.670000
2008-09-10T12:47:51.957000
53,913
54,031
How can I lay images out in a grid?
I'm trying to produce sheets of photographs with captions arranged in a grid using XSLT and XSL-FO. The photo URLs and captions are produced using a FOR XML query against an SQL Server database, and the number of photos returned varies from sheet to sheet. I want to lay the photos out in four columns, filling the grid from left to right and from top to bottom. In HTML I'd do this by putting each photo and caption into a div and using "float: left" to make them flow into the grid. Is there a similarly elegant method using XSL-FO?
To keep life simple I would normally set up a table for this; it's quite simple and will ensure that things get laid out right. If you wanted to do it similarly to how you would do it in HTML then you should lay out block-container elements. However you decide to do it, I would always recommend using the ZVON Reference site. Nice lookup of elements and available attributes, and while their XSL-FO reference doesn't include much in the way of explanation, every page deep links to the standards document.
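A minimal sketch of the table approach (the fo: element names are standard XSL-FO; the column widths and file names are placeholders):

```xml
<fo:table table-layout="fixed" width="100%">
  <fo:table-column column-width="25%"/>
  <fo:table-column column-width="25%"/>
  <fo:table-column column-width="25%"/>
  <fo:table-column column-width="25%"/>
  <fo:table-body>
    <fo:table-row>
      <fo:table-cell>
        <fo:block>
          <fo:external-graphic src="url('photo1.jpg')"/>
        </fo:block>
        <fo:block>Caption 1</fo:block>
      </fo:table-cell>
      <!-- three more cells here; start a new fo:table-row after every fourth photo -->
    </fo:table-row>
  </fo:table-body>
</fo:table>
```

In the XSLT you would typically group the photo nodes in fours (e.g. selecting on position() mod 4) to emit one fo:table-row per group.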
[ "xml", "xslt", "xsl-fo" ]
2
4
563
2
0
2008-09-10T12:42:53.450000
2008-09-10T13:27:10.520000
53,945
1,863,308
Dynamically inserting javascript into HTML that uses document.write
I am currently loading a lightbox style popup that loads its HTML from an XHR call. This content is then displayed in a 'modal' popup using element.innerHTML = content This works like a charm. In another section of this website I use a Flickr 'badge' ( http://www.elliotswan.com/2006/08/06/custom-flickr-badge-api-documentation/ ) to load flickr images dynamically. This is done by including a script tag that loads a flickr javascript, which in turn does some document.write statements. Both of them work perfectly when included in the HTML. Only when loading the flickr badge code inside the lightbox, no content is rendered at all. It seems that using innerHTML to write document.write statements is taking it a step too far, but I cannot find any clue in the javascript implementations (FF2&3, IE6&7) of this behavior. Can anyone clarify if this should or shouldn't work? Thanks.
In general, script tags aren't executed when using innerHTML. In your case, this is good, because the document.write call would wipe out everything that's already in the page. However, that leaves you without whatever HTML document.write was supposed to add. jQuery's HTML manipulation methods will execute scripts in HTML for you, the trick is then capturing the calls to document.write and getting the HTML in the proper place. If it's simple enough, then something like this will do: var content = ''; document.write = function(s) { content += s; }; // execute the script $('#foo').html(markupWithScriptInIt); $('#foo.whereverTheDocumentWriteContentGoes').html(content); It gets complicated though. If the script is on another domain, it will be loaded asynchronously, so you'll have to wait until it's done to get the content. Also, what if it just writes the HTML into the middle of the fragment without a wrapper element that you can easily select? writeCapture.js (full disclosure: I wrote it) handles all of these problems. I'd recommend just using it, but at the very least you can look at the code to see how it handles everything. EDIT: Here is a page demonstrating what sounds like the effect you want.
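The capture trick in the answer can be exercised outside a browser; here `document` is a plain stand-in object rather than the real DOM document:

```javascript
// Stand-in for the browser's document object (we're not in a browser here).
const document = {};

// Redirect document.write into a buffer instead of the page.
let captured = "";
document.write = function (s) { captured += s; };

// A third-party script (like the Flickr badge) now appends its markup
// to the buffer instead of wiping the page:
document.write('<img src="photo1.jpg">');
document.write('<img src="photo2.jpg">');

// The buffer can then be placed wherever the badge content belongs,
// e.g. element.innerHTML = captured;
```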
[ "javascript", "html", "ajax" ]
18
15
26,479
7
0
2008-09-10T12:55:43.950000
2009-12-07T22:16:56.530000
53,956
54,292
HTML to Image .tiff File
Is there a way to convert an HTML string into an Image .tiff file? I am using C# .NET 3.5. The requirement is to give the user an option to fax a confirmation. The confirmation is created with XML and an XSLT. Typically it is e-mailed. Is there a way I can take the HTML string generated by the transformation and convert that to a .tiff or any image that can be faxed? 3rd party software is allowed, however the cheaper the better. We are using a 3rd party fax library that will only accept .tiff images, but if I can get the HTML to be any image I can convert it into a .tiff.
Here are some free-as-in-beer possibilities: You can use the PDFCreator printer driver that comes with ghostscript and print directly to a TIFF file or many other formats. If you have MS Office installed, the Microsoft Office Document Image Writer will produce a file you can convert to other formats. But in general, your best bet is to print to a driver that will produce an image file of some kind or a Windows Metafile (.wmf) file. Is there some reason why you can't just print-to-fax? Does the third-party software not support a printer driver? That's unusual these days.
[ "c#", ".net", "html", "image", "tiff" ]
3
4
4,388
2
0
2008-09-10T13:00:21.897000
2008-09-10T14:59:40.190000
53,961
54,550
Locate and add project reference for .Net assembly containing a given type?
I'm working with a large (270+ project) VS.Net solution. Yes, I know this is pushing the friendship with VS but it's inherited and blah blah. Anyway, to speed up the solution load and compile time I've removed all projects that I'm not currently working on... which in turn has removed those project references from the projects I want to retain. So now I'm going through a mind numbing process of adding binary references to the retained projects so that the referenced Types can be found. Here's how I'm working at present; Attempt to compile, get thousands of errors, 'type or namespace missing' Copy the first line of the error list to the clipboard Using a perl script hooked up to a hotkey (AHK) I extract the type name from the error message and store it in the windows clipboard I paste the type name into source insight symbol browser and note the assembly containing the Type I go back to VS and add that assembly as a binary reference to the relevant project So now, after about 30 mins I'm thinking there's just got to be a quicker way...
No, there currently isn't a built-in quicker way. I would suggest not modifying the existing solution and create a new solution with new projects that duplicate (e.g. rename and edit) the projects you want to work on. If you find that the solution with the hundreds of projects is an issue for you then you'll likely just need to work on a subset. Start with a couple of new projects, add the binary (not project) reference and go from there.
[ "c#", ".net", "visual-studio" ]
1
1
3,834
4
0
2008-09-10T13:04:01.480000
2008-09-10T16:11:53.450000
53,965
53,993
What is wrong with this PHP regular expression?
$output = preg_replace("|(/D)(/s+)(/d+)(;)|", "//1,//3;", $output); I'm trying to replace any alphabetical character followed by one or more whitespace characters (tabs and/or spaces), followed by one or more numerical characters, followed by a semicolon, with the alphabetical character followed by a comma, followed by the numerical digits and then the semicolon. I'll provide an example: Start: hello world 1007; End: hello world,1007;
The two | at the start and end probably are incorrect - and should both be forward-slashes. All other forward slashes should be backward slashes (and need escaping). And since PHP 4.0.4 $n is the preferred way of referring to a capture group. $output = preg_replace("/(\\D)\\s+(\\d+;)/", "$1,$2", $output); If you use single quotes you don't need to escape your backslashes: $output = preg_replace('/(\D)\s+(\d+;)/', '$1,$2', $output);
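The corrected pattern behaves the same in any PCRE-style engine; as a quick sanity check, here it is in Python's re module rather than PHP, with identical \D/\s/\d syntax:

```python
import re

# Equivalent of: preg_replace('/(\D)\s+(\d+;)/', '$1,$2', $output)
output = re.sub(r"(\D)\s+(\d+;)", r"\1,\2", "hello world 1007;")
print(output)  # hello world,1007;
```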
[ "php", "regex" ]
2
6
420
3
0
2008-09-10T13:04:57.033000
2008-09-10T13:10:19.797000
53,967
54,022
How would you implement the IEnumerator interface?
I have a class that maps objects to objects, but unlike a dictionary it maps them both ways. I am now trying to implement a custom IEnumerator interface that iterates through the values. public class Mapper<K, T> : IEnumerable<T>, IEnumerator<T> { C5.TreeDictionary<K, T> KToTMap = new TreeDictionary<K, T>(); C5.HashDictionary<T, K> TToKMap = new HashDictionary<T, K>(); public void Add(K key, T value) { KToTMap.Add(key, value); TToKMap.Add(value, key); } public int Count { get { return KToTMap.Count; } } public K this[T obj] { get { return TToKMap[obj]; } } public T this[K obj] { get { return KToTMap[obj]; } } public IEnumerator<T> GetEnumerator() { return KToTMap.Values.GetEnumerator(); } public T Current { get { throw new NotImplementedException(); } } public void Dispose() { throw new NotImplementedException(); } object System.Collections.IEnumerator.Current { get { throw new NotImplementedException(); } } public bool MoveNext() { ; } public void Reset() { throw new NotImplementedException(); } }
First, don't make your collection object implement IEnumerator<>. This leads to bugs. (Consider the situation where two threads are iterating over the same collection). Implementing an enumerator correctly turns out to be non-trivial, so C# 2.0 added special language support for doing it, based on the 'yield return' statement. Raymond Chen's recent series of blog posts ("The implementation of iterators in C# and its consequences") is a good place to get up to speed. Part 1: https://web.archive.org/web/20081216071723/http://blogs.msdn.com/oldnewthing/archive/2008/08/12/8849519.aspx Part 2: https://web.archive.org/web/20080907004812/http://blogs.msdn.com/oldnewthing/archive/2008/08/13/8854601.aspx Part 3: https://web.archive.org/web/20080824210655/http://blogs.msdn.com/oldnewthing/archive/2008/08/14/8862242.aspx Part 4: https://web.archive.org/web/20090207130506/http://blogs.msdn.com/oldnewthing/archive/2008/08/15/8868267.aspx
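For readers outside C#, 'yield return' has a close analogue in JavaScript generator functions; this sketch (an analogy, not C#) shows a collection handing out a fresh iterator per loop instead of acting as its own enumerator:

```javascript
// A two-way map whose iteration is delegated to a generator, so every
// for...of loop (or spread) gets independent iterator state -- avoiding
// the shared-state bug the answer warns about.
class Mapper {
  constructor() {
    this.kToT = new Map();
    this.tToK = new Map();
  }
  add(key, value) {
    this.kToT.set(key, value);
    this.tToK.set(value, key);
  }
  *[Symbol.iterator]() {  // analogous to GetEnumerator written with yield return
    yield* this.kToT.values();
  }
}

const m = new Mapper();
m.add(1, "one");
m.add(2, "two");
const values = [...m];  // each spread/loop creates a new iterator
```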
[ "c#", ".net", "collections", "ienumerable", "ienumerator" ]
12
19
17,726
5
0
2008-09-10T13:05:56.530000
2008-09-10T13:23:34.820000
53,989
109,016
Linking directly to a SWF, what are the downsides?
Usually Flash and Flex applications are embedded in HTML using either a combination of object and embed tags, or more commonly using JavaScript. However, if you link directly to a SWF file it will open in the browser window and without looking in the address bar you can't tell that it wasn't embedded in HTML with the size set to 100% width and height. Considering the overhead of the HTML, CSS and JavaScript needed to embed a Flash or Flex application filling 100% of the browser window, what are the downsides of linking directly to the SWF file instead? What are the upsides? I can think of one upside and three downsides: you don't need the 100+ lines of HTML, JavaScript and CSS that are otherwise required, but you have no plugin detection, no version checking and you lose your best SEO option (progressive enhancement). Update don't get hung up on the 100+ lines, I simply mean that the amount of code needed to embed a SWF is quite a lot (and I mean including libraries like SWFObject), and it's just for displaying the SWF, which can be done without a single line by linking to it directly.
Upsides for linking directly to SWF file: Faster access You know it's a flash movie even before you click on the link Skipping the html & js files (You won't use CSS to display 100% flash movie anyway) Downsides: You have little control over movie defaults. You can't use custom background colors, transparency etc. You can't use flashVars to send data to the movie from the HTML Can't use fscommand from the movie to the page Movie proportions are never the same as the user's window's aspect ratio You can't compensate for browser incompatibility (The next new browser comes out and you're in trouble) No SEO No page title, bad if you want people to bookmark properly. No plugin information, download links etc. If your SWF connects to external data sources, you might have cross domain problems. Renaming the SWF file will also rename the link. Bad for versioning. In short, for a complicated application - always use the HTML. For a simple animation movie you can go either way.
Linking directly to a SWF, what are the downsides? Usually Flash and Flex applications are embedded in HTML using either a combination of object and embed tags, or more commonly using JavaScript. However, if you link directly to a SWF file it will open in the browser window and without looking in the address bar you can't tell that it wasn't embedded in HTML with the size set to 100% width and height. Considering the overhead of the HTML, CSS and JavaScript needed to embed a Flash or Flex application filling 100% of the browser window, what are the downsides of linking directly to the SWF file instead? What are the upsides? I can think of one upside and three downsides: you don't need the 100+ lines of HTML, JavaScript and CSS that are otherwise required, but you have no plugin detection, no version checking and you lose your best SEO option (progressive enhancement). Update don't get hung up on the 100+ lines, I simply mean that the amount of code needed to embed a SWF is quite a lot (and I mean including libraries like SWFObject), and it's just for displaying the SWF, which can be done without a single line by linking to it directly.
TITLE: Linking directly to a SWF, what are the downsides? QUESTION: Usually Flash and Flex applications are embedded in HTML using either a combination of object and embed tags, or more commonly using JavaScript. However, if you link directly to a SWF file it will open in the browser window and without looking in the address bar you can't tell that it wasn't embedded in HTML with the size set to 100% width and height. Considering the overhead of the HTML, CSS and JavaScript needed to embed a Flash or Flex application filling 100% of the browser window, what are the downsides of linking directly to the SWF file instead? What are the upsides? I can think of one upside and three downsides: you don't need the 100+ lines of HTML, JavaScript and CSS that are otherwise required, but you have no plugin detection, no version checking and you lose your best SEO option (progressive enhancement). Update don't get hung up on the 100+ lines, I simply mean that the amount of code needed to embed a SWF is quite a lot (and I mean including libraries like SWFObject), and it's just for displaying the SWF, which can be done without a single line by linking to it directly. ANSWER: Upsides for linking directly to SWF file: Faster access You know it's a flash movie even before you click on the link Skipping the html & js files (You won't use CSS to display 100% flash movie anyway) Downsides: You have little control over movie defaults. You can't use custom background colors, transparency etc. You can't use flashVars to send data to the movie from the HTML Can't use fscommand from the movie to the page Movie proportions are never the same as the user's window's aspect ratio You can't compensate for browser incompatibility (The next new browser comes out and you're in trouble) No SEO No page title, bad if you want people to bookmark properly. No plugin information, download links etc. If your SWF connects to external data sources, you might have cross domain problems. Renaming the SWF file will also rename the link. Bad for versioning. In short, for a complicated application - always use the HTML. For a simple animation movie you can go either way.
[ "apache-flex", "flash", "embed" ]
8
7
1,522
7
0
2008-09-10T13:09:32.153000
2008-09-20T19:02:21.760000
53,997
54,008
Any good AJAX framework for Google App Engine apps?
I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Does anyone have any ideas? I am thinking about Google Web Toolkit; how good is it in terms of creating AJAX for Google App Engine?
As Google Web Toolkit is a subset of Java it works best when you use Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other. jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com. Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it.
Any good AJAX framework for Google App Engine apps? I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Does anyone have any ideas? I am thinking about Google Web Toolkit; how good is it in terms of creating AJAX for Google App Engine?
TITLE: Any good AJAX framework for Google App Engine apps? QUESTION: I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Does anyone have any ideas? I am thinking about Google Web Toolkit; how good is it in terms of creating AJAX for Google App Engine? ANSWER: As Google Web Toolkit is a subset of Java it works best when you use Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other. jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com. Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it.
[ "python", "ajax", "google-app-engine" ]
14
12
11,931
11
0
2008-09-10T13:12:07.267000
2008-09-10T13:14:54.653000
54,001
54,027
Could not load type 'XXX.Global'
Migrating a project from ASP.NET 1.1 to ASP.NET 2.0 and I keep hitting this error. I don't actually need Global because I am not adding anything to it, but after I remove it I get more errors.
There are a few things you can try with this; it seems to happen a lot and the solution varies for everyone. If you are still using the IIS virtual directory make sure it's pointed to the correct directory and also check the ASP.NET version it is set to, make sure it is set to ASP.NET 2.0. Clear out your bin/debug/obj all of them. Do a Clean solution and then a Build Solution. Check your project file in a text editor and make sure where it's looking for the global file is correct, sometimes it doesn't change the directory. Remove the global from the solution and add it back after saving and closing. Make sure all the script tags in the ASPX file point to the correct one after. You can try running the Convert to Web Application tool, that redoes all of the code and project files. IIS Express is using the wrong root directory (see answer in VS 2012 launching app based on wrong path ) Make sure you close VS after you try them. Those are some things I know to try. Hope one of them works for you.
Could not load type 'XXX.Global' Migrating a project from ASP.NET 1.1 to ASP.NET 2.0 and I keep hitting this error. I don't actually need Global because I am not adding anything to it, but after I remove it I get more errors.
TITLE: Could not load type 'XXX.Global' QUESTION: Migrating a project from ASP.NET 1.1 to ASP.NET 2.0 and I keep hitting this error. I don't actually need Global because I am not adding anything to it, but after I remove it I get more errors. ANSWER: There are a few things you can try with this; it seems to happen a lot and the solution varies for everyone. If you are still using the IIS virtual directory make sure it's pointed to the correct directory and also check the ASP.NET version it is set to, make sure it is set to ASP.NET 2.0. Clear out your bin/debug/obj all of them. Do a Clean solution and then a Build Solution. Check your project file in a text editor and make sure where it's looking for the global file is correct, sometimes it doesn't change the directory. Remove the global from the solution and add it back after saving and closing. Make sure all the script tags in the ASPX file point to the correct one after. You can try running the Convert to Web Application tool, that redoes all of the code and project files. IIS Express is using the wrong root directory (see answer in VS 2012 launching app based on wrong path ) Make sure you close VS after you try them. Those are some things I know to try. Hope one of them works for you.
[ "asp.net", "visual-studio" ]
62
57
121,811
18
0
2008-09-10T13:13:14.953000
2008-09-10T13:25:51.727000
54,036
54,074
Programmatically access browser history
How can I create an application to read all my browser (Firefox) history? I noticed that I have in C:\Users\user.name\AppData\Local\Mozilla\Firefox\Profiles\646vwtnu.default what looks like a sqlite database (urlclassifier3.sqlite) but I don't know if it's really what is used to store the history information. I searched for examples on how to do this but didn't find anything. PS: although the title is similar I believe this question is not the same as "How do you access browser history?"
I believe places.sqlite is the one you should be looking into for history (Firefox 3). Below are a couple of Mozilla wiki entries that have some info on the subject. Mozilla 2: Unified Storage Browser History (see especially section "Database Design" here) In earlier versions of Firefox they stored history in a file called history.dat, which was encoded in a format called "Mork". This perl script by Jamie Zawinski can be used to parse Mork files.
Programmatically access browser history How can I create an application to read all my browser (Firefox) history? I noticed that I have in C:\Users\user.name\AppData\Local\Mozilla\Firefox\Profiles\646vwtnu.default what looks like a sqlite database (urlclassifier3.sqlite) but I don't know if it's really what is used to store the history information. I searched for examples on how to do this but didn't find anything. PS: although the title is similar I believe this question is not the same as "How do you access browser history?"
TITLE: Programmatically access browser history QUESTION: How can I create an application to read all my browser (Firefox) history? I noticed that I have in C:\Users\user.name\AppData\Local\Mozilla\Firefox\Profiles\646vwtnu.default what looks like a sqlite database (urlclassifier3.sqlite) but I don't know if it's really what is used to store the history information. I searched for examples on how to do this but didn't find anything. PS: although the title is similar I believe this question is not the same as "How do you access browser history?" ANSWER: I believe places.sqlite is the one you should be looking into for history (Firefox 3). Below are a couple of Mozilla wiki entries that have some info on the subject. Mozilla 2: Unified Storage Browser History (see especially section "Database Design" here) In earlier versions of Firefox they stored history in a file called history.dat, which was encoded in a format called "Mork". This perl script by Jamie Zawinski can be used to parse Mork files.
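Since places.sqlite is an ordinary SQLite database, any SQLite binding can read it. A hedged sketch follows: the `moz_places` table and its `url`/`title`/`visit_count` columns match the Firefox 3 schema described on the Mozilla wiki, but verify them against your own profile before relying on them. The demo runs against a throwaway database of the same shape, since Firefox locks the live file while running.

```python
import os
import sqlite3
import tempfile

def top_history(db_path, limit=10):
    """Return the most-visited (url, title, visit_count) rows.

    Schema assumption: a moz_places table as documented on the
    Mozilla wiki for Firefox 3 -- check your own places.sqlite.
    """
    # Firefox keeps places.sqlite locked while running; copy the
    # file somewhere else first if you need to read a live profile.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT url, title, visit_count FROM moz_places "
            "ORDER BY visit_count DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()

# Demo: a throwaway database with the same shape as moz_places.
db_path = os.path.join(tempfile.mkdtemp(), "places.sqlite")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE moz_places (url TEXT, title TEXT, visit_count INTEGER)")
conn.executemany(
    "INSERT INTO moz_places VALUES (?, ?, ?)",
    [("http://example.org/a", "A", 5), ("http://example.org/b", "B", 9)],
)
conn.commit()
conn.close()
print(top_history(db_path, 1))  # → [('http://example.org/b', 'B', 9)]
```

To read a real profile, point `db_path` at a copy of the places.sqlite under your Profiles directory.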
[ "firefox", "sqlite" ]
4
5
8,417
4
0
2008-09-10T13:31:03.850000
2008-09-10T13:46:36.467000
54,037
54,058
Credit card expiration dates - Inclusive or exclusive?
Say you've got a credit card number with an expiration date of 05/08 - i.e. May 2008. Does that mean the card expires on the morning of the 1st of May 2008, or the night of the 31st of May 2008?
It took me a couple of minutes to find a site that I could source for this. The card is valid until the last day of the month indicated, after the last [sic] 1 day of the next month; the card cannot be used to make a purchase if the merchant attempts to obtain an authorization. - Source Also, while looking this up, I found an interesting article on Microsoft's website using an example like this, exec summary: Access 2000 for a month/year defaults to the first day of the month, here's how to override that to calculate the end of the month like you'd want for a credit card. Additionally, this page has everything you ever wanted to know about credit cards. This is assumed to be a typo and that it should read "..., after the first day of the next month;..."
Credit card expiration dates - Inclusive or exclusive? Say you've got a credit card number with an expiration date of 05/08 - i.e. May 2008. Does that mean the card expires on the morning of the 1st of May 2008, or the night of the 31st of May 2008?
TITLE: Credit card expiration dates - Inclusive or exclusive? QUESTION: Say you've got a credit card number with an expiration date of 05/08 - i.e. May 2008. Does that mean the card expires on the morning of the 1st of May 2008, or the night of the 31st of May 2008? ANSWER: It took me a couple of minutes to find a site that I could source for this. The card is valid until the last day of the month indicated, after the last [sic] 1 day of the next month; the card cannot be used to make a purchase if the merchant attempts to obtain an authorization. - Source Also, while looking this up, I found an interesting article on Microsoft's website using an example like this, exec summary: Access 2000 for a month/year defaults to the first day of the month, here's how to override that to calculate the end of the month like you'd want for a credit card. Additionally, this page has everything you ever wanted to know about credit cards. This is assumed to be a typo and that it should read "..., after the first day of the next month;..."
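In code, "valid through the last day of the month" means comparing against the month's final calendar day, not the first. A minimal Python sketch (function names invented here for illustration):

```python
import calendar
from datetime import date

def expiry_date(month, year):
    """Last calendar day of the expiry month -- the card works through this day."""
    # monthrange returns (weekday of day 1, number of days in month).
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, last_day)

def is_expired(month, year, today):
    # Expired only once 'today' is strictly past the last day of the month.
    return today > expiry_date(month, year)

print(expiry_date(5, 2008))                    # → 2008-05-31
print(is_expired(5, 2008, date(2008, 5, 31)))  # → False (still valid that night)
print(is_expired(5, 2008, date(2008, 6, 1)))   # → True
```

`calendar.monthrange` also handles leap years, so an expiry of 02/08 correctly runs through 29 February 2008.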
[ "e-commerce", "payment" ]
134
139
228,876
10
0
2008-09-10T13:31:29.917000
2008-09-10T13:40:39.443000
54,038
54,126
Running a scheduled task in a Wordpress plug-in
I'm trying to write a Wordpress plug-in that automatically posts a blog post at a certain time of day. For example, read a bunch of RSS feeds and post a daily digest of all new entries. There are plug-ins that do something similar to what I want, but many of them rely on a cron job for the automated scheduling. I'll do that if I have to, but I was hoping there was a better way. Getting a typical Wordpress user to add a cron job isn't exactly friendly. Is there a good way to schedule a task that runs from a Wordpress plug-in? It doesn't have to run at exactly the right time.
http://codex.wordpress.org/Function_Reference/wp_schedule_event
Running a scheduled task in a Wordpress plug-in I'm trying to write a Wordpress plug-in that automatically posts a blog post at a certain time of day. For example, read a bunch of RSS feeds and post a daily digest of all new entries. There are plug-ins that do something similar to what I want, but many of them rely on a cron job for the automated scheduling. I'll do that if I have to, but I was hoping there was a better way. Getting a typical Wordpress user to add a cron job isn't exactly friendly. Is there a good way to schedule a task that runs from a Wordpress plug-in? It doesn't have to run at exactly the right time.
TITLE: Running a scheduled task in a Wordpress plug-in QUESTION: I'm trying to write a Wordpress plug-in that automatically posts a blog post at a certain time of day. For example, read a bunch of RSS feeds and post a daily digest of all new entries. There are plug-ins that do something similar to what I want, but many of them rely on a cron job for the automated scheduling. I'll do that if I have to, but I was hoping there was a better way. Getting a typical Wordpress user to add a cron job isn't exactly friendly. Is there a good way to schedule a task that runs from a Wordpress plug-in? It doesn't have to run at exactly the right time. ANSWER: http://codex.wordpress.org/Function_Reference/wp_schedule_event
[ "php", "wordpress" ]
4
3
2,676
4
0
2008-09-10T13:32:16.127000
2008-09-10T14:08:35.447000
54,043
54,049
Does anyone know a library for working with quantity/unit of measure pairs?
I would like to be able to do such things as var m1 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Pounds); var m2 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Liters); m1.ToKilograms(); m2.ToPounds(new Density(7.0, DensityType.PoundsPerGallon)); If there isn't something like this already, anybody interested in doing it as an open-source project?
Check out the Measurement Unit Conversion Library on The Code Project.
Does anyone know a library for working with quantity/unit of measure pairs? I would like to be able to do such things as var m1 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Pounds); var m2 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Liters); m1.ToKilograms(); m2.ToPounds(new Density(7.0, DensityType.PoundsPerGallon)); If there isn't something like this already, anybody interested in doing it as an open-source project?
TITLE: Does anyone know a library for working with quantity/unit of measure pairs? QUESTION: I would like to be able to do such things as var m1 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Pounds); var m2 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Liters); m1.ToKilograms(); m2.ToPounds(new Density(7.0, DensityType.PoundsPerGallon)); If there isn't something like this already, anybody interested in doing it as an open-source project? ANSWER: Check out the Measurement Unit Conversion Library on The Code Project.
[ "c#", ".net", "vb.net", "metric" ]
2
4
2,553
6
0
2008-09-10T13:35:13.247000
2008-09-10T13:37:53.443000
54,047
54,056
Regex's For Developers
I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003
This is a harder problem than it might at first appear, since you need to consider comment tokens inside strings, comment tokens that are themselves commented out etc. I wrote a string and comment parser for C#, let me see if I can dig out something that will help... I'll update if I find anything. EDIT:... ok, so I found my old 'codemasker' project. Turns out that I did this in stages, not with a single regex. Basically I inch through a source file looking for start tokens, when I find one I then look for an end-token and mask everything in between. This takes into account the context of the start token... if you find a token for "string start" then you can safely ignore comment tokens until you find the end of the string, and vice versa. Once the code is masked (I used guids as masks, and a hashtable to keep track) then you can safely do your search and replace, then finally restore the masked code. Hope that helps.
Regex's For Developers I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003
TITLE: Regex's For Developers QUESTION: I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003 ANSWER: This is a harder problem than it might at first appear, since you need to consider comment tokens inside strings, comment tokens that are themselves commented out etc. I wrote a string and comment parser for C#, let me see if I can dig out something that will help... I'll update if I find anything. EDIT:... ok, so I found my old 'codemasker' project. Turns out that I did this in stages, not with a single regex. Basically I inch through a source file looking for start tokens, when I find one I then look for an end-token and mask everything in between. This takes into account the context of the start token... if you find a token for "string start" then you can safely ignore comment tokens until you find the end of the string, and vice versa. Once the code is masked (I used guids as masks, and a hashtable to keep track) then you can safely do your search and replace, then finally restore the masked code. Hope that helps.
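The staged masking the answer describes (match string literals first so comment tokens inside strings are ignored, replace each match with a unique placeholder, then search) can be sketched for C-style source in Python. The regex and names below are illustrative and deliberately ignore edge cases such as raw strings, trigraphs, and line continuations:

```python
import re
import uuid

# One regex, alternation ordered so string literals win over comment
# tokens -- this is what makes "// inside a string" safe to skip.
_TOKEN = re.compile(
    r'"(?:\\.|[^"\\])*"'      # double-quoted string literal
    r"|'(?:\\.|[^'\\])*'"     # char / single-quoted literal
    r"|/\*.*?\*/"             # /* block comment */ (non-greedy)
    r"|//[^\n]*",             # // line comment
    re.DOTALL,
)

def search_outside_comments(source, needle):
    """True if `needle` occurs in code proper, not in comments or strings."""
    masks = {}
    def _mask(match):
        key = uuid.uuid4().hex  # unique placeholder, like the GUIDs above
        masks[key] = match.group(0)
        return key
    masked = _TOKEN.sub(_mask, source)
    # `masks` would let you restore the source after a search-and-replace.
    return needle in masked

code = 'int x = 1; // TODO: remove x\nchar *s = "x // not a comment";\n'
print(search_outside_comments(code, "TODO"))   # → False: only in a comment
print(search_outside_comments(code, "int x"))  # → True
```

Restoring is the reverse pass: substitute each placeholder key back with its saved text, exactly as the answer's GUID-and-hashtable approach does.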
[ "c++", "regex", "visual-c++", "utilities" ]
5
4
552
4
0
2008-09-10T13:37:15.910000
2008-09-10T13:40:23.250000
54,050
54,098
How do you get the logged in Windows domain account from an ASP.NET application?
We have an ASP.NET application that manages its own User, Roles and Permission database and we have recently added a field to the User table to hold the Windows domain account. I would like to make it so that the user doesn't have to physically log in to our application, but rather would be automatically logged in based on the currently logged in Windows domain account DOMAIN\username. We want to authenticate the Windows domain account against our own User table. This is a piece of cake to do in Windows Forms, is it possible to do this in Web Forms? I don't want the user to be prompted with a Windows challenge screen, I want our system to handle the log in. Clarification: We are using our own custom Principal object. Clarification: Not sure if it makes a difference or not, but we are using IIS7.
I did pretty much exactly what you want to do a few years ago. I'm trying to find some code for it, though it was at a previous job so that code is at home. I do remember though I used this article as my starting point. You set up the LDAP provider so you can actually run a check of the user vs the LDAP. One thing to make sure of if you try the LDAP approach: in the settings file where you set up the LDAP make sure LDAP is all caps, if it is not it will not resolve.
How do you get the logged in Windows domain account from an ASP.NET application? We have an ASP.NET application that manages its own User, Roles and Permission database and we have recently added a field to the User table to hold the Windows domain account. I would like to make it so that the user doesn't have to physically log in to our application, but rather would be automatically logged in based on the currently logged in Windows domain account DOMAIN\username. We want to authenticate the Windows domain account against our own User table. This is a piece of cake to do in Windows Forms, is it possible to do this in Web Forms? I don't want the user to be prompted with a Windows challenge screen, I want our system to handle the log in. Clarification: We are using our own custom Principal object. Clarification: Not sure if it makes a difference or not, but we are using IIS7.
TITLE: How do you get the logged in Windows domain account from an ASP.NET application? QUESTION: We have an ASP.NET application that manages its own User, Roles and Permission database and we have recently added a field to the User table to hold the Windows domain account. I would like to make it so that the user doesn't have to physically log in to our application, but rather would be automatically logged in based on the currently logged in Windows domain account DOMAIN\username. We want to authenticate the Windows domain account against our own User table. This is a piece of cake to do in Windows Forms, is it possible to do this in Web Forms? I don't want the user to be prompted with a Windows challenge screen, I want our system to handle the log in. Clarification: We are using our own custom Principal object. Clarification: Not sure if it makes a difference or not, but we are using IIS7. ANSWER: I did pretty much exactly what you want to do a few years ago. I'm trying to find some code for it, though it was at a previous job so that code is at home. I do remember though I used this article as my starting point. You set up the LDAP provider so you can actually run a check of the user vs the LDAP. One thing to make sure of if you try the LDAP approach: in the settings file where you set up the LDAP make sure LDAP is all caps, if it is not it will not resolve.
[ "asp.net", "webforms", "authentication", "ntlm" ]
8
1
5,229
8
0
2008-09-10T13:38:23.753000
2008-09-10T13:53:17.977000
54,052
161,261
How to view the contents of the Solution User Options file (.suo)
Is there any way to view the contents of the solution user options file (the .suo file that accompanies solution files)? I know it's basically formatted as a file system within the file, but I'd like to be able to view the contents so that I can figure out which aspects of my solution and customizations are causing it to grow very large over time.
The .SUO file is effectively disposable. If it's getting too large, just delete it. Visual Studio will create a fresh one. If you do want to go poking around in it, it looks like an OLE Compound Document File. You should be able to use the StgOpenStorage function to get hold of an IStorage pointer.
How to view the contents of the Solution User Options file (.suo) Is there any way to view the contents of the solution user options file (the .suo file that accompanies solution files)? I know it's basically formatted as a file system within the file, but I'd like to be able to view the contents so that I can figure out which aspects of my solution and customizations are causing it to grow very large over time.
TITLE: How to view the contents of the Solution User Options file (.suo) QUESTION: Is there any way to view the contents of the solution user options file (the .suo file that accompanies solution files)? I know it's basically formatted as a file system within the file, but I'd like to be able to view the contents so that I can figure out which aspects of my solution and customizations are causing it to grow very large over time. ANSWER: The .SUO file is effectively disposable. If it's getting too large, just delete it. Visual Studio will create a fresh one. If you do want to go poking around in it, it looks like an OLE Compound Document File. You should be able to use the StgOpenStorage function to get hold of an IStorage pointer.
[ "visual-studio", "projects-and-solutions", "suo" ]
27
12
16,401
6
0
2008-09-10T13:39:09.963000
2008-10-02T07:55:39.740000
54,068
54,090
Any restrictions on development in Vista
I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista?
Vista Home Basic only has enough IIS features to host WCF services and does not have any of the web server features for hosting static files, asp.net, etc. Here is a link to compare editions. I would recommend going with Home Premium or Ultimate depending on whether the computer will run on a domain.
Any restrictions on development in Vista I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista?
TITLE: Any restrictions on development in Vista QUESTION: I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista? ANSWER: Vista Home Basic only has enough IIS features to host WCF services and does not have any of the web server features for hosting static files, asp.net, etc. Here is a link to compare editions. I would recommend going with Home Premium or Ultimate depending on whether the computer will run on a domain.
[ "iis", "windows-vista" ]
0
0
141
3
0
2008-09-10T13:45:20.650000
2008-09-10T13:51:11.820000
54,092
54,119
How to send MMS with C#
I need to send MMS through a C# application. I have already found 2 interesting components: http://www.winwap.com http://www.nowsms.com Does anyone have experience with other third party components? Could someone explain what kind of server I need to send those MMS? Is it a classic SMTP Server?
Typically I have always done this using a 3rd party aggregator. The messages are compiled into SMIL, which is the description language for the MMS messages. These are then sent on to the aggregator who will then send them through the MMS gateway of the Network Operator. They are typically charged on a per message basis and the aggregators will buy the messages in a block from the operators. If you are trying to send an MMS message without getting charged then I am not sure how to do this, or if it is possible.
How to send MMS with C# I need to send MMS through a C# application. I have already found 2 interesting components: http://www.winwap.com http://www.nowsms.com Does anyone have experience with other third party components? Could someone explain what kind of server I need to send those MMS? Is it a classic SMTP Server?
TITLE: How to send MMS with C# QUESTION: I need to send MMS through a C# application. I have already found 2 interesting components: http://www.winwap.com http://www.nowsms.com Does anyone have experience with other third party components? Could someone explain what kind of server I need to send those MMS? Is it a classic SMTP Server? ANSWER: Typically I have always done this using a 3rd party aggregator. The messages are compiled into SMIL, which is the description language for the MMS messages. These are then sent on to the aggregator who will then send them through the MMS gateway of the Network Operator. They are typically charged on a per message basis and the aggregators will buy the messages in a block from the operators. If you are trying to send an MMS message without getting charged then I am not sure how to do this, or if it is possible.
[ "c#", "mms" ]
2
2
7,006
4
0
2008-09-10T13:51:31.743000
2008-09-10T14:06:04.953000
54,096
54,137
Secure session cookies in ASP.NET over HTTPS
I got a little curious after reading this /. article over hijacking HTTPS cookies. I tracked it down a bit, and a good resource I stumbled across lists a few ways to secure cookies here. Must I use adsutil, or will setting requireSSL in the httpCookies section of web.config cover session cookies in addition to all others ( covered here )? Is there anything else I should be considering to harden sessions further?
https://www.isecpartners.com/media/12009/web-session-management.pdf A 19 page white paper on "Secure Session Management with Cookies for Web Applications" They cover lots of security issues that I haven't seen all in one spot before. It's worth a read.
Secure session cookies in ASP.NET over HTTPS I got a little curious after reading this /. article over hijacking HTTPS cookies. I tracked it down a bit, and a good resource I stumbled across lists a few ways to secure cookies here. Must I use adsutil, or will setting requireSSL in the httpCookies section of web.config cover session cookies in addition to all others ( covered here )? Is there anything else I should be considering to harden sessions further?
TITLE: Secure session cookies in ASP.NET over HTTPS QUESTION: I got a little curious after reading this /. article over hijacking HTTPS cookies. I tracked it down a bit, and a good resource I stumbled across lists a few ways to secure cookies here. Must I use adsutil, or will setting requireSSL in the httpCookies section of web.config cover session cookies in addition to all others ( covered here )? Is there anything else I should be considering to harden sessions further? ANSWER: https://www.isecpartners.com/media/12009/web-session-management.pdf A 19 page white paper on "Secure Session Management with Cookies for Web Applications" They cover lots of security issues that I haven't seen all in one spot before. It's worth a read.
[ "asp.net", "security", "session", "cookies" ]
14
11
13,508
2
0
2008-09-10T13:53:12.283000
2008-09-10T14:12:26.170000
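For reference, the web.config route the question asks about looks like this (a sketch; adapt to your application):

```xml
<system.web>
  <!-- requireSSL: cookies are only ever sent over HTTPS;
       httpOnlyCookies: cookies are hidden from client-side script -->
  <httpCookies requireSSL="true" httpOnlyCookies="true" />
  <authentication mode="Forms">
    <!-- also flag the forms-authentication ticket cookie as Secure -->
    <forms requireSSL="true" />
  </authentication>
</system.web>
```

The `requireSSL` attribute on `<httpCookies>` applies to cookies ASP.NET issues, including the session cookie, so adsutil should not be needed for that part; it is still worth confirming with a traffic capture that the Secure flag actually appears on ASP.NET_SessionId.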
54,104
54,108
Is there any way to enable code completion for Perl in vim?
Surprisingly as you get good at vim, you can code even faster than standard IDEs such as Eclipse. But one thing I really miss is code completion, especially for long variable names and functions. Is there any way to enable code completion for Perl in vim?
Ctrl - P (Get Previous Match) and Ctrl - N (Get Next Match) are kind of pseudo code completion. They basically search the file (Backwards for Ctrl - P, Forwards for Ctrl - N ) you are editing (and any open buffers, and if you are using TAGS anything in your TAG file) for words that start with what you are typing and add a drop down list. It works surprisingly well for variables and function names, even if it isn't intellisense. Generally I use Ctrl - P as the variable or function I am looking for is usually behind in the code. Also if you keep the same copy of Vim open, it will search the files you have previously opened.
Is there any way to enable code completion for Perl in vim? Surprisingly as you get good at vim, you can code even faster than standard IDEs such as Eclipse. But one thing I really miss is code completion, especially for long variable names and functions. Is there any way to enable code completion for Perl in vim?
TITLE: Is there any way to enable code completion for Perl in vim? QUESTION: Surprisingly as you get good at vim, you can code even faster than standard IDEs such as Eclipse. But one thing I really miss is code completion, especially for long variable names and functions. Is there any way to enable code completion for Perl in vim? ANSWER: Ctrl - P (Get Previous Match) and Ctrl - N (Get Next Match) are kind of pseudo code completion. They basically search the file (Backwards for Ctrl - P, Forwards for Ctrl - N ) you are editing (and any open buffers, and if you are using TAGS anything in your TAG file) for words that start with what you are typing and add a drop down list. It works surprisingly well for variables and function names, even if it isn't intellisense. Generally I use Ctrl - P as the variable or function I am looking for is usually behind in the code. Also if you keep the same copy of Vim open, it will search the files you have previously opened.
[ "perl", "vim", "vi" ]
26
32
10,188
8
0
2008-09-10T13:57:07.983000
2008-09-10T13:59:16.263000
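The CTRL-P / CTRL-N matching described in the answer is driven by vim's 'complete' option; a sketch of the relevant settings (the values shown are vim's defaults plus the tags-file scan):

```vim
" Where CTRL-P / CTRL-N look for completion candidates:
"   .  current buffer     w  buffers in other windows
"   b  other loaded buffers    u  unloaded buffers    t  tags file
set complete=.,w,b,u,t

" Generate the tags file for a Perl project (requires Exuberant Ctags):
"   :!ctags -R --languages=Perl .
```

With a tags file present, completion then also offers subroutine and package names from files you have never opened in this session.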
54,118
54,132
What is the best way to populate a menu control on a Master Page?
Database? Page variables? Enum? I'm looking for opinions here.
The ASP.NET Sitemap feature is built for that and works well in a lot of cases. If you get in a spot where you want your Menu to look different from your Sitemap, here are some workarounds. If you have a dynamic site structure, you can create a custom sitemap provider. You might get to the point where it's more trouble than it's worth, but in general populating your menu from your sitemap gives you some nice features like security trimming, in which the menu options are appropriate for the logged-in user.
What is the best way to populate a menu control on a Master Page? Database? Page variables? Enum? I'm looking for opinions here.
TITLE: What is the best way to populate a menu control on a Master Page? QUESTION: Database? Page variables? Enum? I'm looking for opinions here. ANSWER: The ASP.NET Sitemap feature is built for that and works well in a lot of cases. If you get in a spot where you want your Menu to look different from your Sitemap, here are some workarounds. If you have a dynamic site structure, you can create a custom sitemap provider. You might get to the point where it's more trouble than it's worth, but in general populating your menu from your sitemap gives you some nice features like security trimming, in which the menu options are appropriate for the logged-in user.
[ ".net-3.5", "master-pages", "sitemap", "menu" ]
3
4
3,393
8
0
2008-09-10T14:05:56.933000
2008-09-10T14:10:12.820000
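A minimal sketch of the Sitemap approach the answer describes (the page names and control IDs here are invented):

```xml
<!-- Web.sitemap -->
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
  <siteMapNode url="~/Default.aspx" title="Home">
    <siteMapNode url="~/Products.aspx" title="Products" />
    <siteMapNode url="~/About.aspx" title="About" />
  </siteMapNode>
</siteMap>

<!-- In the master page: bind a Menu to the default XmlSiteMapProvider -->
<asp:SiteMapDataSource ID="MenuSource" runat="server" ShowStartingNode="false" />
<asp:Menu ID="MainMenu" runat="server" DataSourceID="MenuSource" Orientation="Horizontal" />
```

Security trimming is then enabled on the provider in web.config (`securityTrimmingEnabled="true"`), which hides nodes the current user cannot reach.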
54,138
76,943
Using jQuery to beautify someone else's html
I have a third-party app that creates HTML-based reports that I need to display. I have some control over how they look, but in general it's pretty primitive. I can inject some javascript, though. I'd like to try to inject some jQuery goodness into it to tidy it up some. One specific thing I would like to do is to take a table (an actual HTML ) that always contains one row and a variable number of columns and magically convert that into a tabbed view where the contents (always one that I can supply an ID if necessary) of each original table cell represents a sheet in the tabbed view. I haven't found any good (read: simple) examples of re-parenting items like this, so I'm not sure where to begin. Can someone provide some hints on how I might try this?
Given a html page like this: ` This is the contents of Column One This is the contents of Column Two This is the contents of Column Three Contents of Column Four blah blah Column Five is here the following jQuery code converts the table cells into tabs (tested in FF 3 and IE 7) $(document).ready(function() { var tabCounter = 1; $("#my-table").after(" "); $("#my-table div").appendTo("#tab-container").each(function() { var id = "fragment-" + tabCounter; $(this).attr("id", id); $("#tab-list").append(" Tab " + tabCounter + " "); tabCounter++; }); $("#tab-container > ul").tabs(); }); To get this to work I referenced the following jQuery files jquery-latest.js ui.core.js ui.tabs.js And I referenced the flora.all.css stylesheet. Basically I copied the header section from the jQuery tab example
Using jQuery to beautify someone else's html I have a third-party app that creates HTML-based reports that I need to display. I have some control over how they look, but in general it's pretty primitive. I can inject some javascript, though. I'd like to try to inject some jQuery goodness into it to tidy it up some. One specific thing I would like to do is to take a table (an actual HTML ) that always contains one row and a variable number of columns and magically convert that into a tabbed view where the contents (always one that I can supply an ID if necessary) of each original table cell represents a sheet in the tabbed view. I haven't found any good (read: simple) examples of re-parenting items like this, so I'm not sure where to begin. Can someone provide some hints on how I might try this?
TITLE: Using jQuery to beautify someone else's html QUESTION: I have a third-party app that creates HTML-based reports that I need to display. I have some control over how they look, but in general it's pretty primitive. I can inject some javascript, though. I'd like to try to inject some jQuery goodness into it to tidy it up some. One specific thing I would like to do is to take a table (an actual HTML ) that always contains one row and a variable number of columns and magically convert that into a tabbed view where the contents (always one that I can supply an ID if necessary) of each original table cell represents a sheet in the tabbed view. I haven't found any good (read: simple) examples of re-parenting items like this, so I'm not sure where to begin. Can someone provide some hints on how I might try this? ANSWER: Given a html page like this: ` This is the contents of Column One This is the contents of Column Two This is the contents of Column Three Contents of Column Four blah blah Column Five is here the following jQuery code converts the table cells into tabs (tested in FF 3 and IE 7) $(document).ready(function() { var tabCounter = 1; $("#my-table").after(" "); $("#my-table div").appendTo("#tab-container").each(function() { var id = "fragment-" + tabCounter; $(this).attr("id", id); $("#tab-list").append(" Tab " + tabCounter + " "); tabCounter++; }); $("#tab-container > ul").tabs(); }); To get this to work I referenced the following jQuery files jquery-latest.js ui.core.js ui.tabs.js And I referenced the flora.all.css stylesheet. Basically I copied the header section from the jQuery tab example
[ "jquery", "html" ]
2
4
1,549
6
0
2008-09-10T14:12:30.740000
2008-09-16T20:53:38.900000
54,142
54,146
How does the comma operator work, and what precedence does it have?
How does the comma operator work in C++? For instance, if I do: a = b, c; Does a end up equaling b or c? (Yes, I know this is easy to test - just documenting on here for someone to find the answer quickly.) Update: This question has exposed a nuance when using the comma operator. Just to document this: a = b, c; // a is set to the value of b! a = (b, c); // a is set to the value of c! This question was actually inspired by a typo in code. What was intended to be: a = b; c = d; Turned into: a = b, // <- Note comma typo! c = d;
It would be equal to b. The comma operator has a lower precedence than assignment.
How does the comma operator work, and what precedence does it have? How does the comma operator work in C++? For instance, if I do: a = b, c; Does a end up equaling b or c? (Yes, I know this is easy to test - just documenting on here for someone to find the answer quickly.) Update: This question has exposed a nuance when using the comma operator. Just to document this: a = b, c; // a is set to the value of b! a = (b, c); // a is set to the value of c! This question was actually inspired by a typo in code. What was intended to be a = b; c = d; Turned into a = b, // <- Note comma typo! c = d;
TITLE: How does the comma operator work, and what precedence does it have? QUESTION: How does the comma operator work in C++? For instance, if I do: a = b, c; Does a end up equaling b or c? (Yes, I know this is easy to test - just documenting on here for someone to find the answer quickly.) Update: This question has exposed a nuance when using the comma operator. Just to document this: a = b, c; // a is set to the value of b! a = (b, c); // a is set to the value of c! This question was actually inspired by a typo in code. What was intended to be a = b; c = d; Turned into a = b, // <- Note comma typo! c = d; ANSWER: It would be equal to b. The comma operator has a lower precedence than assignment.
[ "c++", "operator-precedence", "comma-operator" ]
199
78
69,074
9
0
2008-09-10T14:13:39.550000
2008-09-10T14:14:27.910000
54,169
54,191
What do you think will be the level of usage of Silverlight 1 year from now?
There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now?
They were saying they were getting 1.5 million downloads per day back in March 2008, and that was before the Olympics and the Democratic National Convention. So, unless my math is off, that's more than 4 people. I'd expect to see it show up as a recommended Windows update, and possibly included with IE8 or something in the future.
What do you think will be the level of usage of Silverlight 1 year from now? There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now?
TITLE: What do you think will be the level of usage of Silverlight 1 year from now? QUESTION: There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now? ANSWER: They were saying they were getting 1.5 million downloads per day back in March 2008, and that was before the Olympics and the Democratic National Convention. So, unless my math is off, that's more than 4 people. I'd expect to see it show up as a recommended Windows update, and possibly included with IE8 or something in the future.
[ "silverlight" ]
1
5
621
12
0
2008-09-10T14:21:22.200000
2008-09-10T14:28:23.963000
54,176
66,519
What are the advantages/disadvantages of using a CTE?
I'm looking at improving the performance of some SQL, currently CTEs are being used and referenced multiple times in the script. Would I get improvements using a table variable instead? (Can't use a temporary table as the code is within functions).
You'll really have to performance test - there is no Yes/No answer. As the post Andy Living links to above explains, a CTE is just shorthand for a query or subquery. If you are calling it twice or more in the same function, you might get better performance if you fill a table variable and then join to/select from that. However, as table variables take up space somewhere, and don't have indexes/statistics (with the exception of any declared primary key on the table variable), there's no way of saying which will be faster. They both have costs and savings, and which is the best way depends on the data they pull in and what they do with it. I've been in your situation, and after testing for speed under various conditions - some functions used CTEs, and others used table variables.
What are the advantages/disadvantages of using a CTE? I'm looking at improving the performance of some SQL, currently CTEs are being used and referenced multiple times in the script. Would I get improvements using a table variable instead? (Can't use a temporary table as the code is within functions).
TITLE: What are the advantages/disadvantages of using a CTE? QUESTION: I'm looking at improving the performance of some SQL, currently CTEs are being used and referenced multiple times in the script. Would I get improvements using a table variable instead? (Can't use a temporary table as the code is within functions). ANSWER: You'll really have to performance test - there is no Yes/No answer. As the post Andy Living links to above explains, a CTE is just shorthand for a query or subquery. If you are calling it twice or more in the same function, you might get better performance if you fill a table variable and then join to/select from that. However, as table variables take up space somewhere, and don't have indexes/statistics (with the exception of any declared primary key on the table variable), there's no way of saying which will be faster. They both have costs and savings, and which is the best way depends on the data they pull in and what they do with it. I've been in your situation, and after testing for speed under various conditions - some functions used CTEs, and others used table variables.
[ "sql", "sql-server", "common-table-expression" ]
9
10
30,260
5
0
2008-09-10T14:22:33.337000
2008-09-15T20:17:54.313000
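To make the trade-off concrete, a T-SQL sketch (the table and column names are invented): referencing a CTE twice inlines its defining query twice, while a table variable is filled once and reused:

```sql
-- CTE: each of the two references below re-expands the definition,
-- so the aggregation over dbo.Orders may be evaluated twice.
WITH Totals AS (
    SELECT CustomerID, SUM(Amount) AS Spend
    FROM dbo.Orders
    GROUP BY CustomerID
)
SELECT CustomerID, Spend
FROM Totals
WHERE Spend > (SELECT AVG(Spend) FROM Totals);

-- Table variable: the work is done once, then reused. Legal inside
-- functions, but it has no statistics and only the indexing implied
-- by declared keys, so the optimizer's estimates can be poor.
DECLARE @Totals TABLE (CustomerID int PRIMARY KEY, Spend money);
INSERT INTO @Totals
SELECT CustomerID, SUM(Amount) FROM dbo.Orders GROUP BY CustomerID;

SELECT CustomerID, Spend
FROM @Totals
WHERE Spend > (SELECT AVG(Spend) FROM @Totals);
```

Whether the saved re-evaluation outweighs the lost statistics depends on the data, which is why the answer recommends measuring both.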
54,179
54,206
What should be considered when building a Recommendation Engine?
I've read the book Programming Collective Intelligence and found it fascinating. I'd recently heard about a challenge amazon had posted to the world to come up with a better recommendation engine for their system. The winner apparently produced the best algorithm by limiting the amount of information that was being fed to it. As a first rule of thumb I guess... "More information is not necessarily better when it comes to fuzzy algorithms." I know it's subjective, but ultimately it's a measurable thing (clicks in response to recommendations). Since most of us are dealing with the web these days and search can be considered a form of recommendation... I suspect I'm not the only one who'd appreciate other people's ideas on this. In a nutshell, "What is the best way to build a recommendation?"
You don't want to use "overall popularity" unless you have no information about the user. Instead, you want to align this user with similar users and weight accordingly. This is exactly what Bayesian Inference does. In English, it means adjusting the overall probability you'll like something (the average rating) with ratings from other people who generally vote your way as well. Another piece of advice, but this time ad hoc: I find that there are people where if they like something I will almost assuredly not like it. I don't know if this effect is real or imagined, but it might be fun to build in a kind of "negative effect" instead of just clumping people by similarity. Finally there's a company specializing in exactly this called SenseArray. The owner ( Ian Clarke of freenet fame ) is very approachable. You can use my name if you call him up.
What should be considered when building a Recommendation Engine? I've read the book Programming Collective Intelligence and found it fascinating. I'd recently heard about a challenge amazon had posted to the world to come up with a better recommendation engine for their system. The winner apparently produced the best algorithm by limiting the amount of information that was being fed to it. As a first rule of thumb I guess... "More information is not necessarily better when it comes to fuzzy algorithms." I know it's subjective, but ultimately it's a measurable thing (clicks in response to recommendations). Since most of us are dealing with the web these days and search can be considered a form of recommendation... I suspect I'm not the only one who'd appreciate other people's ideas on this. In a nutshell, "What is the best way to build a recommendation?"
TITLE: What should be considered when building a Recommendation Engine? QUESTION: I've read the book Programming Collective Intelligence and found it fascinating. I'd recently heard about a challenge amazon had posted to the world to come up with a better recommendation engine for their system. The winner apparently produced the best algorithm by limiting the amount of information that was being fed to it. As a first rule of thumb I guess... "More information is not necessarily better when it comes to fuzzy algorithms." I know it's subjective, but ultimately it's a measurable thing (clicks in response to recommendations). Since most of us are dealing with the web these days and search can be considered a form of recommendation... I suspect I'm not the only one who'd appreciate other people's ideas on this. In a nutshell, "What is the best way to build a recommendation?" ANSWER: You don't want to use "overall popularity" unless you have no information about the user. Instead, you want to align this user with similar users and weight accordingly. This is exactly what Bayesian Inference does. In English, it means adjusting the overall probability you'll like something (the average rating) with ratings from other people who generally vote your way as well. Another piece of advice, but this time ad hoc: I find that there are people where if they like something I will almost assuredly not like it. I don't know if this effect is real or imagined, but it might be fun to build in a kind of "negative effect" instead of just clumping people by similarity. Finally there's a company specializing in exactly this called SenseArray. The owner ( Ian Clarke of freenet fame ) is very approachable. You can use my name if you call him up.
[ "algorithm", "recommendation-engine" ]
19
5
4,412
4
0
2008-09-10T14:23:16.690000
2008-09-10T14:31:42.143000
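The similarity-weighted idea in the answer can be sketched in a few lines of Python. This is a crude stand-in for real Bayesian inference; the rating scale and the particular similarity measure are arbitrary choices made up for the example:

```python
def similarity(a, b):
    """Agreement between two users' rating dicts: 1 / (1 + mean absolute
    difference) over the items both rated, or 0.0 if they share none."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    mean_diff = sum(abs(a[i] - b[i]) for i in common) / len(common)
    return 1.0 / (1.0 + mean_diff)

def predict_rating(target, others, item):
    """Similarity-weighted average of the other users' ratings of `item`.
    Falls back to plain popularity (an unweighted mean) when no rater
    resembles the target at all; returns None when nobody rated it."""
    scored = [(similarity(target, u), u[item]) for u in others if item in u]
    if not scored:
        return None
    total_weight = sum(w for w, _ in scored)
    if total_weight == 0:
        return sum(r for _, r in scored) / len(scored)  # overall popularity
    return sum(w * r for w, r in scored) / total_weight
```

The "negative effect" the answer mentions could be modeled by letting the similarity score go negative for systematically opposed users, so their votes actively push the prediction the other way.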
54,184
54,245
Best tool to monitor network connection bandwidth
I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only
Try NetLimiter, which is great for that and also allows you to limit bandwidth usage so that you can test your app in reduced bandwidth scenarios.
Best tool to monitor network connection bandwidth I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only
TITLE: Best tool to monitor network connection bandwidth QUESTION: I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only ANSWER: Try NetLimiter, which is great for that and also allows you to limit bandwidth usage so that you can test your app in reduced bandwidth scenarios.
[ "windows", "networking" ]
5
8
3,870
1
0
2008-09-10T14:26:00.787000
2008-09-10T14:42:45.410000
54,188
54,444
Are C++ Reads and Writes of an int Atomic?
I have two threads, one updating an int and one reading it. This is a statistic value where the order of the reads and writes is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen. For example, think of a value = 0x0000FFFF that gets incremented to a value of 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more likely something like this is to happen. I've always synchronized these types of accesses, but was curious what the community thinks.
At first one might think that reads and writes of the native machine size are atomic but there are a number of issues to deal with including cache coherency between processors/cores. Use atomic operations like Interlocked* on Windows and the equivalent on Linux. C++0x will have an "atomic" template to wrap these in a nice and cross-platform interface. For now if you are using a platform abstraction layer it may provide these functions. ACE does, see the class template ACE_Atomic_Op.
Are C++ Reads and Writes of an int Atomic? I have two threads, one updating an int and one reading it. This is a statistic value where the order of the reads and writes is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen. For example, think of a value = 0x0000FFFF that gets incremented to a value of 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more likely something like this is to happen. I've always synchronized these types of accesses, but was curious what the community thinks.
TITLE: Are C++ Reads and Writes of an int Atomic? QUESTION: I have two threads, one updating an int and one reading it. This is a statistic value where the order of the reads and writes is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen. For example, think of a value = 0x0000FFFF that gets incremented to a value of 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more likely something like this is to happen. I've always synchronized these types of accesses, but was curious what the community thinks. ANSWER: At first one might think that reads and writes of the native machine size are atomic but there are a number of issues to deal with including cache coherency between processors/cores. Use atomic operations like Interlocked* on Windows and the equivalent on Linux. C++0x will have an "atomic" template to wrap these in a nice and cross-platform interface. For now if you are using a platform abstraction layer it may provide these functions. ACE does, see the class template ACE_Atomic_Op.
[ "c++", "multithreading", "synchronization", "atomic" ]
97
48
40,877
16
0
2008-09-10T14:26:41.423000
2008-09-10T15:37:49.553000
54,199
56,337
How to implement Repository pattern with LinqToEntities?
How to implement the Repository pattern with LinqToEntities, and how to implement the interface?
I do the following: A service layer contains my business objects. It is passed the repository via an Inversion of Control container (Castle Windsor is my usual choice). The repository is in charge of mapping between the business objects and my entity framework objects. The advantages: You have no problems with object state or the context of the EF objects because you are just loading them during data manipulation on the repository side. This eases the situation when passing them to WCF/Web-Services. The disadvantages: You are losing some of the tracking functionality of Entity Framework, and you have to manually load the data objects (EF objects) and possibly, if required, perform optimistic concurrency checks manually (via a timestamp on the business object, for example). But generally I prefer this solution, because it is possible to later change the repository. It allows me to have different repositories (for example my user object is actually using the ASPNetAuthenticationRepository instead of the EntityFrameworkRepository) but for my service layer it's transparent. With regards to the interface, I would use the business objects from the service layer as your parameter objects and don't let those EF objects out of the repository layer. Hope that helps
How to implement Repository pattern with LinqToEntities? How to implement the Repository pattern with LinqToEntities, and how to implement the interface?
TITLE: How to implement Repository pattern with LinqToEntities? QUESTION: How to implement the Repository pattern with LinqToEntities, and how to implement the interface? ANSWER: I do the following: A service layer contains my business objects. It is passed the repository via an Inversion of Control container (Castle Windsor is my usual choice). The repository is in charge of mapping between the business objects and my entity framework objects. The advantages: You have no problems with object state or the context of the EF objects because you are just loading them during data manipulation on the repository side. This eases the situation when passing them to WCF/Web-Services. The disadvantages: You are losing some of the tracking functionality of Entity Framework, and you have to manually load the data objects (EF objects) and possibly, if required, perform optimistic concurrency checks manually (via a timestamp on the business object, for example). But generally I prefer this solution, because it is possible to later change the repository. It allows me to have different repositories (for example my user object is actually using the ASPNetAuthenticationRepository instead of the EntityFrameworkRepository) but for my service layer it's transparent. With regards to the interface, I would use the business objects from the service layer as your parameter objects and don't let those EF objects out of the repository layer. Hope that helps
[ ".net", ".net-3.5", "linq-to-entities" ]
2
1
1,334
2
0
2008-09-10T14:30:59.977000
2008-09-11T11:44:13.007000
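A sketch of the shape the answer describes (the type and member names are illustrative, not a canonical API): the service layer depends only on an interface over business objects, and an EF-backed class implements it behind the IoC container:

```csharp
// What the service layer sees: business objects only,
// no Entity Framework types leak through.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);   // maps back to EF entities internally
}

// One implementation backed by LINQ to Entities; the service layer
// receives it via the IoC container (Castle Windsor in the answer).
public class EntityFrameworkCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        using (var context = new ShopEntities())   // hypothetical EF context
        {
            var entity = context.Customers.First(c => c.Id == id);
            // Map the EF entity to a plain business object at the boundary.
            return new Customer { Id = entity.Id, Name = entity.Name };
        }
    }

    public void Save(Customer customer)
    {
        // Load the EF entity, copy fields back, optionally compare a
        // timestamp for optimistic concurrency, then SaveChanges().
        throw new NotImplementedException();
    }
}
```

Swapping in a different implementation (such as the answer's ASPNetAuthenticationRepository) is then just a container registration change.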
54,200
54,213
Encrypting appSettings in web.config
I am developing a web app which requires a username and password to be stored in the web.config; it also refers to some URLs which will be requested by the web app itself and never the client. I know the .NET Framework will not allow a web.config file to be served, however I still think it's bad practice to leave this sort of information in plain text. Everything I have read so far requires me to use a command line switch or to store values in the registry of the server. I have access to neither of these as the host is online and I have only FTP and Control Panel (helm) access. Can anyone recommend any good, free encryption DLLs or methods which I can use? I'd rather not develop my own! Thanks for the feedback so far guys, but I am not able to issue commands and am not able to edit the registry. It's going to have to be an encryption util/helper, but just wondering which one!
Encrypting and Decrypting Configuration Sections (ASP.NET) on MSDN Encrypting Web.Config Values in ASP.NET 2.0 on ScottGu's blog Encrypting Custom Configuration Sections on K. Scott Allen's blog EDIT: If you can't use the aspnet_regiis utility, you can encrypt the config file using the SectionInformation.ProtectSection method. Sample on codeproject: Encryption of Connection Strings inside the Web.config in ASP.Net 2.0
Encrypting appSettings in web.config I am developing a web app which requires a username and password to be stored in the web.config; it also refers to some URLs which will be requested by the web app itself and never the client. I know the .NET Framework will not allow a web.config file to be served, however I still think it's bad practice to leave this sort of information in plain text. Everything I have read so far requires me to use a command line switch or to store values in the registry of the server. I have access to neither of these as the host is online and I have only FTP and Control Panel (helm) access. Can anyone recommend any good, free encryption DLLs or methods which I can use? I'd rather not develop my own! Thanks for the feedback so far guys, but I am not able to issue commands and am not able to edit the registry. It's going to have to be an encryption util/helper, but just wondering which one!
TITLE: Encrypting appSettings in web.config QUESTION: I am developing a web app which requires a username and password to be stored in the web.config; it also refers to some URLs which will be requested by the web app itself and never the client. I know the .NET Framework will not allow a web.config file to be served, however I still think it's bad practice to leave this sort of information in plain text. Everything I have read so far requires me to use a command line switch or to store values in the registry of the server. I have access to neither of these as the host is online and I have only FTP and Control Panel (helm) access. Can anyone recommend any good, free encryption DLLs or methods which I can use? I'd rather not develop my own! Thanks for the feedback so far guys, but I am not able to issue commands and am not able to edit the registry. It's going to have to be an encryption util/helper, but just wondering which one! ANSWER: Encrypting and Decrypting Configuration Sections (ASP.NET) on MSDN Encrypting Web.Config Values in ASP.NET 2.0 on ScottGu's blog Encrypting Custom Configuration Sections on K. Scott Allen's blog EDIT: If you can't use the aspnet_regiis utility, you can encrypt the config file using the SectionInformation.ProtectSection method. Sample on codeproject: Encryption of Connection Strings inside the Web.config in ASP.Net 2.0
[ ".net", "security", "encryption", "web-applications", "appsettings" ]
15
21
39,505
4
0
2008-09-10T14:31:04.657000
2008-09-10T14:33:39.877000
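The SectionInformation.ProtectSection route from the answer's edit can run from inside the application itself (for example a one-off, admin-only page), which fits the FTP-only hosting constraint. A sketch:

```csharp
using System.Web.Configuration;

// Run once from within the web app -- no command line needed.
var config = WebConfigurationManager.OpenWebConfiguration("~");
var section = config.GetSection("appSettings");
if (section != null && !section.SectionInformation.IsProtected)
{
    // DPAPI ties the encryption to this machine's key; use
    // "RsaProtectedConfigurationProvider" instead if the site
    // runs on a web farm.
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    config.Save();
}

// Reads stay unchanged -- ASP.NET decrypts transparently:
// string user = WebConfigurationManager.AppSettings["username"];
```

Saving requires write permission on web.config for the app-pool identity, which is worth checking with a shared host before relying on this.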
54,207
307,039
Soap logging in .net
I have an internal enterprise app that currently consumes 10 different web services. They're consumed via old style "Web References" instead of using WCF. The problem I'm having is trying to work with the other teams in the company who are authoring the services I'm consuming. I found I needed to capture the exact SOAP messages that I'm sending and receiving. I did this by creating a new attribute that extends SoapExtensionAttribute. I then just add that attribute to the service method in the generated Reference.cs file. This works, but is painful for two reasons. First, it's a generated file so anything I do in there can be overwritten. Second, I have to remember to remove the attribute before checking in the file. Is there a better way to capture the exact SOAP messages that I am sending and receiving?
This seems to be a common question, as I just asked it and was told to look here. You don't have to edit the generated Reference.cs. You can reference the extension in your application's app.config.
Soap logging in .net I have an internal enterprise app that currently consumes 10 different web services. They're consumed via old style "Web References" instead of using WCF. The problem I'm having is trying to work with the other teams in the company who are authoring the services I'm consuming. I found I needed to capture the exact SOAP messages that I'm sending and receiving. I did this by creating a new attribute that extends SoapExtensionAttribute. I then just add that attribute to the service method in the generated Reference.cs file. This works, but is painful for two reasons. First, it's a generated file so anything I do in there can be overwritten. Second, I have to remember to remove the attribute before checking in the file. Is there a better way to capture the exact SOAP messages that I am sending and receiving?
TITLE: Soap logging in .net QUESTION: I have an internal enterprise app that currently consumes 10 different web services. They're consumed via old style "Web References" instead of using WCF. The problem I'm having is trying to work with the other teams in the company who are authoring the services I'm consuming. I found I needed to capture the exact SOAP messages that I'm sending and receiving. I did this by creating a new attribute that extends SoapExtensionAttribute. I then just add that attribute to the service method in the generated Reference.cs file. This works, but is painful for two reasons. First, it's a generated file so anything I do in there can be overwritten. Second, I have to remember to remove the attribute before checking in the file. Is there a better way to capture the exact SOAP messages that I am sending and receiving? ANSWER: This seems to be a common question, as I just asked it and was told to look here. You don't have to edit the generated Reference.cs. You can reference the extension in your application's app.config.
[ "c#", "web-services", "soap" ]
4
4
5,164
5
0
2008-09-10T14:32:19.553000
2008-11-20T22:11:24.283000
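A hedged sketch of the app.config registration the answer describes — the soapExtensionTypes element applies a SoapExtension to every ASMX call without touching Reference.cs. The MyNamespace.TraceExtension type and MyAssembly name are placeholders for your own extension class:

```xml
<configuration>
  <system.web>
    <webServices>
      <soapExtensionTypes>
        <add type="MyNamespace.TraceExtension, MyAssembly" priority="1" group="0" />
      </soapExtensionTypes>
    </webServices>
  </system.web>
</configuration>
```

Because the registration lives in config rather than generated code, it survives proxy regeneration and can be removed before release without a code change.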
54,217
54,241
Ajax XMLHttpRequest object limit
Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another?
I don't think so, but there's a limit of two simultaneous HTTP connections per domain per client (you can override this in Firefox, but practically no one does so).
Ajax XMLHttpRequest object limit Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another?
TITLE: Ajax XMLHttpRequest object limit QUESTION: Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another? ANSWER: I don't think so, but there's a limit of two simultaneous HTTP connections per domain per client (you can override this in Firefox, but practically no one does so).
[ "ajax" ]
3
1
1,062
3
0
2008-09-10T14:34:44.103000
2008-09-10T14:42:28.437000
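The practical consequence of the per-domain connection cap is that AJAX libraries queue requests rather than firing them all at once. As an illustrative sketch (Python, purely to model the idea — the cap of two is the historical browser default the answer mentions), a semaphore captures the behavior:

```python
import threading

# Browsers historically allowed ~2 simultaneous HTTP connections per
# domain, so extra requests wait in a queue. A semaphore models that cap.
MAX_PER_DOMAIN = 2
_slots = threading.Semaphore(MAX_PER_DOMAIN)

def limited_request(do_request):
    """Run do_request(), but never more than MAX_PER_DOMAIN at once."""
    with _slots:
        return do_request()

print(limited_request(lambda: "response"))  # response
```

You can create as many XMLHttpRequest objects as memory allows; it is the in-flight connections, not the objects, that are limited.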
54,219
54,258
How should I handle a situation where I need to store several unrelated types but provide specific types on demand?
I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the foreseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item, so I can see why the original designer did not relate them through inheritance. 
This means that the unified error collection would need to store ErrorItem<object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: public class ErrorCollection { public ErrorItem<object>[] AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool?
If A, B, C and D have nothing in common then adding a base class won't really get you anything. It will just be an empty class and in effect will be the same as object. I'd just create an ErrorItem class without the generics, make Item an object and do some casting when you want to use the objects referenced. If you want to use any of the properties or methods of the A, B, C or D class other than the Guid you would have had to cast them anyway.
How should I handle a situation where I need to store several unrelated types but provide specific types on demand? I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the foreseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). 
This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item, so I can see why the original designer did not relate them through inheritance. This means that the unified error collection would need to store ErrorItem<object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: public class ErrorCollection { public ErrorItem<object>[] AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool?
TITLE: How should I handle a situation where I need to store several unrelated types but provide specific types on demand? QUESTION: I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the foreseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). 
This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item, so I can see why the original designer did not relate them through inheritance. This means that the unified error collection would need to store ErrorItem<object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: public class ErrorCollection { public ErrorItem<object>[] AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool? ANSWER: If A, B, C and D have nothing in common then adding a base class won't really get you anything. It will just be an empty class and in effect will be the same as object. I'd just create an ErrorItem class without the generics, make Item an object and do some casting when you want to use the objects referenced. If you want to use any of the properties or methods of the A, B, C or D class other than the Guid you would have had to cast them anyway.
[ ".net", "generics" ]
1
0
121
3
0
2008-09-10T14:36:26.547000
2008-09-10T14:47:35.383000
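The Dictionary-keyed-by-type idea the question floats is language-agnostic, so here is a minimal sketch of it in Python (class names A and the metadata shape are stand-ins for the question's four unrelated classes): store every error in one collection internally keyed by the wrapped item's type, and expose a typed getter instead of forcing callers to filter.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ErrorItem:
    item: Any        # the deserialized A/B/C/D object
    metadata: str    # whatever error details the editor tracks

class ErrorCollection:
    def __init__(self):
        self._by_type = {}  # maps type -> list of ErrorItem

    def add(self, error_item):
        # Key on the concrete type of the wrapped item, not on a base class.
        self._by_type.setdefault(type(error_item.item), []).append(error_item)

    def get(self, cls):
        """Return all ErrorItems whose wrapped item is of type cls."""
        return list(self._by_type.get(cls, []))

class A:  # stand-in for one of the four unrelated classes
    pass

coll = ErrorCollection()
coll.add(ErrorItem(A(), "missing child node"))
print(len(coll.get(A)))  # 1
```

In C# the equivalent would be a Dictionary<Type, List<ErrorItem>> with a GetItems<T>() method that casts on the way out, which matches the answer's "make Item an object and cast" advice while keeping the casting in one place.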
54,221
54,234
How would I allow a user to stream video to a web application for storage?
I'd like to add some functionality to a site that would allow users to record video using their webcam and easily store it online. I don't even know if this is possible right now, but I think flash has access to local webcams running through the browser. Do you have any suggestions or resources to get me started on this? I'm primarily a java developer so if I could do it in an applet that would be great, but it may be easier to accomplish this using flash or some other technology. This would mean streaming the video back to the webserver and storing it there. Uploading a file is easy enough, but I'd rather the user not have to deal with that if it's possible. Just to be clear: I'm not talking about uploading a video. I'm talking about allowing the user to click "record" in a web application and having the video streamed to the server and stored when the user clicks "stop".
This is possible with Adobe's Flash Media Server. Red5 is an open source alternative. The protocol they use for this is RTMP.
How would I allow a user to stream video to a web application for storage? I'd like to add some functionality to a site that would allow users to record video using their webcam and easily store it online. I don't even know if this is possible right now, but I think flash has access to local webcams running through the browser. Do you have any suggestions or resources to get me started on this? I'm primarily a java developer so if I could do it in an applet that would be great, but it may be easier to accomplish this using flash or some other technology. This would mean streaming the video back to the webserver and storing it there. Uploading a file is easy enough, but I'd rather the user not have to deal with that if it's possible. Just to be clear: I'm not talking about uploading a video. I'm talking about allowing the user to click "record" in a web application and having the video streamed to the server and stored when the user clicks "stop".
TITLE: How would I allow a user to stream video to a web application for storage? QUESTION: I'd like to add some functionality to a site that would allow users to record video using their webcam and easily store it online. I don't even know if this is possible right now, but I think flash has access to local webcams running through the browser. Do you have any suggestions or resources to get me started on this? I'm primarily a java developer so if I could do it in an applet that would be great, but it may be easier to accomplish this using flash or some other technology. This would mean streaming the video back to the webserver and storing it there. Uploading a file is easy enough, but I'd rather the user not have to deal with that if it's possible. Just to be clear: I'm not talking about uploading a video. I'm talking about allowing the user to click "record" in a web application and having the video streamed to the server and stored when the user clicks "stop". ANSWER: This is possible with Adobe's Flash Media Server. Red5 is an open source alternative. The protocol they use for this is RTMP.
[ "java", "flash", "applet", "web-applications" ]
1
2
591
3
0
2008-09-10T14:37:10.010000
2008-09-10T14:41:21.420000
54,222
54,360
Accessing System Databases/Tables using LINQ to SQL?
Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL?
If you're in Server Explorer, you can make them visible this way: Create a connection to the server you want. Right-click the server and choose Change View > Object Type. You should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface.
Accessing System Databases/Tables using LINQ to SQL? Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL?
TITLE: Accessing System Databases/Tables using LINQ to SQL? QUESTION: Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL? ANSWER: If you're in Server Explorer, you can make them visible this way: Create a connection to the server you want. Right-click the server and choose Change View > Object Type. You should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface.
[ "c#", "sql-server", "linq", "linq-to-sql" ]
10
23
5,269
2
0
2008-09-10T14:37:24.907000
2008-09-10T15:16:50.600000
54,227
54,273
Asp.net path compaction
I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up?
You could create a helper class which wraps the UriBuilder class in System.Net: public static class UriHelper { public static string NormalizeRelativePath(string path) { UriBuilder builder = new UriBuilder("http://localhost"); builder.Path = path; return builder.Uri.AbsolutePath; } } which could then be used like this: string url = "foo/bar/../bar/path.aspx"; Console.WriteLine(UriHelper.NormalizeRelativePath(url)); It is a bit hacky but it would work for the specific example you gave. EDIT: Updated to reflect Andrew's comments.
Asp.net path compaction I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up?
TITLE: Asp.net path compaction QUESTION: I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up? ANSWER: You could create a helper class which wraps the UriBuilder class in System.Net: public static class UriHelper { public static string NormalizeRelativePath(string path) { UriBuilder builder = new UriBuilder("http://localhost"); builder.Path = path; return builder.Uri.AbsolutePath; } } which could then be used like this: string url = "foo/bar/../bar/path.aspx"; Console.WriteLine(UriHelper.NormalizeRelativePath(url)); It is a bit hacky but it would work for the specific example you gave. EDIT: Updated to reflect Andrew's comments.
[ "c#", "asp.net", "path" ]
2
4
468
3
0
2008-09-10T14:38:33.703000
2008-09-10T14:52:42.983000
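What the UriBuilder trick actually does is collapse "." and ".." dot-segments in the path. The same normalization can be illustrated outside .NET; a minimal Python sketch using the standard library:

```python
import posixpath

def normalize_url_path(path):
    # Collapse "." and ".." segments, e.g. /foo/bar/../bar/x -> /foo/bar/x.
    # posixpath.normpath uses "/" separators regardless of platform.
    return posixpath.normpath(path)

print(normalize_url_path("/foo/bar/../bar/path.aspx"))  # /foo/bar/path.aspx
```

Note that normpath also strips a trailing slash, which may or may not matter for generated links.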
54,230
56,537
CakePHP ACL Database Setup: ARO / ACO structure?
I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated!
CakePHP's built-in ACL system is really powerful, but poorly documented in terms of actual implementation details. A system that we've used with some success in a number of CakePHP-based projects is as follows. It's a modification of some group-level access systems that have been documented elsewhere. Our system's aims are to have a simple system where users are authorised on a group-level, but they can have specific additional rights on items that were created by them, or on a per-user basis. We wanted to avoid having to create a specific entry for each user (or, more specifically for each ARO) in the aros_acos table. We have a Users table, and a Roles table. Users user_id, user_name, role_id Roles id, role_name Create the ARO tree for each role (we usually have 4 roles - Unauthorised Guest (id 1), Authorised User (id 2), Site Moderator (id 3) and Administrator (id 4)): cake acl create aro / Role.1 cake acl create aro 1 Role.2... etc... After this, you have to use SQL or phpMyAdmin or similar to add aliases for all of these, as the cake command line tool doesn't do it. We use 'Role-{id}' and 'User-{id}' for all of ours. We then create a ROOT ACO - cake acl create aco / 'ROOT' and then create ACOs for all the controllers under this ROOT one: cake acl create aco 'ROOT' 'MyController'... etc... So far so normal. We add an additional field in the aros_acos table called _editown which we can use as an additional action in the ACL component's actionMap. 
CREATE TABLE IF NOT EXISTS `aros_acos` ( `id` int(11) NOT NULL auto_increment, `aro_id` int(11) default NULL, `aco_id` int(11) default NULL, `_create` int(11) NOT NULL default '0', `_read` int(11) NOT NULL default '0', `_update` int(11) NOT NULL default '0', `_delete` int(11) NOT NULL default '0', `_editown` int(11) NOT NULL default '0', PRIMARY KEY (`id`), KEY `acl` (`aro_id`,`aco_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; We can then set up the Auth component to use the 'crud' method, which validates the requested controller/action against an AclComponent::check(). In the app_controller we have something along the lines of: private function setupAuth() { if(isset($this->Auth)) {.... $this->Auth->authorize = 'crud'; $this->Auth->actionMap = array( 'index' => 'read', 'add' => 'create', 'edit' => 'update', 'editMine' => 'editown', 'view' => 'read'... etc... );... etc... } } Again, this is fairly standard CakePHP stuff. We then have a checkAccess method in the AppController that adds in the group-level stuff to check whether to check a group ARO or a user ARO for access: private function checkAccess() { if(!$user = $this->Auth->user()) { $role_alias = 'Role-1'; $user_alias = null; } else { $role_alias = 'Role-'. $user['User']['role_id']; $user_alias = 'User-'. $user['User']['id']; } // do we have an aro for this user? 
if($user_alias && ($user_aro = $this->User->Aro->findByAlias($user_alias))) { $aro_alias = $user_alias; } else { $aro_alias = $role_alias; } if ('editown' == $this->Auth->actionMap[$this->action]) { if($this->Acl->check($aro_alias, $this->name, 'editown') and $this->isMine()) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } else { // check this user-level aro for access if($this->Acl->check($aro_alias, $this->name, $this->Auth->actionMap[$this->action])) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } } The setupAuth() and checkAccess() methods are called in the AppController's beforeFilter() callback. There's an isMine method in the AppController too (see below) that just checks that the user_id of the requested item is the same as the currently authenticated user. I've left this out for clarity. That's really all there is to it. You can then allow / deny particular groups access to specific acos - cake acl grant 'Role-2' 'MyController' 'read' cake acl grant 'Role-2' 'MyController' 'editown' cake acl deny 'Role-2' 'MyController' 'update' cake acl deny 'Role-2' 'MyController' 'delete' I'm sure you get the picture. Anyway, this answer's way longer than I intended it to be, and it probably makes next to no sense, but I hope it's some help to you... -- edit -- As requested, here's an edited (purely for clarity - there's a lot of stuff in our boilerplate code that's meaningless here) isMine() method that we have in our AppController. 
I've removed a lot of error checking stuff too, but this is the essence of it: function isMine($model=null, $id=null, $usermodel='User', $foreignkey='user_id') { if(empty($model)) { // default model is first item in $this->uses array $model = $this->uses[0]; } if(empty($id)) { if(!empty($this->passedArgs['id'])) { $id = $this->passedArgs['id']; } elseif(!empty($this->passedArgs[0])) { $id = $this->passedArgs[0]; } } if(is_array($id)) { foreach($id as $i) { if(!$this->_isMine($model, $i, $usermodel, $foreignkey)) { return false; } } return true; } return $this->_isMine($model, $id, $usermodel, $foreignkey); } function _isMine($model, $id, $usermodel='User', $foreignkey='user_id') { $user = Configure::read('curr.loggedinuser'); // this is set in the UsersController on successful login if(isset($this->$model)) { $model = $this->$model; } else { $model = ClassRegistry::init($model); } //read model if(!($record = $model->read(null, $id))) { return false; } //get foreign key if($usermodel == $model->alias) { if($record[$model->alias][$model->primaryKey] == $user['User']['id']) { return true; } } elseif($record[$model->alias][$foreignkey] == $user['User']['id']) { return true; } return false; }
CakePHP ACL Database Setup: ARO / ACO structure? I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated!
TITLE: CakePHP ACL Database Setup: ARO / ACO structure? QUESTION: I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated! ANSWER: CakePHP's built-in ACL system is really powerful, but poorly documented in terms of actual implementation details. A system that we've used with some success in a number of CakePHP-based projects is as follows. It's a modification of some group-level access systems that have been documented elsewhere. Our system's aims are to have a simple system where users are authorised on a group-level, but they can have specific additional rights on items that were created by them, or on a per-user basis. We wanted to avoid having to create a specific entry for each user (or, more specifically for each ARO) in the aros_acos table. We have a Users table, and a Roles table. Users user_id, user_name, role_id Roles id, role_name Create the ARO tree for each role (we usually have 4 roles - Unauthorised Guest (id 1), Authorised User (id 2), Site Moderator (id 3) and Administrator (id 4)): cake acl create aro / Role.1 cake acl create aro 1 Role.2... etc... After this, you have to use SQL or phpMyAdmin or similar to add aliases for all of these, as the cake command line tool doesn't do it. We use 'Role-{id}' and 'User-{id}' for all of ours. 
We then create a ROOT ACO - cake acl create aco / 'ROOT' and then create ACOs for all the controllers under this ROOT one: cake acl create aco 'ROOT' 'MyController'... etc... So far so normal. We add an additional field in the aros_acos table called _editown which we can use as an additional action in the ACL component's actionMap. CREATE TABLE IF NOT EXISTS `aros_acos` ( `id` int(11) NOT NULL auto_increment, `aro_id` int(11) default NULL, `aco_id` int(11) default NULL, `_create` int(11) NOT NULL default '0', `_read` int(11) NOT NULL default '0', `_update` int(11) NOT NULL default '0', `_delete` int(11) NOT NULL default '0', `_editown` int(11) NOT NULL default '0', PRIMARY KEY (`id`), KEY `acl` (`aro_id`,`aco_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; We can then set up the Auth component to use the 'crud' method, which validates the requested controller/action against an AclComponent::check(). In the app_controller we have something along the lines of: private function setupAuth() { if(isset($this->Auth)) {.... $this->Auth->authorize = 'crud'; $this->Auth->actionMap = array( 'index' => 'read', 'add' => 'create', 'edit' => 'update', 'editMine' => 'editown', 'view' => 'read'... etc... );... etc... } } Again, this is fairly standard CakePHP stuff. We then have a checkAccess method in the AppController that adds in the group-level stuff to check whether to check a group ARO or a user ARO for access: private function checkAccess() { if(!$user = $this->Auth->user()) { $role_alias = 'Role-1'; $user_alias = null; } else { $role_alias = 'Role-'. $user['User']['role_id']; $user_alias = 'User-'. $user['User']['id']; } // do we have an aro for this user? 
if($user_alias && ($user_aro = $this->User->Aro->findByAlias($user_alias))) { $aro_alias = $user_alias; } else { $aro_alias = $role_alias; } if ('editown' == $this->Auth->actionMap[$this->action]) { if($this->Acl->check($aro_alias, $this->name, 'editown') and $this->isMine()) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } else { // check this user-level aro for access if($this->Acl->check($aro_alias, $this->name, $this->Auth->actionMap[$this->action])) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } } The setupAuth() and checkAccess() methods are called in the AppController's beforeFilter() callback. There's an isMine method in the AppController too (see below) that just checks that the user_id of the requested item is the same as the currently authenticated user. I've left this out for clarity. That's really all there is to it. You can then allow / deny particular groups access to specific acos - cake acl grant 'Role-2' 'MyController' 'read' cake acl grant 'Role-2' 'MyController' 'editown' cake acl deny 'Role-2' 'MyController' 'update' cake acl deny 'Role-2' 'MyController' 'delete' I'm sure you get the picture. Anyway, this answer's way longer than I intended it to be, and it probably makes next to no sense, but I hope it's some help to you... -- edit -- As requested, here's an edited (purely for clarity - there's a lot of stuff in our boilerplate code that's meaningless here) isMine() method that we have in our AppController.
I've removed a lot of error checking stuff too, but this is the essence of it: function isMine($model=null, $id=null, $usermodel='User', $foreignkey='user_id') { if(empty($model)) { // default model is first item in $this->uses array $model = $this->uses[0]; } if(empty($id)) { if(!empty($this->passedArgs['id'])) { $id = $this->passedArgs['id']; } elseif(!empty($this->passedArgs[0])) { $id = $this->passedArgs[0]; } } if(is_array($id)) { foreach($id as $i) { if(!$this->_isMine($model, $i, $usermodel, $foreignkey)) { return false; } } return true; } return $this->_isMine($model, $id, $usermodel, $foreignkey); } function _isMine($model, $id, $usermodel='User', $foreignkey='user_id') { $user = Configure::read('curr.loggedinuser'); // this is set in the UsersController on successful login if(isset($this->$model)) { $model = $this->$model; } else { $model = ClassRegistry::init($model); } //read model if(!($record = $model->read(null, $id))) { return false; } //get foreign key if($usermodel == $model->alias) { if($record[$model->alias][$model->primaryKey] == $user['User']['id']) { return true; } } elseif($record[$model->alias][$foreignkey] == $user['User']['id']) { return true; } return false; }
[ "php", "cakephp", "permissions", "acl", "security-roles" ]
21
51
27,189
1
0
2008-09-10T14:39:38.147000
2008-09-11T13:36:18.010000
54,237
54,326
Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)?
I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then that other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to.
In your css you need to define a .highlight {border:1px solid red;} rule or something similar. Then using jQuery, $(document).ready(function () { //Work as soon as the DOM is ready for parsing var id = location.hash.substr(1); //Get the word after the hash from the url if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash }); To highlight the targets on mouse over also add: $("a[href^='#']").mouseover(function() { var id = $(this).attr('href').substr(1); $('#'+id).addClass('highlight'); }).mouseout(function() { var id = $(this).attr('href').substr(1); $('#'+id).removeClass('highlight'); });
Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)? I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then that other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to.
TITLE: Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)? QUESTION: I want to link to a bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked, so that if someone clicked on #bookmark2 then that other area would be highlighted. I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to. ANSWER: In your css you need to define a .highlight {border:1px solid red;} rule or something similar. Then using jQuery, $(document).ready(function () { //Work as soon as the DOM is ready for parsing var id = location.hash.substr(1); //Get the word after the hash from the url if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash }); To highlight the targets on mouse over also add: $("a[href^='#']").mouseover(function() { var id = $(this).attr('href').substr(1); $('#'+id).addClass('highlight'); }).mouseout(function() { var id = $(this).attr('href').substr(1); $('#'+id).removeClass('highlight'); });
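A CSS-only alternative worth noting alongside the jQuery answer: the :target pseudo-class matches the element whose id equals the URL fragment, so the highlight needs no script at all. This is an addition to the answer, not part of it, and very old browsers of the question's era lack support:

```css
/* Any element whose id matches the #fragment in the URL gets the border,
   e.g. site.com/shoppingpage#Programx highlights <div id="Programx">. */
:target {
  border: 1px solid red;
}
```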
[ "javascript", "jquery", "css" ]
4
10
4,438
4
0
2008-09-10T14:41:43.697000
2008-09-10T15:09:37.967000
54,255
54,434
In Vim is there a way to delete without putting text in the register?
Using Vim I often want to replace a block of code with a block that I just yanked. But when I delete the block of code that is to be replaced, that block itself goes into the register which erases the block I just yanked. So I've got in the habit of yanking, then inserting, then deleting what I didn't want, but with large blocks of code this gets messy trying to keep the inserted block and the block to delete separate. So what is the slickest and quickest way to replace text in Vim? is there a way to delete text without putting it into the register? is there a way to say e.g. "replace next word" or "replace up to next paragraph" or is the best way to somehow use the multi-register feature?
To delete something without saving it in a register, you can use the "black hole register": "_d Of course you could also use any of the other registers that don't hold anything you are interested in.
In Vim is there a way to delete without putting text in the register? Using Vim I often want to replace a block of code with a block that I just yanked. But when I delete the block of code that is to be replaced, that block itself goes into the register which erases the block I just yanked. So I've got in the habit of yanking, then inserting, then deleting what I didn't want, but with large blocks of code this gets messy trying to keep the inserted block and the block to delete separate. So what is the slickest and quickest way to replace text in Vim? is there a way to delete text without putting it into the register? is there a way to say e.g. "replace next word" or "replace up to next paragraph" or is the best way to somehow use the multi-register feature?
TITLE: In Vim is there a way to delete without putting text in the register? QUESTION: Using Vim I often want to replace a block of code with a block that I just yanked. But when I delete the block of code that is to be replaced, that block itself goes into the register which erases the block I just yanked. So I've got in the habit of yanking, then inserting, then deleting what I didn't want, but with large blocks of code this gets messy trying to keep the inserted block and the block to delete separate. So what is the slickest and quickest way to replace text in Vim? is there a way to delete text without putting it into the register? is there a way to say e.g. "replace next word" or "replace up to next paragraph" or is the best way to somehow use the multi-register feature? ANSWER: To delete something without saving it in a register, you can use the "black hole register": "_d Of course you could also use any of the other registers that don't hold anything you are interested in.
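A small sketch of the answer's black hole register in practice; the mappings are an optional convenience and the key choice (<leader>d) is an arbitrary assumption, not something from the answer:

```vim
" Delete the current line without clobbering any register:
"_dd

" Optional .vimrc mappings so <leader>d deletes to the black hole
" register in normal and visual mode (key choice is arbitrary):
nnoremap <leader>d "_d
xnoremap <leader>d "_d
```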
[ "vim", "replace", "vim-registers" ]
547
489
93,401
30
0
2008-09-10T14:46:35.277000
2008-09-10T15:36:42.830000
54,264
54,276
Tracking Refactorings in a Bug Database
Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? Write up a bug-report and associate the refactoring with it. Write up a feature-request and associate the refactoring with it. Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. Just don't do any refactoring. Other. Note that all bug reports and feature descriptions will be visible to managers and customers.
I vote for the "sneak in refactorings" approach, which is, I believe, the way refactoring is meant to be done in the first place. It's probably a bad idea to refactor just for the sake of "cleaning up the code." This means that you're making changes for no real reason. Refactoring is, by definition, modifying the code without the intent of fixing bugs or adding features. If you're following the KISS principle, any new feature is going to need at least some refactoring because you're not really thinking about how to make the most extensible system possible the first time around.
Tracking Refactorings in a Bug Database Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? Write up a bug-report and associate the refactoring with it. Write up a feature-request and associate the refactoring with it. Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. Just don't do any refactoring. Other. Note that all bug reports and feature descriptions will be visible to managers and customers.
TITLE: Tracking Refactorings in a Bug Database QUESTION: Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? Write up a bug-report and associate the refactoring with it. Write up a feature-request and associate the refactoring with it. Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. Just don't do any refactoring. Other. Note that all bug reports and feature descriptions will be visible to managers and customers. ANSWER: I vote for the "sneak in refactorings" approach, which is, I believe, the way refactoring is meant to be done in the first place. It's probably a bad idea to refactor just for the sake of "cleaning up the code." This means that you're making changes for no real reason. Refactoring is, by definition, modifying the code without the intent of fixing bugs or adding features. If you're following the KISS principle, any new feature is going to need at least some refactoring because you're not really thinking about how to make the most extensible system possible the first time around.
[ "refactoring", "bug-tracking" ]
5
7
249
5
0
2008-09-10T14:49:45.237000
2008-09-10T14:53:43.480000
54,295
54,454
How to write java.util.Properties to XML with sorted keys?
I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? String propFile = "/path/to/file"; Properties props = new Properties(); /*set some properties here*/ try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); }
Here's a quick and dirty way to do it: String propFile = "/path/to/file"; Properties props = new Properties(); /* Set some properties here */ Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } }; tmp.putAll(props); try { FileOutputStream xmlStream = new FileOutputStream(propFile); /* This comes out SORTED! */ tmp.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } Here are the caveats: The tmp Properties (an anonymous subclass) doesn't fulfill the contract of Properties. For example, if you got its keySet and tried to remove an element from it, an exception would be raised. So, don't allow instances of this subclass to escape! In the snippet above, you are never passing it to another object or returning it to a caller who has a legitimate expectation that it fulfills the contract of Properties, so it is safe. The implementation of Properties.storeToXML could change, causing it to ignore the keySet method. For example, a future release, or OpenJDK, could use the keys() method of Hashtable instead of keySet. This is one of the reasons why classes should always document their "self-use" (Effective Java Item 15). However, in this case, the worst that would happen is that your output would revert to unsorted. Remember that the Properties storage methods ignore any "default" entries.
How to write java.util.Properties to XML with sorted keys? I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? String propFile = "/path/to/file"; Properties props = new Properties(); /*set some properties here*/ try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); }
TITLE: How to write java.util.Properties to XML with sorted keys? QUESTION: I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? String propFile = "/path/to/file"; Properties props = new Properties(); /*set some properties here*/ try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } ANSWER: Here's a quick and dirty way to do it: String propFile = "/path/to/file"; Properties props = new Properties(); /* Set some properties here */ Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } }; tmp.putAll(props); try { FileOutputStream xmlStream = new FileOutputStream(propFile); /* This comes out SORTED! */ tmp.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } Here are the caveats: The tmp Properties (an anonymous subclass) doesn't fulfill the contract of Properties. For example, if you got its keySet and tried to remove an element from it, an exception would be raised. So, don't allow instances of this subclass to escape! In the snippet above, you are never passing it to another object or returning it to a caller who has a legitimate expectation that it fulfills the contract of Properties, so it is safe. The implementation of Properties.storeToXML could change, causing it to ignore the keySet method. For example, a future release, or OpenJDK, could use the keys() method of Hashtable instead of keySet. This is one of the reasons why classes should always document their "self-use" (Effective Java Item 15). However, in this case, the worst that would happen is that your output would revert to unsorted. Remember that the Properties storage methods ignore any "default" entries.
[ "java", "xml" ]
13
28
17,177
9
0
2008-09-10T15:00:48.550000
2008-09-10T15:41:34.020000
54,318
55,673
Any tools to get code churn metrics for a Subversion repository?
I'm looking for any tools that can give you code churn metrics (graphs and charts would be even better) for a Subversion repository. One tool I know of is statsvn - a Java tool that creates some HTML reports and some code churn metrics. Statsvn reports the number of lines modified (churned) by user over time, some descriptive stats on LOC per file and folder/subfolder, etc. I would like to know code churn in order to get a better idea of the state of the project. The idea behind this was inspired by the MS research: Use of Relative Code Churn Measures to Predict System Defect Density. In a nutshell, the more that source code is churning (changing, whether adding new lines, deleting, changing, etc.) the higher the probability that defects are being introduced into the system. The MS research paper says that the number of defects produced can be predicted based on a number of relative code churn measures. I wanted to know if there are any others that are maybe open source, extensible, etc.
If you are willing to go the commercial route, check out FishEye from Atlassian (also see their demo site). FishEye also supports adding plugins (though this does not appear to be very well supported at this time).
Any tools to get code churn metrics for a Subversion repository? I'm looking for any tools that can give you code churn metrics (graphs and charts would be even better) for a Subversion repository. One tool I know of is statsvn - a Java tool that creates some HTML reports and some code churn metrics. Statsvn reports the number of lines modified (churned) by user over time, some descriptive stats on LOC per file and folder/subfolder, etc. I would like to know code churn in order to get a better idea of the state of the project. The idea behind this was inspired by the MS research: Use of Relative Code Churn Measures to Predict System Defect Density. In a nutshell, the more that source code is churning (changing, whether adding new lines, deleting, changing, etc.) the higher the probability that defects are being introduced into the system. The MS research paper says that the number of defects produced can be predicted based on a number of relative code churn measures. I wanted to know if there are any others that are maybe open source, extensible, etc.
TITLE: Any tools to get code churn metrics for a Subversion repository? QUESTION: I'm looking for any tools that can give you code churn metrics (graphs and charts would be even better) for a Subversion repository. One tool I know of is statsvn - a Java tool that creates some HTML reports and some code churn metrics. Statsvn reports the number of lines modified (churned) by user over time, some descriptive stats on LOC per file and folder/subfolder, etc. I would like to know code churn in order to get a better idea of the state of the project. The idea behind this was inspired by the MS research: Use of Relative Code Churn Measures to Predict System Defect Density. In a nutshell, the more that source code is churning (changing, whether adding new lines, deleting, changing, etc.) the higher the probability that defects are being introduced into the system. The MS research paper says that the number of defects produced can be predicted based on a number of relative code churn measures. I wanted to know if there are any others that are maybe open source, extensible, etc. ANSWER: If you are willing to go the commercial route, check out FishEye from Atlassian (also see their demo site). FishEye also supports adding plugins (though this does not appear to be very well supported at this time).
[ "svn", "project-management" ]
29
5
12,515
7
0
2008-09-10T15:07:18.940000
2008-09-11T02:05:13.257000
54,334
54,343
How do I concatenate text in a query in sql server?
The following SQL: SELECT notes + 'SomeText' FROM NotesTable a Gives the error: The data types nvarchar and text are incompatible in the add operator.
The only way would be to convert your text field into an nvarchar field. Select Cast(notes as nvarchar(4000)) + 'SomeText' From NotesTable a Otherwise, I suggest doing the concatenation in your application.
How do I concatenate text in a query in sql server? The following SQL: SELECT notes + 'SomeText' FROM NotesTable a Gives the error: The data types nvarchar and text are incompatible in the add operator.
TITLE: How do I concatenate text in a query in sql server? QUESTION: The following SQL: SELECT notes + 'SomeText' FROM NotesTable a Gives the error: The data types nvarchar and text are incompatible in the add operator. ANSWER: The only way would be to convert your text field into an nvarchar field. Select Cast(notes as nvarchar(4000)) + 'SomeText' From NotesTable a Otherwise, I suggest doing the concatenation in your application.
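As a sketch alongside the answer's cast: on SQL Server 2005 and later, casting to nvarchar(max) avoids the 4000-character truncation of the nvarchar(4000) cast. This is an assumption about the server version, not part of the original answer:

```sql
-- Assumes SQL Server 2005+; nvarchar(max) keeps the full text column
-- contents instead of truncating at 4000 characters.
SELECT CAST(notes AS nvarchar(max)) + N'SomeText'
FROM NotesTable a;
```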
[ "sql", "sql-server" ]
81
82
294,247
6
0
2008-09-10T15:11:42.983000
2008-09-10T15:13:16.070000
54,337
54,395
How can you find out where the style for a ASP .Net web page element came from?
I have a quandary. My web application (C#, .Net 3.0, etc.) has Themes, CSS sheets and, of course, inline style definitions. Now that's a lot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical method (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specify styles for the whole and then one-off them as needed. Unfortunately I can't tell which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from.
IMHO, Firebug is going to be your best bet. It will tell you which file the style came from and you can click on the filename to be transported instantly to the relevant line in the file. Note: You can hit ctrl + shift + C on any page to select and inspect an element with the mouse.
How can you find out where the style for a ASP .Net web page element came from? I have a quandary. My web application (C#, .Net 3.0, etc.) has Themes, CSS sheets and, of course, inline style definitions. Now that's a lot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical method (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specify styles for the whole and then one-off them as needed. Unfortunately I can't tell which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from.
TITLE: How can you find out where the style for a ASP .Net web page element came from? QUESTION: I have a quandary. My web application (C#, .Net 3.0, etc.) has Themes, CSS sheets and, of course, inline style definitions. Now that's a lot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical method (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specify styles for the whole and then one-off them as needed. Unfortunately I can't tell which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from. ANSWER: IMHO, Firebug is going to be your best bet. It will tell you which file the style came from and you can click on the filename to be transported instantly to the relevant line in the file. Note: You can hit ctrl + shift + C on any page to select and inspect an element with the mouse.
[ "c#", "asp.net", "css", "themes" ]
2
5
796
5
0
2008-09-10T15:12:19.770000
2008-09-10T15:25:54.433000
54,364
54,473
Using ActiveDirectoryMembershipProvider with two domain controllers
We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done?
If ActiveDirectory couldn't handle multiple domain controllers then it wouldn't be a very good technology. You just need to make sure in your Membership configuration you are pointing to the 'Domain' rather than the 'Server' and then add two or more controllers to your domain. Generally if you are referring to the domain as "LDAP://server/DC=domain,DC=com" then you should be able to remove the "server" part and refer simply to "LDAP://DC=domain,DC=com" The following code project gives a long list of things you can do in Active Directory from C#: http://www.codeproject.com/KB/system/everythingInAD.aspx#7
Using ActiveDirectoryMembershipProvider with two domain controllers We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done?
TITLE: Using ActiveDirectoryMembershipProvider with two domain controllers QUESTION: We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done? ANSWER: If ActiveDirectory couldn't handle multiple domain controllers then it wouldn't be a very good technology. You just need to make sure in your Membership configuration you are pointing to the 'Domain' rather than the 'Server' and then add two or more controllers to your domain. Generally if you are referring to the domain as "LDAP://server/DC=domain,DC=com" then you should be able to remove the "server" part and refer simply to "LDAP://DC=domain,DC=com" The following code project gives a long list of things you can do in Active Directory from C#: http://www.codeproject.com/KB/system/everythingInAD.aspx#7
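A sketch of the web.config wiring the answer describes, pointing the provider at the domain rather than one server. The connection-string name here is illustrative, and the exact provider attributes (including the assembly-qualified type) should be checked against your framework version:

```xml
<!-- Connection string names the domain, not a specific DC, so any
     available domain controller can service binds.
     "ADConnection" is an illustrative name, not from the answer. -->
<connectionStrings>
  <add name="ADConnection"
       connectionString="LDAP://DC=domain,DC=com" />
</connectionStrings>
<system.web>
  <membership defaultProvider="ADMembership">
    <providers>
      <add name="ADMembership"
           type="System.Web.Security.ActiveDirectoryMembershipProvider"
           connectionStringName="ADConnection" />
    </providers>
  </membership>
</system.web>
```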
[ "asp.net", "active-directory", "directoryservices" ]
3
2
3,390
2
0
2008-09-10T15:17:06.623000
2008-09-10T15:46:52.197000
54,365
621,786
Shell one liner to prepend to a file
This is probably a complex solution. I am looking for a simple operator like ">>", but for prepending. I am afraid it does not exist. I'll have to do something like mv myfile tmp cat myheader tmp > myfile Anything smarter?
The hack below was a quick off-the-cuff answer which worked and received lots of upvotes. Then, as the question became more popular and more time passed, people started reporting that it sorta worked but weird things could happen, or it just didn't work at all. Such fun. I recommend the 'sponge' solution posted by user222 as Sponge is part of 'moreutils' and probably on your system by default. (echo 'foo' && cat yourfile) | sponge yourfile The solution below exploits the exact implementation of file descriptors on your system and, because implementation varies significantly between nixes, its success is entirely system dependent, definitively non-portable, and should not be relied upon for anything even vaguely important. Sponge uses the /tmp filesystem but condenses the task to a single command. Now, with all that out of the way the original answer was: Creating another file descriptor for the file ( exec 3<> yourfile ) thence writing to that ( >&3 ) seems to overcome the read/write on same file dilemma. Works for me on 600K files with awk. However trying the same trick using 'cat' fails. Passing the prependage as a variable to awk ( -v TEXT="$text" ) overcomes the literal quotes problem which prevents doing this trick with 'sed'. #!/bin/bash text="Hello world What's up?" exec 3<> yourfile && awk -v TEXT="$text" 'BEGIN {print TEXT}{print}' yourfile >&3
Shell one liner to prepend to a file This is probably a complex solution. I am looking for a simple operator like ">>", but for prepending. I am afraid it does not exist. I'll have to do something like mv myfile tmp cat myheader tmp > myfile Anything smarter?
TITLE: Shell one liner to prepend to a file QUESTION: This is probably a complex solution. I am looking for a simple operator like ">>", but for prepending. I am afraid it does not exist. I'll have to do something like mv myfile tmp cat myheader tmp > myfile Anything smarter? ANSWER: The hack below was a quick off-the-cuff answer which worked and received lots of upvotes. Then, as the question became more popular and more time passed, people started reporting that it sorta worked but weird things could happen, or it just didn't work at all. Such fun. I recommend the 'sponge' solution posted by user222 as Sponge is part of 'moreutils' and probably on your system by default. (echo 'foo' && cat yourfile) | sponge yourfile The solution below exploits the exact implementation of file descriptors on your system and, because implementation varies significantly between nixes, its success is entirely system dependent, definitively non-portable, and should not be relied upon for anything even vaguely important. Sponge uses the /tmp filesystem but condenses the task to a single command. Now, with all that out of the way the original answer was: Creating another file descriptor for the file ( exec 3<> yourfile ) thence writing to that ( >&3 ) seems to overcome the read/write on same file dilemma. Works for me on 600K files with awk. However trying the same trick using 'cat' fails. Passing the prependage as a variable to awk ( -v TEXT="$text" ) overcomes the literal quotes problem which prevents doing this trick with 'sed'. #!/bin/bash text="Hello world What's up?" exec 3<> yourfile && awk -v TEXT="$text" 'BEGIN {print TEXT}{print}' yourfile >&3
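The mv/cat idiom from the question can be sketched a little more safely with mktemp, so concurrent runs don't share a scratch file. This is just the temp-file approach from the question, not a substitute for the sponge one-liner the answer recommends; the file names are illustrative:

```shell
# Set up sample files matching the question's idiom.
printf 'body line\n'   > myfile    # the file to be prepended to
printf 'header line\n' > myheader  # the text to prepend

# mktemp gives a unique scratch file so concurrent runs can't collide,
# and mv replaces the original in a single step on the same filesystem.
tmp=$(mktemp) || exit 1
cat myheader myfile > "$tmp"       # header first, then the original contents
mv "$tmp" myfile

cat myfile
# header line
# body line
```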
[ "shell", "unix" ]
142
31
65,714
35
0
2008-09-10T15:17:09.507000
2009-03-07T12:43:49.330000
54,387
54,429
Exposing .net methods as Excel functions?
I have a set of calculation methods sitting in a .Net DLL. I would like to make those methods available to Excel (2003+) users so they can use them in their spreadsheets. For example, my .net method: public double CalculateSomethingReallyComplex(double a, double b) {...} I would like to enable them to call this method just by typing a formula in a random cell: =CalculateSomethingReallyComplex(A1, B1) What would be the best way to accomplish this?
There are two methods - you can use Visual Studio Tools for Office (VSTO): http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx or you can use COM: http://blogs.msdn.com/eric_carter/archive/2004/12/01/273127.aspx I'm not sure if the VSTO method would work in older versions of Excel, but the COM method should work fine.
Exposing .net methods as Excel functions? I have a set of calculation methods sitting in a .Net DLL. I would like to make those methods available to Excel (2003+) users so they can use them in their spreadsheets. For example, my .net method: public double CalculateSomethingReallyComplex(double a, double b) {...} I would like to enable them to call this method just by typing a formula in a random cell: =CalculateSomethingReallyComplex(A1, B1) What would be the best way to accomplish this?
TITLE: Exposing .net methods as Excel functions? QUESTION: I have a set of calculation methods sitting in a .Net DLL. I would like to make those methods available to Excel (2003+) users so they can use them in their spreadsheets. For example, my .net method: public double CalculateSomethingReallyComplex(double a, double b) {...} I would like to enable them to call this method just by typing a formula in a random cell: =CalculateSomethingReallyComplex(A1, B1) What would be the best way to accomplish this? ANSWER: There are two methods - you can use Visual Studio Tools for Office (VSTO): http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx or you can use COM: http://blogs.msdn.com/eric_carter/archive/2004/12/01/273127.aspx I'm not sure if the VSTO method would work in older versions of Excel, but the COM method should work fine.
[ ".net", "excel", "interop", "vsto" ]
6
3
2,461
2
0
2008-09-10T15:23:26.420000
2008-09-10T15:36:08.357000
54,401
54,527
What is the best strategy for code chunks and macros in vim?
As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse. I was thinking of making a separate file for each code chunk and just reading them in with :r code-fornext but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward. What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it. Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them. Does vim have anything like these two functionalities?
To record macros in Vim, in the command mode, hit the q key and another key you want to assign the macro to. For quick throw away macros I usually just hit qq and assign the macro to the q key. Once you are in recording mode, run through your key strokes. When you are done make sure you are back in command mode and hit q again to stop recording. Then to replay the macro manually, you can type @q. To replay the previously run macro you can type @@ or to run it 10 times you could type 10@q or 20@q, etc.. In summary: +----------------------------------+-------------------------------------+ | start recording a macro | qX (X = key to assign macro to) | +----------------------------------+-------------------------------------+ | stop recording a macro | q | +----------------------------------+-------------------------------------+ | playback macro | @X (X = key macro was assigned to) | +----------------------------------+-------------------------------------+ | replay previously played macro | @@ | +----------------------------------+-------------------------------------+ In regards to code chunks, I have found and started using a Vim plug-in called snipMate, which mimics TextMate's snippets feature. You can get the plug-in here: http://www.vim.org/scripts/script.php?script_id=2540 And a short article on using snipMate (along with a short screencast showing it in use): http://www.catonmat.net/blog/vim-plugins-snipmate-vim/ Hope you find this helpful!
What is the best strategy for code chunks and macros in vim? As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse. I was thinking of making a separate file for each code chunk and just reading them in with :r code-fornext but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward. What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it. Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them. Does vim have anything like these two functionalities?
TITLE: What is the best strategy for code chunks and macros in vim? QUESTION: As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse. I was thinking of making a separate file for each code chunk and just reading them in with :r code-fornext but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward. What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it. Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them. Does vim have anything like these two functionalities? ANSWER: To record macros in Vim, in the command mode, hit the q key and another key you want to assign the macro to. For quick throw away macros I usually just hit qq and assign the macro to the q key. Once you are in recording mode, run through your key strokes. When you are done make sure you are back in command mode and hit q again to stop recording. Then to replay the macro manually, you can type @q. To replay the previously run macro you can type @@ or to run it 10 times you could type 10@q or 20@q, etc.
In summary: +----------------------------------+-------------------------------------+ | start recording a macro | qX (X = key to assign macro to) | +----------------------------------+-------------------------------------+ | stop recording a macro | q | +----------------------------------+-------------------------------------+ | playback macro | @X (X = key macro was assigned to) | +----------------------------------+-------------------------------------+ | replay previously played macro | @@ | +----------------------------------+-------------------------------------+ In regards to code chunks, I have found and started using a Vim plug-in called snipMate, which mimics TextMate's snippets feature. You can get the plug-in here: http://www.vim.org/scripts/script.php?script_id=2540 And a short article on using snipMate (along with a short screencast showing it in use): http://www.catonmat.net/blog/vim-plugins-snipmate-vim/ Hope you find this helpful!
[ "vim", "vi" ]
5
8
1,035
2
0
2008-09-10T15:27:50.480000
2008-09-10T16:03:36.230000
54,418
54,430
How do I (or can I) SELECT DISTINCT on multiple columns?
I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status. So I'm thinking: UPDATE sales SET status = 'ACTIVE' WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id) FROM sales HAVING count = 1) But my brain hurts going any farther than that.
SELECT DISTINCT a,b,c FROM t is roughly equivalent to: SELECT a,b,c FROM t GROUP BY a,b,c It's a good idea to get used to the GROUP BY syntax, as it's more powerful. For your query, I'd do it like this: UPDATE sales SET status='ACTIVE' WHERE id IN ( SELECT id FROM sales S INNER JOIN ( SELECT saleprice, saledate FROM sales GROUP BY saleprice, saledate HAVING COUNT(*) = 1 ) T ON S.saleprice=T.saleprice AND s.saledate=T.saledate )
How do I (or can I) SELECT DISTINCT on multiple columns? I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status. So I'm thinking: UPDATE sales SET status = 'ACTIVE' WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id) FROM sales HAVING count = 1) But my brain hurts going any farther than that.
TITLE: How do I (or can I) SELECT DISTINCT on multiple columns? QUESTION: I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status. So I'm thinking: UPDATE sales SET status = 'ACTIVE' WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id) FROM sales HAVING count = 1) But my brain hurts going any farther than that. ANSWER: SELECT DISTINCT a,b,c FROM t is roughly equivalent to: SELECT a,b,c FROM t GROUP BY a,b,c It's a good idea to get used to the GROUP BY syntax, as it's more powerful. For your query, I'd do it like this: UPDATE sales SET status='ACTIVE' WHERE id IN ( SELECT id FROM sales S INNER JOIN ( SELECT saleprice, saledate FROM sales GROUP BY saleprice, saledate HAVING COUNT(*) = 1 ) T ON S.saleprice=T.saleprice AND s.saledate=T.saledate )
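For anyone who wants to see the answer's GROUP BY ... HAVING COUNT(*) = 1 pattern run end to end, here is a self-contained sketch against an in-memory SQLite database (the sample rows are invented; the query shape follows the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (
        id        INTEGER PRIMARY KEY,
        saleprice REAL,
        saledate  TEXT,
        status    TEXT
    );
    INSERT INTO sales (saleprice, saledate, status) VALUES
        (10.0, '2008-09-10', 'PENDING'),  -- unique (price, day) pair
        (20.0, '2008-09-10', 'PENDING'),  -- duplicated pair ...
        (20.0, '2008-09-10', 'PENDING'); -- ... so neither copy is activated
""")
# Activate only the sales whose (saleprice, saledate) pair occurs exactly once.
conn.execute("""
    UPDATE sales SET status = 'ACTIVE'
    WHERE id IN (
        SELECT id FROM sales S
        INNER JOIN (
            SELECT saleprice, saledate FROM sales
            GROUP BY saleprice, saledate
            HAVING COUNT(*) = 1
        ) T ON S.saleprice = T.saleprice AND S.saledate = T.saledate
    )
""")
active_ids = [row[0] for row in conn.execute(
    "SELECT id FROM sales WHERE status = 'ACTIVE' ORDER BY id")]
```

Only the row whose day/price combination is unique ends up ACTIVE; the two duplicates stay PENDING, which is exactly the behaviour the question asks for.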
[ "sql", "postgresql", "sql-update", "duplicates", "distinct" ]
583
624
1,277,671
5
0
2008-09-10T15:33:10.890000
2008-09-10T15:36:09.417000
54,419
90,870
How to host 2 WCF services in 1 Windows Service?
I have a WCF application that has two Services that I am trying to host in a single Windows Service using net.tcp. I can run either of the services just fine, but as soon as I try to put them both in the Windows Service only the first one loads up. I have determined that the second service's ctor is being called but the OnStart never fires. This tells me that WCF is finding something wrong with loading up that second service. Using net.tcp I know I need to turn on port sharing and start the port sharing service on the server. This all seems to be working properly. I have tried putting the services on different tcp ports and still no success. My service installer class looks like this: [RunInstaller(true)] public class ProjectInstaller: Installer { private ServiceProcessInstaller _process; private ServiceInstaller _serviceAdmin; private ServiceInstaller _servicePrint; public ProjectInstaller() { _process = new ServiceProcessInstaller(); _process.Account = ServiceAccount.LocalSystem; _servicePrint = new ServiceInstaller(); _servicePrint.ServiceName = "PrintingService"; _servicePrint.StartType = ServiceStartMode.Automatic; _serviceAdmin = new ServiceInstaller(); _serviceAdmin.ServiceName = "PrintingAdminService"; _serviceAdmin.StartType = ServiceStartMode.Automatic; Installers.AddRange(new Installer[] { _process, _servicePrint, _serviceAdmin }); } } and both services look very similar class PrintService: ServiceBase { public ServiceHost _host = null; public PrintService() { ServiceName = "PCTSPrintingService"; CanStop = true; AutoLog = true; } protected override void OnStart(string[] args) { if (_host != null) _host.Close(); _host = new ServiceHost(typeof(Printing.ServiceImplementation.PrintingService)); _host.Faulted += host_Faulted; _host.Open(); } }
Base your service on this MSDN article and create two service hosts. But instead of actually calling each service host directly, you can break it out to as many classes as you want which defines each service you want to run: internal class MyWCFService1 { internal static System.ServiceModel.ServiceHost serviceHost = null; internal static void StartService() { if (serviceHost != null) { serviceHost.Close(); } // Instantiate new ServiceHost. serviceHost = new System.ServiceModel.ServiceHost(typeof(MyService1)); // Open myServiceHost. serviceHost.Open(); } internal static void StopService() { if (serviceHost != null) { serviceHost.Close(); serviceHost = null; } } }; In the body of the windows service host, call the different classes: // Start the Windows service. protected override void OnStart( string[] args ) { // Call all the set up WCF services... MyWCFService1.StartService(); //MyWCFService2.StartService(); //MyWCFService3.StartService(); } Then you can add as many WCF services as you like to one windows service host. REMEMBER to call the stop methods as well....
How to host 2 WCF services in 1 Windows Service? I have a WCF application that has two Services that I am trying to host in a single Windows Service using net.tcp. I can run either of the services just fine, but as soon as I try to put them both in the Windows Service only the first one loads up. I have determined that the second service's ctor is being called but the OnStart never fires. This tells me that WCF is finding something wrong with loading up that second service. Using net.tcp I know I need to turn on port sharing and start the port sharing service on the server. This all seems to be working properly. I have tried putting the services on different tcp ports and still no success. My service installer class looks like this: [RunInstaller(true)] public class ProjectInstaller: Installer { private ServiceProcessInstaller _process; private ServiceInstaller _serviceAdmin; private ServiceInstaller _servicePrint; public ProjectInstaller() { _process = new ServiceProcessInstaller(); _process.Account = ServiceAccount.LocalSystem; _servicePrint = new ServiceInstaller(); _servicePrint.ServiceName = "PrintingService"; _servicePrint.StartType = ServiceStartMode.Automatic; _serviceAdmin = new ServiceInstaller(); _serviceAdmin.ServiceName = "PrintingAdminService"; _serviceAdmin.StartType = ServiceStartMode.Automatic; Installers.AddRange(new Installer[] { _process, _servicePrint, _serviceAdmin }); } } and both services look very similar class PrintService: ServiceBase { public ServiceHost _host = null; public PrintService() { ServiceName = "PCTSPrintingService"; CanStop = true; AutoLog = true; } protected override void OnStart(string[] args) { if (_host != null) _host.Close(); _host = new ServiceHost(typeof(Printing.ServiceImplementation.PrintingService)); _host.Faulted += host_Faulted; _host.Open(); } }
TITLE: How to host 2 WCF services in 1 Windows Service? QUESTION: I have a WCF application that has two Services that I am trying to host in a single Windows Service using net.tcp. I can run either of the services just fine, but as soon as I try to put them both in the Windows Service only the first one loads up. I have determined that the second service's ctor is being called but the OnStart never fires. This tells me that WCF is finding something wrong with loading up that second service. Using net.tcp I know I need to turn on port sharing and start the port sharing service on the server. This all seems to be working properly. I have tried putting the services on different tcp ports and still no success. My service installer class looks like this: [RunInstaller(true)] public class ProjectInstaller: Installer { private ServiceProcessInstaller _process; private ServiceInstaller _serviceAdmin; private ServiceInstaller _servicePrint; public ProjectInstaller() { _process = new ServiceProcessInstaller(); _process.Account = ServiceAccount.LocalSystem; _servicePrint = new ServiceInstaller(); _servicePrint.ServiceName = "PrintingService"; _servicePrint.StartType = ServiceStartMode.Automatic; _serviceAdmin = new ServiceInstaller(); _serviceAdmin.ServiceName = "PrintingAdminService"; _serviceAdmin.StartType = ServiceStartMode.Automatic; Installers.AddRange(new Installer[] { _process, _servicePrint, _serviceAdmin }); } } and both services look very similar class PrintService: ServiceBase { public ServiceHost _host = null; public PrintService() { ServiceName = "PCTSPrintingService"; CanStop = true; AutoLog = true; } protected override void OnStart(string[] args) { if (_host != null) _host.Close(); _host = new ServiceHost(typeof(Printing.ServiceImplementation.PrintingService)); _host.Faulted += host_Faulted; _host.Open(); } } ANSWER: Base your service on this MSDN article and create two service hosts.
But instead of actually calling each service host directly, you can break it out to as many classes as you want which defines each service you want to run: internal class MyWCFService1 { internal static System.ServiceModel.ServiceHost serviceHost = null; internal static void StartService() { if (serviceHost != null) { serviceHost.Close(); } // Instantiate new ServiceHost. serviceHost = new System.ServiceModel.ServiceHost(typeof(MyService1)); // Open myServiceHost. serviceHost.Open(); } internal static void StopService() { if (serviceHost != null) { serviceHost.Close(); serviceHost = null; } } }; In the body of the windows service host, call the different classes: // Start the Windows service. protected override void OnStart( string[] args ) { // Call all the set up WCF services... MyWCFService1.StartService(); //MyWCFService2.StartService(); //MyWCFService3.StartService(); } Then you can add as many WCF services as you like to one windows service host. REMEMBER to call the stop methods as well....
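The shape of the answer — one small wrapper owning each service's lifetime, with the host's OnStart/OnStop fanning out to every wrapper — is language-neutral. A minimal Python sketch of that structure (all names here are illustrative, mirroring the answer's StartService/StopService pairs, not a real service host):

```python
class ServiceWrapper:
    """Owns one service's lifetime, like MyWCFService1 in the answer."""
    def __init__(self, name):
        self.name = name
        self.running = False

    def start_service(self):
        if self.running:          # mirror the "close if already open" guard
            self.stop_service()
        self.running = True

    def stop_service(self):
        self.running = False

class Host:
    """The single host process: its start/stop fan out to every service."""
    def __init__(self, services):
        self.services = services

    def on_start(self):
        for service in self.services:
            service.start_service()

    def on_stop(self):
        for service in self.services:
            service.stop_service()

host = Host([ServiceWrapper("printing"), ServiceWrapper("printing-admin")])
host.on_start()
```

Adding a third service is then one more wrapper in the list, which is the extensibility point the answer is getting at.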
[ "wcf", "windows-services" ]
8
11
8,833
4
0
2008-09-10T15:33:23.103000
2008-09-18T08:13:37.370000
54,421
54,457
What is the design pattern for processing command line arguments
If you are writing a program that is executable from the command line, you often want to offer the user several options or flags, along with possibly more than one argument. I have stumbled my way through this many times, but is there some sort of design pattern for looping through args and calling the appropriate handler functions? Consider: myprogram -f filename -d directory -r regex How do you organize the handler functions after you retrieve the arguments using whatever built-ins for your language? (language-specific answers welcomed, if that helps you articulate an answer)
I don't know of any documented "patterns" for processing. I believe one of the oldest libraries/APIs for handling arguments is getopt. Googling "getopt" shows lots of man pages and links to implementations. Generally, I have a preferences or settings service in my application that the argument processor knows how to communicate with. Arguments are then translated into something in this service that the application can then query. This could be as simple as a dictionary of settings (like a string setting named "filename").
What is the design pattern for processing command line arguments If you are writing a program that is executable from the command line, you often want to offer the user several options or flags, along with possibly more than one argument. I have stumbled my way through this many times, but is there some sort of design pattern for looping through args and calling the appropriate handler functions? Consider: myprogram -f filename -d directory -r regex How do you organize the handler functions after you retrieve the arguments using whatever built-ins for your language? (language-specific answers welcomed, if that helps you articulate an answer)
TITLE: What is the design pattern for processing command line arguments QUESTION: If you are writing a program that is executable from the command line, you often want to offer the user several options or flags, along with possibly more than one argument. I have stumbled my way through this many times, but is there some sort of design pattern for looping through args and calling the appropriate handler functions? Consider: myprogram -f filename -d directory -r regex How do you organize the handler functions after you retrieve the arguments using whatever built-ins for your language? (language-specific answers welcomed, if that helps you articulate an answer) ANSWER: I don't know of any documented "patterns" for processing. I believe one of the oldest libraries/APIs for handling arguments is getopt. Googling "getopt" shows lots of man pages and links to implementations. Generally, I have a preferences or settings service in my application that the argument processor knows how to communicate with. Arguments are then translated into something in this service that the application can then query. This could be as simple as a dictionary of settings (like a string setting named "filename").
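The getopt library the answer mentions has a direct descendant in Python's standard library; here is a small sketch that parses the question's example flags into the kind of settings dictionary the answer describes (the flag-to-name mapping and sample values are mine):

```python
import getopt

def parse_args(argv):
    """Translate -f/-d/-r options into a settings dictionary that the
    rest of the application can query, per the answer's suggestion."""
    opts, remaining = getopt.getopt(argv, "f:d:r:")
    names = {"-f": "filename", "-d": "directory", "-r": "regex"}
    return {names[flag]: value for flag, value in opts}, remaining

# Corresponds to: myprogram -f notes.txt -d /tmp -r "a.*b"
settings, rest = parse_args(["-f", "notes.txt", "-d", "/tmp", "-r", "a.*b"])
```

Handlers can then look up settings by name instead of re-walking argv, which keeps argument parsing in one place.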
[ "design-patterns", "command-line", "arguments", "language-agnostic" ]
51
14
18,127
15
0
2008-09-10T15:33:49.890000
2008-09-10T15:41:59.793000
54,426
54,470
What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL?
Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice.
Do you want something exactly like the Delicious bookmarklet (as in, something the user actively clicks on to submit the URL)? If so, you could probably just copy their code and replace the target URL: javascript:(function(){ location.href='http://example.com/your-script.php?url='+ encodeURIComponent(window.location.href)+ '&title='+encodeURIComponent(document.title) })() You may need to change the query string names, etc., to match what your script expects. If you want to track a user through your website automatically, this probably won't be possible. You'd need to request the URL with AJAX, but the web browser won't allow Javascript to make a request outside of the originating domain. Maybe it's possible with iframe trickery. Edit: John beat me to it.
What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL? Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice.
TITLE: What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL? QUESTION: Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice. ANSWER: Do you want something exactly like the Delicious bookmarklet (as in, something the user actively clicks on to submit the URL)? If so, you could probably just copy their code and replace the target URL: javascript:(function(){ location.href='http://example.com/your-script.php?url='+ encodeURIComponent(window.location.href)+ '&title='+encodeURIComponent(document.title) })() You may need to change the query string names, etc., to match what your script expects. If you want to track a user through your website automatically, this probably won't be possible. You'd need to request the URL with AJAX, but the web browser won't allow Javascript to make a request outside of the originating domain. Maybe it's possible with iframe trickery. Edit: John beat me to it.
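The core of both snippets is percent-encoding the visited page's address into a query string. For readers more comfortable outside JavaScript, the same construction in Python (quote with safe="" behaves like encodeURIComponent; the Yacktrack base URL comes from the question, the helper name is mine):

```python
from urllib.parse import quote

def submission_url(page_url, base="http://www.yacktrack.com/home?query="):
    """Build the 3rd-party submission URL for the page being visited."""
    # safe="" also encodes '/' and ':', matching encodeURIComponent
    # rather than encodeURI, so the whole address survives as one value.
    return base + quote(page_url, safe="")

link = submission_url("http://example.com/some page")
```

Using the encodeURIComponent-style escaping matters: with plain encodeURI, an '&' in the visited URL would split the query parameter.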
[ "javascript", "url", "submission" ]
2
2
11,140
3
0
2008-09-10T15:35:31.447000
2008-09-10T15:45:59.003000
54,475
54,486
Aggressive JavaScript caching
I've run into a problem where I make changes to a few JavaScript files that are referenced in an HTML file, but the browser doesn't see the changes. It holds onto the copy cached in the browser, even though the web server has a newer version. Not until I force the browser to clear the cache do I see the changes. Is this a web-server configuration? Do I need to set my JavaScript files to never cache? I've seen some interesting techniques in the Google Web Toolkit where they actually create a new JavaScript file name any time an update is made. I believe this is to prevent proxies and browsers from keeping old versions of the JavaScript files with the same names. Is there a list of best practices somewhere?
We append a product build number to the end of all Javascript (and CSS etc.) like so:
Aggressive JavaScript caching I've run into a problem where I make changes to a few JavaScript files that are referenced in an HTML file, but the browser doesn't see the changes. It holds onto the copy cached in the browser, even though the web server has a newer version. Not until I force the browser to clear the cache do I see the changes. Is this a web-server configuration? Do I need to set my JavaScript files to never cache? I've seen some interesting techniques in the Google Web Toolkit where they actually create a new JavaScript file name any time an update is made. I believe this is to prevent proxies and browsers from keeping old versions of the JavaScript files with the same names. Is there a list of best practices somewhere?
TITLE: Aggressive JavaScript caching QUESTION: I've run into a problem where I make changes to a few JavaScript files that are referenced in an HTML file, but the browser doesn't see the changes. It holds onto the copy cached in the browser, even though the web server has a newer version. Not until I force the browser to clear the cache do I see the changes. Is this a web-server configuration? Do I need to set my JavaScript files to never cache? I've seen some interesting techniques in the Google Web Toolkit where they actually create a new JavaScript file name any time an update is made. I believe this is to prevent proxies and browsers from keeping old versions of the JavaScript files with the same names. Is there a list of best practices somewhere? ANSWER: We append a product build number to the end of all Javascript (and CSS etc.) like so:
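The answer's snippet (a script tag with the build number appended) was lost in formatting, but the technique is just a version parameter on the asset URL. A hedged illustration in Python — the build number and filename are made up — of how such URLs can be generated server-side:

```python
BUILD_NUMBER = "1234"  # hypothetical: baked in once per release build

def versioned(url, build=BUILD_NUMBER):
    """Append a build number so every release yields a new cache key,
    forcing browsers and proxies to refetch the changed file."""
    separator = "&" if "?" in url else "?"
    return url + separator + "v=" + build

script_tag = '<script src="%s"></script>' % versioned("/js/app.js")
```

Because the query string changes only when the build number does, unchanged releases still benefit from aggressive caching.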
[ "javascript", "caching" ]
21
30
19,518
10
0
2008-09-10T15:46:58.877000
2008-09-10T15:49:34.250000
54,482
54,496
How do I list user defined types in a SQL Server database?
I need to enumerate all the user defined types created in a SQL Server database with CREATE TYPE, and/or find out whether they have already been defined. With tables or stored procedures I'd do something like this: if exists (select * from dbo.sysobjects where name='foobar' and xtype='U') drop table foobar However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in sysobjects. Can anyone enlighten me?
Types and UDTs don't appear in sys.objects. You should be able to get what you're looking for with the following: select * from sys.types where is_user_defined = 1
How do I list user defined types in a SQL Server database? I need to enumerate all the user defined types created in a SQL Server database with CREATE TYPE, and/or find out whether they have already been defined. With tables or stored procedures I'd do something like this: if exists (select * from dbo.sysobjects where name='foobar' and xtype='U') drop table foobar However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in sysobjects. Can anyone enlighten me?
TITLE: How do I list user defined types in a SQL Server database? QUESTION: I need to enumerate all the user defined types created in a SQL Server database with CREATE TYPE, and/or find out whether they have already been defined. With tables or stored procedures I'd do something like this: if exists (select * from dbo.sysobjects where name='foobar' and xtype='U') drop table foobar However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in sysobjects. Can anyone enlighten me? ANSWER: Types and UDTs don't appear in sys.objects. You should be able to get what you're looking for with the following: select * from sys.types where is_user_defined = 1
[ "sql", "sql-server", "t-sql" ]
45
95
72,449
3
0
2008-09-10T15:48:21.803000
2008-09-10T15:53:12.230000
54,487
54,502
Conditional Number Formatting In Java
How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry!
If you use DecimalFormat and specify # in the pattern it only displays the value if it is not zero. See my question How do I format a number in java? Sample Code DecimalFormat format = new DecimalFormat("###.##"); double[] doubles = {123.45, 99.0, 23.2, 45.0}; for(int i=0;i<doubles.length;i++) { System.out.println(format.format(doubles[i])); }
Conditional Number Formatting In Java How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry!
TITLE: Conditional Number Formatting In Java QUESTION: How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry! ANSWER: If you use DecimalFormat and specify # in the pattern it only displays the value if it is not zero. See my question How do I format a number in java? Sample Code DecimalFormat format = new DecimalFormat("###.##"); double[] doubles = {123.45, 99.0, 23.2, 45.0}; for(int i=0;i<doubles.length;i++) { System.out.println(format.format(doubles[i])); }
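For comparison with DecimalFormat's # behaviour, Python's general-purpose g presentation type drops a zero fractional part the same way (a sketch only; note that g falls back to scientific notation for very large magnitudes, which DecimalFormat("###.##") does not):

```python
def trim_zero_fraction(x):
    """Render 99.0 as '99' but keep 123.45, like DecimalFormat("###.##")."""
    # round to two decimals first, matching the ".##" part of the pattern
    return format(round(x, 2), "g")

formatted = [trim_zero_fraction(x) for x in (123.45, 99.0, 23.2, 45.0)]
```

This reproduces the question's table: the fraction appears only when it is non-zero.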
[ "java", "java1.4" ]
5
6
2,064
3
0
2008-09-10T15:49:35.750000
2008-09-10T15:56:19.837000
54,503
65,070
Problem with .net app under linux, doesn't work from shell script
I'm working on a .net post-commit hook to feed data into OnTime via their Soap SDK. My hook works on Windows fine, but on our production RHEL4 subversion server, it won't work when called from a shell script. #!/bin/sh /usr/bin/mono $1/hooks/post-commit.exe "$@" When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error: (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision): Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information. at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149 at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode () at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] I've tried using mkbundle and mkbundle2 to make a stand alone that could be named post-commit, but I get a different error message: Unhandled Exception: System.ArgumentNullException: Argument cannot be null. Parameter name: Value cannot be null. at System.Guid.CheckNull (System.Object o) [0x00000] at System.Guid..ctor (System.String g) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] Any ideas why it might be failing from a shell script or what might be wrong with the bundled version? Edit: @Herms, I've already tried it with an echo, and it looks right. 
As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .NET assembly with the same results. Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post-commit hook, and it takes two parameters, so those need to be passed along to the .NET assembly. The "$@" was what was recommended at the mono site for calling a .NET assembly from a shell script. The shell script is executing the .NET assembly with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line. Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE Edit: @Luke, I tried it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script). Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle version is failing with a different problem.
It is normal for some processes to hang around for a while after they close their stdout (ie. you get an end-of-file reading from them). You need to call proc.WaitForExit() after reading all the data but before checking ExitCode.
Problem with .net app under linux, doesn't work from shell script I'm working on a .NET post-commit hook to feed data into OnTime via their Soap SDK. My hook works fine on Windows, but on our production RHEL4 subversion server, it won't work when called from a shell script. #!/bin/sh /usr/bin/mono $1/hooks/post-commit.exe "$@" When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision): Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information. at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149 at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode () at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] I've tried using mkbundle and mkbundle2 to make a standalone executable that could be named post-commit, but I get a different error message: Unhandled Exception: System.ArgumentNullException: Argument cannot be null. Parameter name: Value cannot be null. at System.Guid.CheckNull (System.Object o) [0x00000] at System.Guid..ctor (System.String g) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] Any ideas why it might be failing from a shell script or what might be wrong with the bundled version? Edit: @Herms, I've already tried it with an echo, and it looks right. 
As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .NET assembly with the same results. Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post-commit hook, and it takes two parameters, so those need to be passed along to the .NET assembly. The "$@" was what was recommended at the mono site for calling a .NET assembly from a shell script. The shell script is executing the .NET assembly with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line. Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE Edit: @Luke, I tried it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script). Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle version is failing with a different problem.
TITLE: Problem with .net app under linux, doesn't work from shell script QUESTION: I'm working on a .NET post-commit hook to feed data into OnTime via their Soap SDK. My hook works fine on Windows, but on our production RHEL4 subversion server, it won't work when called from a shell script. #!/bin/sh /usr/bin/mono $1/hooks/post-commit.exe "$@" When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision): Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information. at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149 at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode () at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] I've tried using mkbundle and mkbundle2 to make a standalone executable that could be named post-commit, but I get a different error message: Unhandled Exception: System.ArgumentNullException: Argument cannot be null. Parameter name: Value cannot be null. at System.Guid.CheckNull (System.Object o) [0x00000] at System.Guid..ctor (System.String g) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] Any ideas why it might be failing from a shell script or what might be wrong with the bundled version? Edit: @Herms, I've already tried it with an echo, and it looks right. 
As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .NET assembly with the same results. Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post-commit hook, and it takes two parameters, so those need to be passed along to the .NET assembly. The "$@" was what was recommended at the mono site for calling a .NET assembly from a shell script. The shell script is executing the .NET assembly with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line. Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE Edit: @Luke, I tried it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script). Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle version is failing with a different problem. ANSWER: It is normal for some processes to hang around for a while after they close their stdout (ie. you get an end-of-file reading from them). You need to call proc.WaitForExit() after reading all the data but before checking ExitCode.
[ ".net", "linux", "svn", "mono" ]
1
2
1,724
6
0
2008-09-10T15:56:33.297000
2008-09-15T17:47:13.413000
54,504
54,601
What is the best practice for writing Registry calls/File System calls/Process creation filter for WinXP, Vista?
We needed to monitor all processes' Registry calls/File System calls/Process creations in the system (for the antivirus HIPS module). Also, from time to time it will be necessary to delay some calls or decline them.
The supported method of doing this is RegNotifyChangeKeyValue Most virus checkers likely perform some sort of API hooking instead of using this function. There's lots of information out there about API hooking, like http://www.codeproject.com/KB/system/hooksys.aspx, http://www.codeguru.com/cpp/w-p/system/misc/article.php/c5667
What is the best practice for writing Registry calls/File System calls/Process creation filter for WinXP, Vista? We needed to monitor all processes' Registry calls/File System calls/Process creations in the system (for the antivirus HIPS module). Also, from time to time it will be necessary to delay some calls or decline them.
TITLE: What is the best practice for writing Registry calls/File System calls/Process creation filter for WinXP, Vista? QUESTION: We needed to monitor all processes' Registry calls/File System calls/Process creations in the system (for the antivirus HIPS module). Also, from time to time it will be necessary to delay some calls or decline them. ANSWER: The supported method of doing this is RegNotifyChangeKeyValue Most virus checkers likely perform some sort of API hooking instead of using this function. There's lots of information out there about API hooking, like http://www.codeproject.com/KB/system/hooksys.aspx, http://www.codeguru.com/cpp/w-p/system/misc/article.php/c5667
[ "winapi", "drivers" ]
1
1
139
1
0
2008-09-10T15:56:38.120000
2008-09-10T16:36:55.853000
54,512
54,517
How should I store short text strings into a SQL Server database?
varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar?
VARCHAR(255). It won't use all 255 characters of storage, just the storage you need. It's 255 and not 256 because then you have space for 255 plus the null-terminator (or size byte). The "N" is for Unicode. Use if you expect non-ASCII characters.
How should I store short text strings into a SQL Server database? varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar?
TITLE: How should I store short text strings into a SQL Server database? QUESTION: varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar? ANSWER: VARCHAR(255). It won't use all 255 characters of storage, just the storage you need. It's 255 and not 256 because then you have space for 255 plus the null-terminator (or size byte). The "N" is for Unicode. Use if you expect non-ASCII characters.
[ "sql", "sql-server", "database", "database-design" ]
13
11
14,217
8
0
2008-09-10T15:58:05.417000
2008-09-10T15:59:41.573000
54,522
54,544
Printing data into a preprinted form in C# .Net 3.5 SP1
I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto
When finding the x,y coordinates to use for lining up your new text with the pre-printed gaps, the default settings for the graphics object's Draw____() functions are 100 pixels per inch. That might be subject to change based on your printer, but in my (very limited) experience that's always been the case.
Printing data into a preprinted form in C# .Net 3.5 SP1 I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto
TITLE: Printing data into a preprinted form in C# .Net 3.5 SP1 QUESTION: I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto ANSWER: When finding the x,y coordinates to use for lining up your new text with the pre-printed gaps, the default settings for the graphics object's Draw____() functions are 100 pixels per inch. That might be subject to change based on your printer, but in my (very limited) experience that's always been the case.
[ "c#", ".net", "printing" ]
3
2
2,741
2
0
2008-09-10T16:01:23.397000
2008-09-10T16:10:05.373000
54,546
55,560
How can I indicate that multiple versions of a dependent assembly are okay?
Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC.
There are several places you can indicate to the .NET Framework that a specific version of a strongly named library should be preferred over another. These are: Publisher Policy file machine.config file app.config file All these methods utilise the "bindingRedirect" element, which can instruct the .NET Framework to bind a version or range of versions of an assembly to a specific version. Here is a short example of the tag in use to bind all versions of an assembly up until version 2.0 to version 2.5: <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.5.0.0" /> There are lots of details, so it's best if you read about Redirecting Assembly Versions on MSDN to decide which method is best for your case.
How can I indicate that multiple versions of a dependent assembly are okay? Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC.
TITLE: How can I indicate that multiple versions of a dependent assembly are okay? QUESTION: Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC. ANSWER: There are several places you can indicate to the .NET Framework that a specific version of a strongly named library should be preferred over another. These are: Publisher Policy file machine.config file app.config file All these methods utilise the "bindingRedirect" element, which can instruct the .NET Framework to bind a version or range of versions of an assembly to a specific version. Here is a short example of the tag in use to bind all versions of an assembly up until version 2.0 to version 2.5: <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.5.0.0" /> There are lots of details, so it's best if you read about Redirecting Assembly Versions on MSDN to decide which method is best for your case.
[ ".net", "versioning" ]
3
2
1,275
2
0
2008-09-10T16:10:35.433000
2008-09-11T00:32:11.997000
54,566
54,572
Call to a member function on a non-object
So I'm refactoring my code to implement more OOP. I set up a class to hold page attributes. class PageAtrributes { private $db_connection; private $page_title; public function __construct($db_connection) { $this->db_connection = $db_connection; $this->page_title = ''; } public function get_page_title() { return $this->page_title; } public function set_page_title($page_title) { $this->page_title = $page_title; } } Later on I call the set_page_title() function like so function page_properties($objPortal) { $objPage->set_page_title($myrow['title']); } When I do I receive the error message: Call to a member function set_page_title() on a non-object So what am I missing?
It means that $objPage is not an instance of an object. Can we see the code you used to initialize the variable? As you expect a specific object type, you can also make use of PHP's type hinting feature to get the error when your logic is violated: function page_properties(PageAtrributes $objPortal) {... $objPage->set_page_title($myrow['title']); } This function will only accept PageAtrributes for the first parameter.
Call to a member function on a non-object So I'm refactoring my code to implement more OOP. I set up a class to hold page attributes. class PageAtrributes { private $db_connection; private $page_title; public function __construct($db_connection) { $this->db_connection = $db_connection; $this->page_title = ''; } public function get_page_title() { return $this->page_title; } public function set_page_title($page_title) { $this->page_title = $page_title; } } Later on I call the set_page_title() function like so function page_properties($objPortal) { $objPage->set_page_title($myrow['title']); } When I do I receive the error message: Call to a member function set_page_title() on a non-object So what am I missing?
TITLE: Call to a member function on a non-object QUESTION: So I'm refactoring my code to implement more OOP. I set up a class to hold page attributes. class PageAtrributes { private $db_connection; private $page_title; public function __construct($db_connection) { $this->db_connection = $db_connection; $this->page_title = ''; } public function get_page_title() { return $this->page_title; } public function set_page_title($page_title) { $this->page_title = $page_title; } } Later on I call the set_page_title() function like so function page_properties($objPortal) { $objPage->set_page_title($myrow['title']); } When I do I receive the error message: Call to a member function set_page_title() on a non-object So what am I missing? ANSWER: It means that $objPage is not an instance of an object. Can we see the code you used to initialize the variable? As you expect a specific object type, you can also make use of PHP's type hinting feature to get the error when your logic is violated: function page_properties(PageAtrributes $objPortal) {... $objPage->set_page_title($myrow['title']); } This function will only accept PageAtrributes for the first parameter.
[ "php" ]
84
50
382,377
8
0
2008-09-10T16:19:57.267000
2008-09-10T16:21:28.027000
54,567
54,799
How do I get InputVerifier to work with an editable JComboBox
I've got a JComboBox with a custom inputVerifyer set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifyer gets invoked on a JTextField fine. What might I be doing wrong?
I found a workaround. I thought I'd let the next person with this problem know about it. Basically, instead of setting the inputVerifier on the ComboBox, you set it on its "Editor Component": JComboBox combo = new JComboBox(); JTextField tf = (JTextField)(combo.getEditor().getEditorComponent()); tf.setInputVerifier(verifyer);
How do I get InputVerifier to work with an editable JComboBox I've got a JComboBox with a custom inputVerifyer set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifyer gets invoked on a JTextField fine. What might I be doing wrong?
TITLE: How do I get InputVerifier to work with an editable JComboBox QUESTION: I've got a JComboBox with a custom inputVerifyer set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifyer gets invoked on a JTextField fine. What might I be doing wrong? ANSWER: I found a workaround. I thought I'd let the next person with this problem know about it. Basically, instead of setting the inputVerifier on the ComboBox, you set it on its "Editor Component": JComboBox combo = new JComboBox(); JTextField tf = (JTextField)(combo.getEditor().getEditorComponent()); tf.setInputVerifier(verifyer);
[ "java", "swing", "jcombobox" ]
4
8
3,337
2
0
2008-09-10T16:20:21.167000
2008-09-10T17:38:35.067000
54,578
54,862
How to capture output of "pnputil.exe -e"
How do I capture the output of "%windir%/system32/pnputil.exe -e"? (assume windows vista 32-bit) Bonus for technical explanation of why the app normally writes output to the cmd shell, but when stdout and/or stderr are redirected then the app writes nothing to the console or to stdout/stderr? C:\Windows\System32>PnPutil.exe --help Microsoft PnP Utility {...} C:\Windows\System32>pnputil -e > c:\foo.txt C:\Windows\System32>type c:\foo.txt C:\Windows\System32>dir c:\foo.txt Volume in drive C has no label. Volume Serial Number is XXXX-XXXX Directory of c:\ 09/10/2008 12:10 PM 0 foo.txt 1 File(s) 0 bytes
Doesn't seem like there is an easy way at all. You would have to start hooking the call to WriteConsole and dumping the string buffers. See this post for a similar discussion. Of course, if this is a one-off for interactive use then just select all the output from the command window and copy it to the clipboard. (Make sure your cmd window buffer is big enough to store all the output).
How to capture output of "pnputil.exe -e" How do I capture the output of "%windir%/system32/pnputil.exe -e"? (assume windows vista 32-bit) Bonus for technical explanation of why the app normally writes output to the cmd shell, but when stdout and/or stderr are redirected then the app writes nothing to the console or to stdout/stderr? C:\Windows\System32>PnPutil.exe --help Microsoft PnP Utility {...} C:\Windows\System32>pnputil -e > c:\foo.txt C:\Windows\System32>type c:\foo.txt C:\Windows\System32>dir c:\foo.txt Volume in drive C has no label. Volume Serial Number is XXXX-XXXX Directory of c:\ 09/10/2008 12:10 PM 0 foo.txt 1 File(s) 0 bytes
TITLE: How to capture output of "pnputil.exe -e" QUESTION: How do I capture the output of "%windir%/system32/pnputil.exe -e"? (assume windows vista 32-bit) Bonus for technical explanation of why the app normally writes output to the cmd shell, but when stdout and/or stderr are redirected then the app writes nothing to the console or to stdout/stderr? C:\Windows\System32>PnPutil.exe --help Microsoft PnP Utility {...} C:\Windows\System32>pnputil -e > c:\foo.txt C:\Windows\System32>type c:\foo.txt C:\Windows\System32>dir c:\foo.txt Volume in drive C has no label. Volume Serial Number is XXXX-XXXX Directory of c:\ 09/10/2008 12:10 PM 0 foo.txt 1 File(s) 0 bytes ANSWER: Doesn't seem like there is an easy way at all. You would have to start hooking the call to WriteConsole and dumping the string buffers. See this post for a similar discussion. Of course, if this is a one-off for interactive use then just select all the output from the command window and copy it to the clipboard. (Make sure your cmd window buffer is big enough to store all the output).
[ "windows" ]
3
1
10,467
9
0
2008-09-10T16:23:47.663000
2008-09-10T17:58:39.167000
54,579
292,810
WCF Configuration without a config file
Does anyone know of a good example of how to expose a WCF service programmatically without the use of a configuration file? I know the service object model is much richer now with WCF, so I know it's possible. I just have not seen an example of how to do so. Conversely, I would like to see how consuming without a configuration file is done as well. Before anyone asks, I have a very specific need to do this without configuration files. I would normally not recommend such a practice, but as I said, there is a very specific need in this case.
Consuming a web service without a config file is very simple, as I've discovered. You simply need to create a binding object and address object and pass them either to the constructor of the client proxy or to a generic ChannelFactory instance. You can look at the default app.config to see what settings to use, then create a static helper method somewhere that instantiates your proxy: internal static MyServiceSoapClient CreateWebServiceInstance() { BasicHttpBinding binding = new BasicHttpBinding(); // I think most (or all) of these are defaults--I just copied them from app.config: binding.SendTimeout = TimeSpan.FromMinutes( 1 ); binding.OpenTimeout = TimeSpan.FromMinutes( 1 ); binding.CloseTimeout = TimeSpan.FromMinutes( 1 ); binding.ReceiveTimeout = TimeSpan.FromMinutes( 10 ); binding.AllowCookies = false; binding.BypassProxyOnLocal = false; binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard; binding.MessageEncoding = WSMessageEncoding.Text; binding.TextEncoding = System.Text.Encoding.UTF8; binding.TransferMode = TransferMode.Buffered; binding.UseDefaultWebProxy = true; return new MyServiceSoapClient( binding, new EndpointAddress( "http://www.mysite.com/MyService.asmx" ) ); }
WCF Configuration without a config file Does anyone know of a good example of how to expose a WCF service programmatically without the use of a configuration file? I know the service object model is much richer now with WCF, so I know it's possible. I just have not seen an example of how to do so. Conversely, I would like to see how consuming without a configuration file is done as well. Before anyone asks, I have a very specific need to do this without configuration files. I would normally not recommend such a practice, but as I said, there is a very specific need in this case.
TITLE: WCF Configuration without a config file QUESTION: Does anyone know of a good example of how to expose a WCF service programmatically without the use of a configuration file? I know the service object model is much richer now with WCF, so I know it's possible. I just have not seen an example of how to do so. Conversely, I would like to see how consuming without a configuration file is done as well. Before anyone asks, I have a very specific need to do this without configuration files. I would normally not recommend such a practice, but as I said, there is a very specific need in this case. ANSWER: Consuming a web service without a config file is very simple, as I've discovered. You simply need to create a binding object and address object and pass them either to the constructor of the client proxy or to a generic ChannelFactory instance. You can look at the default app.config to see what settings to use, then create a static helper method somewhere that instantiates your proxy: internal static MyServiceSoapClient CreateWebServiceInstance() { BasicHttpBinding binding = new BasicHttpBinding(); // I think most (or all) of these are defaults--I just copied them from app.config: binding.SendTimeout = TimeSpan.FromMinutes( 1 ); binding.OpenTimeout = TimeSpan.FromMinutes( 1 ); binding.CloseTimeout = TimeSpan.FromMinutes( 1 ); binding.ReceiveTimeout = TimeSpan.FromMinutes( 10 ); binding.AllowCookies = false; binding.BypassProxyOnLocal = false; binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard; binding.MessageEncoding = WSMessageEncoding.Text; binding.TextEncoding = System.Text.Encoding.UTF8; binding.TransferMode = TransferMode.Buffered; binding.UseDefaultWebProxy = true; return new MyServiceSoapClient( binding, new EndpointAddress( "http://www.mysite.com/MyService.asmx" ) ); }
[ "wcf", "configuration", "configuration-files" ]
90
117
76,898
7
0
2008-09-10T16:25:29.267000
2008-11-15T17:43:06.027000
54,585
54,596
When should you use a class vs a struct in C++?
In what scenarios is it better to use a struct vs a class in C++?
The differences between a class and a struct in C++ are: struct members and base classes/structs are public by default. class members and base classes/structs are private by default. Both classes and structs can have a mixture of public, protected and private members, can use inheritance, and can have member functions. I would recommend you: use struct for plain-old-data structures without any class-like features; use class when you make use of features such as private or protected members, non-default constructors and operators, etc.
When should you use a class vs a struct in C++? In what scenarios is it better to use a struct vs a class in C++?
TITLE: When should you use a class vs a struct in C++? QUESTION: In what scenarios is it better to use a struct vs a class in C++? ANSWER: The differences between a class and a struct in C++ are: struct members and base classes/structs are public by default. class members and base classes/structs are private by default. Both classes and structs can have a mixture of public, protected and private members, can use inheritance, and can have member functions. I would recommend you: use struct for plain-old-data structures without any class-like features; use class when you make use of features such as private or protected members, non-default constructors and operators, etc.
[ "c++", "oop", "class", "struct", "ooad" ]
1,268
1,097
621,076
27
0
2008-09-10T16:29:54.217000
2008-09-10T16:35:23.003000
54,586
57,551
Solid Config for webdev in emacs under linux AND windows?
I have a windows laptop (thinkpad) and somewhat recently rediscovered emacs and the benefit that all those wacky shortcuts can be when the arrow keys are located somewhere near your right armpit. I was discouraged after php-mode, css-mode, etc, under mmm-mode was inconsistent, buggy, and refused to properly interpret some of my files. (In all fairness, I'm most likely doin' it wrong) So I eventually found the nxhtml package which worked pretty well. However, nxhtml causes weird bugs and actually crashes on certain files (certain combinations of nested modes I suppose) under linux! (using Ubuntu 7.10 and Kubuntu 8.04) I'd like to be able to work on the laptop as well as the home linux pc without having to deal with inconsistent implementations of something that shouldn't be this hard. I've googled and looked around and there's a good chance I'm the only human on the planet having these problems... Anyone got some advice? (in lieu of an emacs solution, a good enough cross-platform lightweight text editor with the dev features would also work I suppose...)
Although I use emacs when I have to (ie. when I'm at the command line), I use Eclipse for all my real development work. If you get the Web Standards Toolkit plug-in for it, it can do syntax coloring, tag auto-completion, and other fun stuff. Alternatively, if Eclipse is too "heavy" for you, jEdit is another excellent program for doing web development (it has most of its web dev support built in, but you can also get some additional plug-ins for features like HTML Tidy). Both programs are open source and Java-based, which means they're both free and run on (virtually) any platform.
Solid Config for webdev in emacs under linux AND windows? I have a windows laptop (thinkpad) and somewhat recently rediscovered emacs and the benefit that all those wacky shortcuts can be when the arrow keys are located somewhere near your right armpit. I was discouraged after php-mode, css-mode, etc, under mmm-mode was inconsistent, buggy, and refused to properly interpret some of my files. (In all fairness, I'm most likely doin' it wrong) So I eventually found the nxhtml package which worked pretty well. However, nxhtml causes weird bugs and actually crashes on certain files (certain combinations of nested modes I suppose) under linux! (using Ubuntu 7.10 and Kubuntu 8.04) I'd like to be able to work on the laptop as well as the home linux pc without having to deal with inconsistent implementations of something that shouldn't be this hard. I've googled and looked around and there's a good chance I'm the only human on the planet having these problems... Anyone got some advice? (in lieu of an emacs solution, a good enough cross-platform lightweight text editor with the dev features would also work I suppose...)
TITLE: Solid Config for webdev in emacs under linux AND windows? QUESTION: I have a windows laptop (thinkpad) and somewhat recently rediscovered emacs and the benefit that all those wacky shortcuts can be when the arrow keys are located somewhere near your right armpit. I was discouraged after php-mode, css-mode, etc., under mmm-mode were inconsistent, buggy, and refused to properly interpret some of my files. (In all fairness, I'm most likely doin' it wrong.) So I eventually found the nxhtml package, which worked pretty well. However, nxhtml causes weird bugs and actually crashes on certain files (certain combinations of nested modes, I suppose) under linux! (using Ubuntu 7.10 and Kubuntu 8.04) I'd like to be able to work on the laptop as well as the home linux pc without having to deal with inconsistent implementations of something that shouldn't be this hard. I've googled and looked around, and there's a good chance I'm the only human on the planet having these problems... Anyone got some advice? (In lieu of an emacs solution, a good enough cross-platform lightweight text editor with the dev features would also work, I suppose.) ANSWER: Although I use emacs when I have to (i.e. when I'm at the command line), I use Eclipse for all my real development work. If you get the Web Standards Toolkit plug-in for it, it can do syntax coloring, tag auto-completion, and other fun stuff. Alternatively, if Eclipse is too "heavy" for you, jEdit is another excellent program for doing web development (it has most of its web dev support built in, but you can also get some additional plug-ins for features like HTML Tidy). Both programs are open source and Java-based, which means they're both free and run on (virtually) any platform.
[ "windows", "linux", "emacs", "cross-platform" ]
3
1
575
4
0
2008-09-10T16:30:43.123000
2008-09-11T20:34:41.063000
54,612
55,793
Fast Disk Cloning
Is there a way to have Linux read ahead when cloning a disk? I use the program named "dd" to clone disks. The last time I did this it seemed as though the OS was reading then writing, but never at the same time. Ideally, the destination disk would be constantly writing without waiting; that is, of course, if the source disk can keep up. UPDATE: I normally choose a large block size when cloning (e.g. 16MB or 32MB).
Commodore Jaeger is right about: dd if=/dev/sda of=/dev/sdb bs=1M Also, adjusting "readahead" on the drives usually improves performance. The default may be something like 256, and optimal 1024. Each setup is different, so you would have to run benchmarks to find the best value. # blockdev --getra /dev/sda 256 # blockdev --setra 1024 /dev/sda # blockdev --getra /dev/sda 1024 # blockdev --help Usage: blockdev -V blockdev --report [devices] blockdev [-v|-q] commands devices Available commands: --getsz (get size in 512-byte sectors) --setro (set read-only) --setrw (set read-write) --getro (get read-only) --getss (get sectorsize) --getbsz (get blocksize) --setbsz BLOCKSIZE (set blocksize) --getsize (get 32-bit sector count) --getsize64 (get size in bytes) --setra READAHEAD (set readahead) --getra (get readahead) --flushbufs (flush buffers) --rereadpt (reread partition table) --rmpart PARTNO (disable partition) --rmparts (disable all partitions) #
Fast Disk Cloning Is there a way to have Linux read ahead when cloning a disk? I use the program named "dd" to clone disks. The last time I did this it seemed as though the OS was reading then writing, but never at the same time. Ideally, the destination disk would be constantly writing without waiting; that is, of course, if the source disk can keep up. UPDATE: I normally choose a large block size when cloning (e.g. 16MB or 32MB).
TITLE: Fast Disk Cloning QUESTION: Is there a way to have Linux read ahead when cloning a disk? I use the program named "dd" to clone disks. The last time I did this it seemed as though the OS was reading then writing, but never at the same time. Ideally, the destination disk would be constantly writing without waiting; that is, of course, if the source disk can keep up. UPDATE: I normally choose a large block size when cloning (e.g. 16MB or 32MB). ANSWER: Commodore Jaeger is right about: dd if=/dev/sda of=/dev/sdb bs=1M Also, adjusting "readahead" on the drives usually improves performance. The default may be something like 256, and optimal 1024. Each setup is different, so you would have to run benchmarks to find the best value. # blockdev --getra /dev/sda 256 # blockdev --setra 1024 /dev/sda # blockdev --getra /dev/sda 1024 # blockdev --help Usage: blockdev -V blockdev --report [devices] blockdev [-v|-q] commands devices Available commands: --getsz (get size in 512-byte sectors) --setro (set read-only) --setrw (set read-write) --getro (get read-only) --getss (get sectorsize) --getbsz (get blocksize) --setbsz BLOCKSIZE (set blocksize) --getsize (get 32-bit sector count) --getsize64 (get size in bytes) --setra READAHEAD (set readahead) --getra (get readahead) --flushbufs (flush buffers) --rereadpt (reread partition table) --rmpart PARTNO (disable partition) --rmparts (disable all partitions) #
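The dd-plus-readahead recipe in the answer above can be exercised safely on regular files before pointing it at real block devices. The sketch below is a minimal demonstration under that assumption: the temp-file paths stand in for the source and destination disks, and the commented blockdev line (value in 512-byte sectors, as the answer's --getsz output suggests) is only relevant once you substitute real devices such as /dev/sda and /dev/sdb.

```shell
# Safe stand-ins for the source and destination disks.
src=$(mktemp)
dst=$(mktemp)

# Create some sample data to act as the "source disk" contents.
printf 'sample disk contents' > "$src"

# For a real disk, first consider raising readahead (sectors of 512 bytes):
#   blockdev --setra 1024 /dev/sda

# Clone with a large block size so the transfer stays sequential,
# mirroring: dd if=/dev/sda of=/dev/sdb bs=1M
dd if="$src" of="$dst" bs=1M 2>/dev/null

# Verify the clone is byte-identical to the source.
cmp -s "$src" "$dst" && echo "clone OK"

rm -f "$src" "$dst"
```

Running this prints "clone OK" when the copy matches; swapping the temp files for block devices turns it into the answer's actual clone command.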
[ "linux", "storage", "dd", "parallel-io" ]
7
10
21,536
7
0
2008-09-10T16:39:14.600000
2008-09-11T04:14:15.023000