Why does VS 2005 keep giving me the "'x' is ambiguous in the namespace 'y'" error?
I'm not sure whether it's a VS setting I've changed, a web.config setting, or something else, but I keep getting this error in the Error List even though all my solutions build fine. Here are some examples:

Error 5   'CompilerGlobalScopeAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx  609  184  C:\...\Web\
Error 6   'ArrayList' is ambiguous in the namespace 'System.Collections'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb  13  28  C:\...\Web\
Error 7   'Exception' is ambiguous in the namespace 'System'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb  37  21  C:\...\Web\
Error 8   'EventArgs' is ambiguous in the namespace 'System'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb  47  64  C:\...\Web\
Error 9   'EventArgs' is ambiguous in the namespace 'System'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb  140  72  C:\...\Web\
Error 10  'Array' is ambiguous in the namespace 'System'.  C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb  147  35  C:\...\Web\
[...etc...]
Error 90  'DateTime' is ambiguous in the namespace 'System'.  C:\projects\MyProject\Web\App_Code\XsltHelperFunctions.vb  13  8  C:\...\Web\

As you can imagine, it's really annoying: there are blue squiggly underlines everywhere in the code, and filtering the relevant errors out of the Error List pane is nearly impossible. I've checked the default ASP.NET web.config and machine.config, but nothing stood out there.
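For what it's worth, my understanding is that this error appears when the compiler is handed two assemblies that both export the same types. A hypothetical web.config that would reproduce it (purely an illustration, not my actual config — the assembly entries here are invented for the example) would be one that registers mscorlib for both runtime versions:

```xml
<configuration>
  <system.web>
    <compilation debug="true">
      <assemblies>
        <!-- Hypothetical: the same core assembly registered twice.
             Every type in System.* now resolves to two candidate
             assemblies, so the compiler reports
             "'X' is ambiguous in the namespace 'Y'". -->
        <add assembly="mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
        <add assembly="mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```

Nothing like that is in my configs, though, which is why I started looking at the GAC itself.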
Edit: here's some of the source where the errors occur:

' Error #5: whole line is blue underlined
<%= addEmailToList.ToolTip %>

' Error #6: ArrayList is blue underlined
Private _emails As New ArrayList()

' Error #7: Exception is blue underlined
Catch ex As Exception

' Error #8: System.EventArgs is blue underlined
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load

' Error #9: System.EventArgs is blue underlined
Protected Sub sendMessage_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles sendMessage.Click

' Error #10: Array is blue underlined
Me.emailSentTo.Text = Array.Join(";", mailToAddresses)

' Error #90: DateTime is blue underlined
If DateTime.TryParse(data, dateValue) Then

Edit: GacUtil results

C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\gacutil -l mscorlib
Microsoft (R) .NET Global Assembly Cache Utility. Version 1.1.4318.0
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.

The Global Assembly Cache contains the following assemblies:

The cache of ngen files contains the following entries:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00370043003900440037004500430036000000
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00370043003900450036003100370035000000

Number of items = 2

"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" -l mscorlib
Microsoft (R) .NET Global Assembly Cache Utility. Version 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.

The Global Assembly Cache contains the following assemblies:

Number of items = 0

Edit: interesting results from ngen:

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ngen display mscorlib /verbose
Microsoft (R) CLR Native Image Generator - Version 2.0.50727.832
Copyright (C) Microsoft Corporation 1998-2002.
All rights reserved.

NGEN Roots:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
 ScenarioDefault
 mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B}
 Hard Dependencies:
 Soft Dependencies:
  mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
 ScenarioNoDependencies
 mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B}
 Hard Dependencies:
 Soft Dependencies:

NGEN Roots that depend on "mscorlib":
[...a bunch of stuff...]
Native Images:
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3}
 Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec
 NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B}
 OS: WinNT
 Processor: x86(Pentium 4) (features: 00008001)
 Runtime: 2.0.50727.832
 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9
 Flags:
 Scenarios:
 Granted set:
 File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll
 Dependencies:
  mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089:
   Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3}
   Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec

[...the exact same mscorlib 2.0.0.0 entry — identical MVID, hash, NGen GUID, and file path — repeats roughly a dozen more times...]

There should only be one mscorlib in the native images, correct? How can I get rid of the others?
Based on your gacutil output (thanks for running that; I think it helps), I would try running a repair on the .NET Framework install and on Visual Studio 2005. I'm not sure that will fix it, but as your gacutil output shows, you have no mscorlib registered in the GAC for 2.0. From my VS2005 Command Prompt, I get:

Microsoft (R) .NET Global Assembly Cache Utility. Version 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.

The Global Assembly Cache contains the following assemblies:
mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86

Number of items = 1

From my VS2003 Command Prompt, I get:

Microsoft (R) .NET Global Assembly Cache Utility. Version 1.1.4322.573
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.

The Global Assembly Cache contains the following assemblies:

The cache of ngen files contains the following entries:
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000
mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00330037004200440036004600430034000000

Number of items = 2
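If the repair doesn't collapse the duplicate native-image entries, it may also be worth regenerating the 2.0 native image cache by hand. This is only a sketch (I haven't tested it against your machine's state); `display` and `update` are standard ngen.exe 2.0 actions, run here from an elevated command prompt:

```shell
:: Run from the v2.0 framework directory (default install path assumed).
cd C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727

:: Re-create any native images whose source assemblies have changed
:: or whose cached images have become invalid.
ngen update

:: Verify: ideally this now lists a single mscorlib native image.
ngen display mscorlib
```

If the stale entries survive an `ngen update`, that would point back toward a damaged Framework install, which the repair should address.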
Why does VS 2005 keep giving me the "'x' is ambiguous in the namespace 'y'" error? I'm not sure what VS setting I've changed or if it's a web.config setting or what, but I keep getting this error in the error list and yet all solutions build fine. Here are some examples: Error 5 'CompilerGlobalScopeAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'. C:\projects\MyProject\Web\Controls\EmailStory.ascx 609 184 C:\...\Web\ Error 6 'ArrayList' is ambiguous in the namespace 'System.Collections'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 13 28 C:\...\Web\ Error 7 'Exception' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 37 21 C:\...\Web\ Error 8 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 47 64 C:\...\Web\ Error 9 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 140 72 C:\...\Web\ Error 10 'Array' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 147 35 C:\...\Web\ [...etc...] Error 90 'DateTime' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\App_Code\XsltHelperFunctions.vb 13 8 C:\...\Web\ As you can imagine, it's really annoying since there are blue squiggly underlines everywhere in the code, and filtering out relevant errors in the Error List pane is near impossible. I've checked the default ASP.Net web.config and machine.config but nothing seemed to stand out there. 
Edit: Here's some of the source where the errors are occurring: 'Error #5: whole line is blue underlined' <%= addEmailToList.ToolTip %> 'Error #6: ArrayList is blue underlined' Private _emails As New ArrayList() 'Error #7: Exception is blue underlined' Catch ex As Exception 'Error #8: System.EventArgs is blue underlined' Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load 'Error #9: System.EventArgs is blue underlined' Protected Sub sendMessage_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles sendMessage.Click 'Error #10: Array is blue underlined' Me.emailSentTo.Text = Array.Join(";", mailToAddresses) 'Error #90: DateTime is blue underlined' If DateTime.TryParse(data, dateValue) Then Edit: GacUtil results C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\gacutil -l mscorlib Microsoft (R).NET Global Assembly Cache Utility. Version 1.1.4318.0 Copyright (C) Microsoft Corporation 1998-2002. All rights reserved. The Global Assembly Cache contains the following assemblies: The cache of ngen files contains the following entries: mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c5619 34e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d003700430039004 40037004500430036000000 mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c5619 34e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00370043003 900450036003100370035000000 Number of items = 2 "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" -l mscorlib Microsoft (R).NET Global Assembly Cache Utility. Version 2.0.50727.42 Copyright (c) Microsoft Corporation. All rights reserved. The Global Assembly Cache contains the following assemblies: Number of items = 0 Edit: interesting results from ngen: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ngen display mscorlib /verbose Microsoft (R) CLR Native Image Generator - Version 2.0.50727.832 Copyright (C) Microsoft Corporation 1998-2002. 
All rights reserved. NGEN Roots: mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000 ScenarioDefault mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B} Hard Dependencies: Soft Dependencies: mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 ScenarioNoDependencies mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B} Hard Dependencies: Soft Dependencies: NGEN Roots that depend on "mscorlib": [...a bunch of stuff...] 
Native Images: mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} 
Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: 
Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, 
PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec There should only be one mscorlib in the native images, correct? How can I get rid of the others?
TITLE: Why does VS 2005 keep giving me the "'x' is ambiguous in the namespace 'y'" error? QUESTION: I'm not sure what VS setting I've changed or if it's a web.config setting or what, but I keep getting this error in the error list and yet all solutions build fine. Here are some examples: Error 5 'CompilerGlobalScopeAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'. C:\projects\MyProject\Web\Controls\EmailStory.ascx 609 184 C:\...\Web\ Error 6 'ArrayList' is ambiguous in the namespace 'System.Collections'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 13 28 C:\...\Web\ Error 7 'Exception' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 37 21 C:\...\Web\ Error 8 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 47 64 C:\...\Web\ Error 9 'EventArgs' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 140 72 C:\...\Web\ Error 10 'Array' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\Controls\EmailStory.ascx.vb 147 35 C:\...\Web\ [...etc...] Error 90 'DateTime' is ambiguous in the namespace 'System'. C:\projects\MyProject\Web\App_Code\XsltHelperFunctions.vb 13 8 C:\...\Web\ As you can imagine, it's really annoying since there are blue squiggly underlines everywhere in the code, and filtering out relevant errors in the Error List pane is near impossible. I've checked the default ASP.Net web.config and machine.config but nothing seemed to stand out there. 
Edit: Here's some of the source where the errors are occurring: 'Error #5: whole line is blue underlined' <%= addEmailToList.ToolTip %> 'Error #6: ArrayList is blue underlined' Private _emails As New ArrayList() 'Error #7: Exception is blue underlined' Catch ex As Exception 'Error #8: System.EventArgs is blue underlined' Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load 'Error #9: System.EventArgs is blue underlined' Protected Sub sendMessage_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles sendMessage.Click 'Error #10: Array is blue underlined' Me.emailSentTo.Text = Array.Join(";", mailToAddresses) 'Error #90: DateTime is blue underlined' If DateTime.TryParse(data, dateValue) Then Edit: GacUtil results C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\gacutil -l mscorlib Microsoft (R).NET Global Assembly Cache Utility. Version 1.1.4318.0 Copyright (C) Microsoft Corporation 1998-2002. All rights reserved. The Global Assembly Cache contains the following assemblies: The cache of ngen files contains the following entries: mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c5619 34e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d003700430039004 40037004500430036000000 mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c5619 34e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00370043003 900450036003100370035000000 Number of items = 2 "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" -l mscorlib Microsoft (R).NET Global Assembly Cache Utility. Version 2.0.50727.42 Copyright (c) Microsoft Corporation. All rights reserved. The Global Assembly Cache contains the following assemblies: Number of items = 0 Edit: interesting results from ngen: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ngen display mscorlib /verbose Microsoft (R) CLR Native Image Generator - Version 2.0.50727.832 Copyright (C) Microsoft Corporation 1998-2002. 
All rights reserved. NGEN Roots: mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000 ScenarioDefault mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B} Hard Dependencies: Soft Dependencies: mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 ScenarioNoDependencies mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 DisplayName = mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Native image = {7681CE0F-F0E7-F03A-2B56-96345589D82B} Hard Dependencies: Soft Dependencies: NGEN Roots that depend on "mscorlib": [...a bunch of stuff...] 
Native Images: mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source MVID: {D34102CF-2ABF-4004-8B42-2859D8FF27F3} Source HASH: bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec NGen GUID sign: {7681CE0F-F0E7-F03A-2B56-96345589D82B} OS: WinNT Processor: x86(Pentium 4) (features: 00008001) Runtime: 2.0.50727.832 mscorwks.dll: TimeStamp=461F2E2A, CheckSum=00566DC9 Flags: Scenarios: Granted set: File: C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\0fce8176e7f03af02b5696345589d82b\mscorlib.ni.dll Dependencies: mscorlib, Version=2.0.0.0, PublicKeyToken=b77a5c561934e089: Guid:{D34102CF-2ABF-4004-8B42-2859D8FF27F3} Sign:bbf5cfc19bea4e13889e39eb1fb72479a45ad0ec [this identical mscorlib entry is repeated roughly ten more times, verbatim] There should only be one mscorlib in the native images, correct? How can I get rid of the others? ANSWER: Based on the results of your gacutil output (thanks for doing that; I think it helps), I would say you need to try running a repair on the .NET Framework install and Visual Studio 2005. I'm not sure whether that will fix it, but as you can see from the output of gacutil, you have none for 2.0. From my VS2005 Command Prompt, I get: Microsoft (R) .NET Global Assembly Cache Utility. 
Version 2.0.50727.42 Copyright (c) Microsoft Corporation. All rights reserved. The Global Assembly Cache contains the following assemblies: mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86 Number of items = 1 From my VS2003 Command Prompt, I get: Microsoft (R) .NET Global Assembly Cache Utility. Version 1.1.4322.573 Copyright (C) Microsoft Corporation 1998-2002. All rights reserved. The Global Assembly Cache contains the following assemblies: The cache of ngen files contains the following entries: mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d003800460053002d00330037004200430043003300430035000000 mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, Custom=5a00410050002d004e0035002e0031002d0038004600440053002d00330037004200440036004600430034000000 Number of items = 2
[ ".net", "visual-studio", "visual-studio-2005", "namespaces" ]
2
1
2,720
8
0
2008-08-18T16:18:31.430000
2008-09-18T19:43:00.240000
14,872
39,514
[ADO.NET error]: CREATE DATABASE permission denied in database 'master'. An attempt to attach an auto-named database for file HelloWorld.mdf failed
CREATE DATABASE permission denied in database 'master'. An attempt to attach an auto-named database for file C:\Documents and Settings\..\App_Data\HelloWorld.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share. I've found these links: http://blog.benhall.me.uk/2008/03/sql-server-and-vista-create-database.html http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=702726&SiteID=1
Generally the user account running the SQL Server service will not have access to your personal user folders; that is why you're getting the error. You either need to change the credentials used for the service, or move the database to another folder, which did the trick in your case.
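If you take the move-the-database route, the ADO.NET side is just a connection-string change. Below is a sketch of what the web.config fragment might look like afterwards; the connection name "HelloWorldDb" and the folder C:\Databases are placeholders for illustration (any location the SQL Server service account can read will do), not paths from the original question:

```xml
<!-- web.config fragment; C:\Databases stands in for a folder that the
     SQL Server service account has permission to read -->
<connectionStrings>
  <add name="HelloWorldDb"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Databases\HelloWorld.mdf;Integrated Security=True;User Instance=False"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```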
[ "sql-server", "database", "ado.net", "sql-server-express" ]
6
4
13,804
7
0
2008-08-18T16:33:46.890000
2008-09-02T13:29:56.573000
14,873
14,972
How do I display database query statistics on Wordpress site?
I've noticed that a few Wordpress blogs have query statistics present in their footer that simply state the number of queries and the total time required to process them for the particular page, reading something like: 23 queries. 0.448 seconds I was wondering how this is accomplished. Is it through the use of a particular Wordpress plug-in or perhaps from using some particular php function in the page's code?
Try adding this to the bottom of the footer in your template: <?php echo $wpdb->num_queries; ?>
[ "mysql", "performance", "wordpress", "layout", "statistics" ]
15
18
6,577
3
0
2008-08-18T16:33:58.870000
2008-08-18T17:40:51.717000
14,874
66,556
Drawing a view hierarchy into a specific context in Cocoa
For part of my application I have a need to create an image of a certain view and all of its subviews. To do this I'm creating a context that wraps a bitmap the same size as the view, but I'm unsure how to draw the view hierarchy into it. I can draw a single view just by setting the context and explicitly calling drawRect, but this does not deal with all of the subviews. I can't see anything in the NSView interface that could help with this so I suspect the solution may lie at a higher level.
I found that writing the drawing code myself was the best way to: deal with potential transparency issues (some of the other options do add a white background to the whole image) performance was much better The code below is not perfect, because it does not deal with scaling issues when going from bounds to frames, but it does take into account the isFlipped state, and works very well for what I used it for. Note that it only draws the subviews (and the subsubviews,... recursively), but getting it to also draw itself is very easy, just add a [self drawRect:[self bounds]] in the implementation of imageWithSubviews. - (void)drawSubviews { BOOL flipped = [self isFlipped]; for ( NSView *subview in [self subviews] ) { // changes the coordinate system so that the local coordinates of the subview (bounds) become the coordinates of the superview (frame) // the transform assumes bounds and frame have the same size, and bounds origin is (0,0) // handling of 'isFlipped' also probably unreliable NSAffineTransform *transform = [NSAffineTransform transform]; if ( flipped ) { [transform translateXBy:subview.frame.origin.x yBy:NSMaxY(subview.frame)]; [transform scaleXBy:+1.0 yBy:-1.0]; } else [transform translateXBy:subview.frame.origin.x yBy:subview.frame.origin.y]; [transform concat]; // recursively draw the subview and sub-subviews [subview drawRect:[subview bounds]]; [subview drawSubviews]; // reset the transform to get back a clean graphic contexts for the rest of the drawing [transform invert]; [transform concat]; } } - (NSImage *)imageWithSubviews { NSImage *image = [[[NSImage alloc] initWithSize:[self bounds].size] autorelease]; [image lockFocus]; // it seems NSImage cannot use flipped coordinates the way NSView does (the method 'setFlipped:' does not seem to help) // Use instead an NSAffineTransform if ( [self isFlipped] ) { NSAffineTransform *transform = [NSAffineTransform transform]; [transform translateXBy:0 yBy:NSMaxY(self.bounds)]; [transform scaleXBy:+1.0 yBy:-1.0]; 
[transform concat]; } [self drawSubviews]; [image unlockFocus]; return image; }
[ "objective-c", "cocoa", "macos" ]
4
2
3,178
4
0
2008-08-18T16:34:01.040000
2008-09-15T20:21:51.760000
14,884
14,899
Find the best combination from a given set of multiple sets
Say you have a shipment. It needs to go from point A to point B, point B to point C and finally point C to point D. You need it to get there in five days for the least amount of money possible. There are three possible shippers for each leg, each with their own different time and cost for each leg: Array ( [leg0] => Array ( [UPS] => Array ( [days] => 1 [cost] => 5000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 5 [cost] => 1000 ) ) [leg1] => Array ( [UPS] => Array ( [days] => 1 [cost] => 3000 ) [FedEx] => Array ( [days] => 2 [cost] => 3000 ) [Conway] => Array ( [days] => 3 [cost] => 1000 ) ) [leg2] => Array ( [UPS] => Array ( [days] => 1 [cost] => 4000 ) [FedEx] => Array ( [days] => 1 [cost] => 3000 ) [Conway] => Array ( [days] => 2 [cost] => 5000 ) ) ) How would you go about finding the best combination programmatically? My best attempt so far (third or fourth algorithm) is: Find the longest shipper for each leg Eliminate the most "expensive" one Find the cheapest shipper for each leg Calculate the total cost & days If days are acceptable, finish, else, goto 1 Quickly mocked-up in PHP (note that the test array below works swimmingly, but if you try it with the test array from above, it does not find the correct combination): $shippers["leg1"] = array( "UPS" => array("days" => 1, "cost" => 4000), "Conway" => array("days" => 3, "cost" => 3200), "FedEx" => array("days" => 8, "cost" => 1000) ); $shippers["leg2"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $shippers["leg3"] = array( "UPS" => array("days" => 1, "cost" => 3500), "Conway" => array("days" => 2, "cost" => 2800), "FedEx" => array("days" => 4, "cost" => 900) ); $times = 0; $totalDays = 9999999; print " Shippers to Choose From: "; print_r($shippers); print " "; while($totalDays > $maxDays && $times < 500){ $totalDays = 0; $times++; $worstShipper = null; $longestShippers = 
null; $cheapestShippers = null; foreach($shippers as $legName => $leg){ //find longest shipment for each leg (in terms of days) unset($longestShippers[$legName]); $longestDays = null; if(count($leg) > 1){ foreach($leg as $shipperName => $shipper){ if(empty($longestDays) || $shipper["days"] > $longestDays){ $longestShippers[$legName]["days"] = $shipper["days"]; $longestShippers[$legName]["cost"] = $shipper["cost"]; $longestShippers[$legName]["name"] = $shipperName; $longestDays = $shipper["days"]; } } } } foreach($longestShippers as $leg => $shipper){ $shipper["totalCost"] = $shipper["days"] * $shipper["cost"]; //print $shipper["totalCost"]. " <?> ". $worstShipper["totalCost"]. ";"; if(empty($worstShipper) || $shipper["totalCost"] > $worstShipper["totalCost"]){ $worstShipper = $shipper; $worstShipperLeg = $leg; } } //print "worst shipper is: shippers[$worstShipperLeg][{$worstShipper['name']}]". $shippers[$worstShipperLeg][$worstShipper["name"]]["days"]; unset($shippers[$worstShipperLeg][$worstShipper["name"]]); print " Next: "; print_r($shippers); print " "; foreach($shippers as $legName => $leg){ //find cheapest shipment for each leg (in terms of cost) unset($cheapestShippers[$legName]); $lowestCost = null; foreach($leg as $shipperName => $shipper){ if(empty($lowestCost) || $shipper["cost"] < $lowestCost){ $cheapestShippers[$legName]["days"] = $shipper["days"]; $cheapestShippers[$legName]["cost"] = $shipper["cost"]; $cheapestShippers[$legName]["name"] = $shipperName; $lowestCost = $shipper["cost"]; } } //recalculate days and see if we are under max days... $totalDays += $cheapestShippers[$legName]['days']; } //print " totalDays: $totalDays "; } print " Chosen Shippers: "; print_r($cheapestShippers); print " "; I think I may have to actually do some sort of thing where I literally make each combination one by one (with a series of loops) and add up the total "score" of each, and find the best one.... 
EDIT: To clarify, this isn't a "homework" assignment (I'm not in school). It is part of my current project at work. The requirements (as always) have been constantly changing. If I were given the current constraints at the time I began working on this problem, I would be using some variant of the A* algorithm (or Dijkstra's or shortest path or simplex or something). But everything has been morphing and changing, and that brings me to where I'm at right now. So I guess that means I need to forget about all the crap I've done to this point and just go with what I know I should go with, which is a path finding algorithm.
You could alter one of the shortest-path algorithms, like Dijkstra's, to weight each path by cost while also keeping track of elapsed time, and stop expanding a path once its time exceeds your threshold. That should find the cheapest route that gets the shipment there under your deadline.
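With only three legs and three shippers per leg, the search space is tiny (3^3 = 27 combinations), so before reaching for Dijkstra's you can simply enumerate every combination, prune the ones that miss the deadline, and keep the cheapest. A sketch in Python rather than the question's PHP, using the leg0-leg2 data from the question (the helper name is mine, not the poster's):

```python
from itertools import product

# Shipper options per leg, taken from the question: name -> (days, cost)
legs = [
    {"UPS": (1, 5000), "FedEx": (2, 3000), "Conway": (5, 1000)},  # leg0
    {"UPS": (1, 3000), "FedEx": (2, 3000), "Conway": (3, 1000)},  # leg1
    {"UPS": (1, 4000), "FedEx": (1, 3000), "Conway": (2, 5000)},  # leg2
]

def cheapest_within(legs, max_days):
    """Enumerate every combination of shippers, drop those that miss
    the deadline, and return (cost, days, route) for the cheapest."""
    best = None
    for combo in product(*(leg.items() for leg in legs)):
        days = sum(option[1][0] for option in combo)
        cost = sum(option[1][1] for option in combo)
        if days <= max_days and (best is None or cost < best[0]):
            best = (cost, days, [name for name, _ in combo])
    return best

cost, days, route = cheapest_within(legs, max_days=5)
# For this data the cheapest 5-day route costs 9000
```

For a long chain of legs this brute force grows exponentially, which is when the time-pruned Dijkstra's (or a dynamic program over legs and days used) becomes worthwhile.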
[ "php", "algorithm", "puzzle", "combinations", "np-complete" ]
10
8
2,386
7
0
2008-08-18T16:39:23.557000
2008-08-18T16:52:36.930000
14,893
15,011
Improving Your Build Process
Or, actually establishing a build process when there isn't much of one in place to begin with. Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever). What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta. Relevant Tools: Visual Build Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.) Tentatively, I've got a Visual Build project that does this: Get source and place in local directory, including necessary DLLs needed for project. Get config files and rename as needed (we're storing them in a special sub directory that isn't part of the actual application, and they are named according to use). Build using Visual Studio Precompile using command line, copying into what will be a "build" directory Copy to destination. Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put into directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm going to only copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps. 
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet. Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. It would remove some of the steps necessary in this build I presume (like web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming. First get your code compiling in one step using an automated build tool (e.g. NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control. Figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point; you might have to pay for it depending on how big the project is. OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc. Hope this helps.
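The "one step" idea above can be sketched as a small driver script that runs each stage in order and stops at the first failure. This is only an illustration; the stage commands below are hypothetical placeholders for the real get/compile/precompile/copy steps:

```python
import subprocess

# Hypothetical stage commands standing in for the real Visual Build steps
# (source get, compile, precompile, copy); swap in your actual tools.
STAGES = [
    ("get source", "echo getting source"),
    ("build",      "echo building solution"),
    ("precompile", "echo precompiling site"),
    ("deploy",     "echo copying to destination"),
]

def run_build(stages=STAGES):
    """Run each stage in order; stop at the first failure so the
    whole build is a single command with a single pass/fail result."""
    for name, cmd in stages:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"build failed at stage: {name}")
            return False
    print("build succeeded")
    return True
```

The point of the single entry point is that "did the build pass?" becomes one yes/no answer, which is exactly what a nightly Scheduled Task or CruiseControl trigger needs.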
[ "build-process", "build-automation", "build" ]
27
18
7,169
8
0
2008-08-18T16:48:50.293000
2008-08-18T18:07:27.763000
14,909
14,921
Tree-Based (vs. HTML-Based) Web Framework?
Anyone who writes client-side JavaScript is familiar with the DOM - the tree structure that your browser references in memory, generated from the HTML it got from the server. JavaScript can add, remove and modify nodes on the DOM tree to make changes to the page. I find it very nice to work with (browser bugs aside), and very different from the way my server-side code has to generate the page in the first place. My question is: what server-side frameworks/languages build a page by treating it as a DOM tree from the beginning - inserting nodes instead of echoing strings? I think it would be very helpful if the client-side and server-side code both saw the page the same way. You could certainly hack something like this together in any web server language, but a framework dedicated to creating a page this way could make some very nice optimizations. Open source, being widely deployed and having been around a while would all be pluses.
You're describing Rhino on Rails, which is not out but will be soon. Similarly, Aptana Jaxer, however RnR will include an actual framework (Rails) whereas Jaxer is just the server technology.
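As an illustration of the idea in the question (in Python rather than any particular web framework), building a page by inserting nodes into a tree and only serializing at the end looks roughly like this:

```python
import xml.etree.ElementTree as ET

# Build the page as a tree of nodes instead of echoing strings.
html = ET.Element("html")
body = ET.SubElement(html, "body")
heading = ET.SubElement(body, "h1")
heading.text = "Products"
ul = ET.SubElement(body, "ul")
for name in ["Kitchen", "Cookware", "Cooksets"]:
    li = ET.SubElement(ul, "li")
    li.text = name

# Server-side code can keep manipulating the tree (much like client-side
# DOM scripting) right up until it is serialized for the response.
page = ET.tostring(html, encoding="unicode")
```

Because the page exists as a tree until the last moment, server-side code can add, remove and reorder nodes the same way client-side DOM code does, which is the symmetry the question is after.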
[ "javascript", "html", "dom" ]
5
2
412
4
0
2008-08-18T16:58:12.393000
2008-08-18T17:04:54.377000
14,911
114,334
PDF generation from XHTML in a LAMP environment
Can anyone recommend a good server-side PDF generation tool that would work in a Linux environment. I want it as easy as possible: pass it an XHTML file (with images) and have it generate a PDF from the rendered source. I don't have a massive budget, but anything under $1000 should be alright. Andrew
I used HTMLDoc about 8 years ago and it did a good job of turning HTML tables with some basic formatting into a decent PDF report. There seems to be an open source version as well.
[ "linux", "pdf", "pdf-generation" ]
5
1
2,630
8
0
2008-08-18T16:58:50.007000
2008-09-22T11:30:46.010000
14,923
36,833
Can I generate ASP.NET MVC routes from a Sitemap?
I'm thinking of learning the ASP.NET MVC framework for an upcoming project. Can I use the advanced routing to create long URLs based on the sitemap hierarchy? Example navigation path: Home > Shop > Products > Household > Kitchen > Cookware > Cooksets > Nonstick Typical (I think) MVC URL: http://example.com/products/category/NonstickCooksets Desired URL: http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick Can I do this?
Zack, if I understand right you want unlimited depth of the subcategories. No biggie, since MVC Preview 3 (I think 3 or 4) this has been solved. Just define a route like "{controller}/{action}/{*categoryPath}". For a URL such as: http://example.com/shop/products/household/kitchen/cookware/cooksets/nonstick you should have a ShopController with a Products action:

public class ShopController : Controller
{
    // ...
    public ActionResult Products(string categoryPath)
    {
        // the categoryPath value would be
        // "household/kitchen/cookware/cooksets/nonstick". Process it
        // (for ex. split it) and then decide what you do..
        return View();
    }
}
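The catch-all parameter arrives as one string, so "process it" mostly means splitting on the separator. A quick sketch of that step (Python here, purely for illustration; the helper name is made up):

```python
def parse_category_path(category_path: str) -> list[str]:
    """Split a catch-all route value such as
    'household/kitchen/cookware/cooksets/nonstick' into category
    segments, ignoring empty pieces from stray or trailing slashes."""
    return [seg for seg in category_path.split("/") if seg]

segments = parse_category_path("household/kitchen/cookware/cooksets/nonstick")
# segments[-1] is the leaf category the page would render;
# the full list gives you the breadcrumb trail for free.
```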
[ "asp.net", "asp.net-mvc", "url", "routes", "sitemap" ]
4
10
2,054
2
0
2008-08-18T17:05:40.400000
2008-08-31T12:38:24.453000
14,934
14,946
Parameter Binding: What happens under the hood?
.NET, Java and other high-level database APIs in various languages often provide techniques known as prepared statements and parameter binding, as opposed to sending plain text commands to the database server. What I would like to know is what happens when you execute a statement like this:

SqlCommand cmd = new SqlCommand("GetMemberByID");
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter param = new SqlParameter("@ID", memberID);
param.DbType = DbType.Int32;
cmd.Parameters.Add(param);

I know this is a best practice. SQL injection attacks are minimized this way. But what exactly happens under the hood when you execute these statements? Is the end result still a SQL safe string? If not, what is the end result? And is this enough to prevent SQL injection attacks?
The MySQL manual page on prepared statements provides lots of information (which should apply to any other RDBMS). Basically, your statement is parsed and processed ahead of time, and the parameters are sent separately instead of being handled along with the SQL code. This eliminates SQL-injection attacks because the SQL is parsed before the parameters are even set.
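The same separation of statement and data can be demonstrated with any parameterized API; here is a small sketch using Python's sqlite3 module (not the SqlCommand API from the question, but the same mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER, name TEXT)")
conn.execute("INSERT INTO members VALUES (1, 'alice')")

# The statement text is fixed; the value travels separately as a bound
# parameter, so it is never parsed as SQL.
malicious = "1 OR 1=1"
rows = conn.execute(
    "SELECT name FROM members WHERE id = ?", (malicious,)
).fetchall()
# The injection attempt is treated as one literal value and matches nothing.
```

Had the value been concatenated into the SQL string instead, "1 OR 1=1" would have changed the WHERE clause itself; bound as a parameter, it is just an odd-looking value that matches no id.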
[ "c#", ".net", "sql", "database", "api" ]
8
7
3,189
3
0
2008-08-18T17:11:15.083000
2008-08-18T17:18:50.310000
14,943
14,960
How to Disable Alt + F4 closing form?
What is the best way to disable Alt + F4 in a C# WinForms app to prevent the user from closing the form? I am using a form as a popup dialog to display a progress bar, and I do not want the user to be able to close it.
This does the job:

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    e.Cancel = true;
}

Edit: In response to pix0rs concern - yes, you are correct that you will not be able to programmatically close the app. However, you can simply remove the event handler for the FormClosing event before closing the form:

this.FormClosing -= new System.Windows.Forms.FormClosingEventHandler(this.Form1_FormClosing);
this.Close();
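The pattern here — a close notification that handlers can veto, then unhooking the veto handler when you want to close programmatically — can be sketched outside WinForms like this (illustrative Python, not any real GUI API; all names are made up):

```python
class CloseEvent:
    def __init__(self):
        self.cancel = False  # a handler sets this to veto the close


class Dialog:
    def __init__(self):
        self.closing_handlers = []
        self.closed = False

    def close(self):
        event = CloseEvent()
        for handler in list(self.closing_handlers):
            handler(event)
        if not event.cancel:   # only close if no handler vetoed
            self.closed = True


def block_close(event):
    event.cancel = True        # the equivalent of e.Cancel = true


dialog = Dialog()
dialog.closing_handlers.append(block_close)
dialog.close()                           # vetoed: dialog stays open

# To close programmatically, unhook the handler first (as in the answer).
dialog.closing_handlers.remove(block_close)
dialog.close()                           # now it actually closes
```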
[ "c#", ".net", "winforms" ]
83
110
84,834
11
0
2008-08-18T17:16:52.577000
2008-08-18T17:27:38.343000
14,963
15,098
.NET 3.5 Service Pack 1 causes 404 pages on ASP.NET Web App
I have a problem with IIS 6.0 ceasing to work for an ASP.NET application after installing Service Pack 1 for .NET 3.5. I have 2 identical virtual dedicated servers. Installing SP1 on the first had no adverse effect. Installing it on the second caused ASP.NET pages to start returning 404 page not found. Static .html pages work okay on both servers. Has anybody else experienced this?
This is a broad problem, so let's start by asking some troubleshooting questions: Based on your description, the ASP.NET runtime is not catching your request and processing the .aspx files. You may need to register the ASP.NET pipeline with IIS again using aspnet_regiis -i. Have you made sure that the app_offline.htm file has been removed from the directory of the application? I have had this happen before after an update. Have you set up Fiddler, for instance, to follow the request and see what exactly is being requested? Make sure ASP.NET is enabled in the IIS Administration Console under "Web Service Extensions," and that everything is set to Allowed for your different versions of the framework. Well, let's start with those and hopefully we can guide you to the problem.
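Since static .html works but .aspx 404s, one quick check is to hit both kinds of URL on each server and compare status codes. A tiny probe sketch (Python, purely illustrative; the URLs you would pass in are your own):

```python
import urllib.request
import urllib.error

def status_of(url):
    """Return the HTTP status code for url, so you can compare how a
    server treats a static .html request vs an .aspx request."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code   # 404 etc. arrive as HTTPError

# e.g. compare status_of("http://server2/test.html")
#          vs status_of("http://server2/test.aspx")
```

If .html returns 200 and .aspx returns 404 on only one server, that points at the ASP.NET handler mapping on that box rather than at the application itself.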
[ "iis", ".net-3.5", "servicepacks" ]
5
4
3,184
6
0
2008-08-18T17:30:18.553000
2008-08-18T19:14:46.223000
15,007
15,037
Executing JavaScript to Render HTML for Server-Side Caching
There are lots of widgets provided by sites that are effectively bits of JavaScript that generate HTML through DOM manipulation or document.write(). Rather than slow the browser down even more with additional requests and trust yet another provider to be fast, reliable and not change the widget output, I want to execute* the JavaScript to generate the rendered HTML, and then save that HTML source. Things I've looked into that seem unworkable or way too difficult: The Links Browser (not lynx!); Headless use of Xvfb plus Firefox plus Greasemonkey (yikes); The all-Java browser toolkit Cobra (the best bet!). Any ideas? * Obviously you can't really execute the JavaScript completely, as it doesn't necessarily have an exit path, but you get the idea.
Wikipedia's "Server-side JavaScript" article lists numerous implementations, many of which are based on Mozilla's Rhino JavaScript engine (implemented in Java), or its cousin SpiderMonkey (the same engine as found in Firefox and other Gecko-based browsers). In particular, something simple like mod_js for Apache may suit your needs.
[ "javascript", "html", "rendering", "greasemonkey" ]
15
4
9,376
3
0
2008-08-18T18:01:33.623000
2008-08-18T18:26:10.300000
15,015
15,069
Web Application Testing for .Net (WatiN Test Recorder)
I've been using WatiN as a testing tool for my current project. Besides the minor bugs with the Test Recorder, I've been able to use it and automate a lot of my tests in conjunction with NUnit. Anyone else out there with experience with different tools they might suggest?
I have used: WatiN AutomatedQA TestComplete All of them have had their purpose and are very good tools.
[ ".net", "automated-tests", "watin" ]
8
1
7,210
6
0
2008-08-18T18:12:09.187000
2008-08-18T18:43:14.943000
15,024
15,035
Tools to help a small shop score higher on the "Joel Test"
Questions #1 through #4 on the Joel Test in my opinion are all about the development tools being used and the support system in place for developers: Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? I'm just curious what free/cheap (but good) tools exist for the small development shops that don't have large bank accounts to use to achieve a positive answer on these questions. For source control I know Subversion is a great solution, and if you are a one-man shop you could even use SourceGear's Vault. I use NAnt for my larger projects, but have yet to set up a script to build my installers as well as running the obfuscation tools all as a single step. Any other suggestions? If you can answer yes to the building in a single step, I think creating daily builds would be easy, but what tools would you recommend for automating those daily builds? For a one or two man team, it's already been discussed on SO that you can use FogBugz On Demand, but what other bug tracking solutions exist for small teams?
source control: Subversion, Mercurial or Git
build automation: NAnt, MSBuild, Rake, Maven
continuous integration: CruiseControl.NET, Continuum or Jenkins
issue tracking: Trac, Bugzilla, Gemini (if it must be .NET and free-ish)
Don't forget automated testing with NUnit, Fit, and WatiN.
[ "version-control", "bug-tracking", "dailybuilds" ]
17
19
1,708
14
0
2008-08-18T18:17:49.787000
2008-08-18T18:25:23.437000
15,034
15,055
Visual Studio Error: The "GenerateResource" task failed unexpectedly
When building a VS 2008 solution with 19 projects I sometimes get: The "GenerateResource" task failed unexpectedly. System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.IO.MemoryStream.set_Capacity(Int32 value) at System.IO.MemoryStream.EnsureCapacity(Int32 value) at System.IO.MemoryStream.WriteByte(Byte value) at System.IO.BinaryWriter.Write(Byte value) at System.Resources.ResourceWriter.Write7BitEncodedInt(BinaryWriter store, Int32 value) at System.Resources.ResourceWriter.Generate() at System.Resources.ResourceWriter.Dispose(Boolean disposing) at System.Resources.ResourceWriter.Close() at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(IResourceWriter writer) at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(String filename) at Microsoft.Build.Tasks.ProcessResourceFiles.ProcessFile(String inFile, String outFile) at Microsoft.Build.Tasks.ProcessResourceFiles.Run(TaskLoggingHelper log, ITaskItem[] assemblyFilesList, ArrayList inputs, ArrayList outputs, Boolean sourcePath, String language, String namespacename, String resourcesNamespace, String filename, String classname, Boolean publicClass) at Microsoft.Build.Tasks.GenerateResource.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) C:\Windows\Microsoft.NET\Framework\v3.5 Usually happens after VS has been running for about 4 hours; the only way to get VS to compile properly is to close out VS, and start it again. I'm on a machine with 3GB Ram. TaskManager shows the devenv.exe working set to be 578060K, and the entire memory allocation for the machine is 1.78GB. It should have more than enough ram to generate the resources.
I used to hit this now and again with larger solutions. My tactic was to break the larger solution down into smaller solutions. You could also try: http://stevenharman.net/blog/archive/2008/04/29/hacking-visual-studio-to-use-more-than-2gigabytes-of-memory.aspx
Visual Studio Error: The "GenerateResource" task failed unexpectedly When building a VS 2008 solution with 19 projects I sometimes get: The "GenerateResource" task failed unexpectedly. System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.IO.MemoryStream.set_Capacity(Int32 value) at System.IO.MemoryStream.EnsureCapacity(Int32 value) at System.IO.MemoryStream.WriteByte(Byte value) at System.IO.BinaryWriter.Write(Byte value) at System.Resources.ResourceWriter.Write7BitEncodedInt(BinaryWriter store, Int32 value) at System.Resources.ResourceWriter.Generate() at System.Resources.ResourceWriter.Dispose(Boolean disposing) at System.Resources.ResourceWriter.Close() at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(IResourceWriter writer) at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(String filename) at Microsoft.Build.Tasks.ProcessResourceFiles.ProcessFile(String inFile, String outFile) at Microsoft.Build.Tasks.ProcessResourceFiles.Run(TaskLoggingHelper log, ITaskItem[] assemblyFilesList, ArrayList inputs, ArrayList outputs, Boolean sourcePath, String language, String namespacename, String resourcesNamespace, String filename, String classname, Boolean publicClass) at Microsoft.Build.Tasks.GenerateResource.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) C:\Windows\Microsoft.NET\Framework\v3.5 Usually happens after VS has been running for about 4 hours; the only way to get VS to compile properly is to close out VS, and start it again. I'm on a machine with 3GB Ram. TaskManager shows the devenv.exe working set to be 578060K, and the entire memory allocation for the machine is 1.78GB. It should have more than enough ram to generate the resources.
TITLE: Visual Studio Error: The "GenerateResource" task failed unexpectedly QUESTION: When building a VS 2008 solution with 19 projects I sometimes get: The "GenerateResource" task failed unexpectedly. System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.IO.MemoryStream.set_Capacity(Int32 value) at System.IO.MemoryStream.EnsureCapacity(Int32 value) at System.IO.MemoryStream.WriteByte(Byte value) at System.IO.BinaryWriter.Write(Byte value) at System.Resources.ResourceWriter.Write7BitEncodedInt(BinaryWriter store, Int32 value) at System.Resources.ResourceWriter.Generate() at System.Resources.ResourceWriter.Dispose(Boolean disposing) at System.Resources.ResourceWriter.Close() at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(IResourceWriter writer) at Microsoft.Build.Tasks.ProcessResourceFiles.WriteResources(String filename) at Microsoft.Build.Tasks.ProcessResourceFiles.ProcessFile(String inFile, String outFile) at Microsoft.Build.Tasks.ProcessResourceFiles.Run(TaskLoggingHelper log, ITaskItem[] assemblyFilesList, ArrayList inputs, ArrayList outputs, Boolean sourcePath, String language, String namespacename, String resourcesNamespace, String filename, String classname, Boolean publicClass) at Microsoft.Build.Tasks.GenerateResource.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) C:\Windows\Microsoft.NET\Framework\v3.5 Usually happens after VS has been running for about 4 hours; the only way to get VS to compile properly is to close out VS, and start it again. I'm on a machine with 3GB Ram. TaskManager shows the devenv.exe working set to be 578060K, and the entire memory allocation for the machine is 1.78GB. It should have more than enough ram to generate the resources. ANSWER: I used to hit this now and again with larger solutions. 
My tactic was to break the larger solution down into smaller solutions. You could also try: http://stevenharman.net/blog/archive/2008/04/29/hacking-visual-studio-to-use-more-than-2gigabytes-of-memory.aspx
[ "visual-studio", "out-of-memory" ]
7
0
18,908
8
0
2008-08-18T18:25:13.810000
2008-08-18T18:36:45.270000
15,040
15,683
How do you set your LAMP testing server?
I am using XAMPP on Windows, but I would like to use something closer to my server setup. Federico Cargnelutti's tutorial explains how to set up a LAMP VMware appliance; it is a great introduction to VMware appliances, but one of the commands was not working and it doesn't describe how to change the keyboard layout and the timezone. PS: the commands are easy to find but I don't want to look for them each time I reinstall the server. I am using this question as a reminder.
This is my install script; I use it on Debian servers, but it will work in Ubuntu (Ubuntu is built on Debian) apt-get -yq update apt-get -yq upgrade apt-get -yq install sudo apt-get -yq install gcc apt-get -yq install g++ apt-get -yq install make apt-get -yq install apache2 apt-get -yq install php5 apt-get -yq install php5-curl apt-get -yq install php5-mysql apt-get -yq install php5-gd apt-get -yq install mysql-common apt-get -yq install mysql-client apt-get -yq install mysql-server apt-get -yq install phpmyadmin apt-get -yq install samba echo '[global] workgroup = workgroup server string = %h server dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes;invalid users = root unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\sUNIX\spassword:* %n\n *Retype\snew\sUNIX\spassword:* %n\n *password\supdated\ssuccessfully*. 
socket options = TCP_NODELAY [homes] comment = Home Directories browseable = no writable = no create mask = 0700 directory mask = 0700 valid users = %S [www] comment = WWW writable = yes locking = no path = /var/www public = yes' > /etc/samba/smb.conf (echo SAMBAPASSWORD; echo SAMBAPASSWORD) | smbpasswd -sa root echo 'NameVirtualHost * ServerAdmin webmaster@localhost DocumentRoot /var/www/ Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On ' > /etc/apache2/sites-enabled/000-default /etc/init.d/apache2 stop /etc/init.d/samba stop /etc/init.d/apache2 start /etc/init.d/samba start edit: add this to set your MySQL password /etc/init.d/mysql stop echo "UPDATE mysql.user SET Password=PASSWORD('MySQLPassword') WHERE User='root'; FLUSH PRIVILEGES;" > /root/MySQLPassword mysqld_safe --init-file=/root/MySQLPassword & sleep 1 /etc/init.d/mysql stop sleep 1 /etc/init.d/mysql start end edit This is a bit specialised but you get the idea; if you save this to a file ('install' for example) all you have to do is: chmod +x install && ./install Some of my apt-get commands are not necessary, because apt will automatically get the dependencies, but I prefer to be specific for my installs.
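As an aside on the script above: the long single-quoted echo used to write smb.conf is fragile to edit, and a quoted heredoc is a common alternative. A minimal sketch (the file name smb.conf.example is illustrative, and only the first two settings from the script are repeated here):

```shell
# quoted heredoc (<<'EOF') writes the block verbatim, with no variable
# expansion -- handy for config files containing % placeholders like %h
cat > smb.conf.example <<'EOF'
[global]
   workgroup = workgroup
   server string = %h server
EOF
```

In the real script you would target /etc/samba/smb.conf and include the full [global], [homes], and [www] sections shown above.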
How do you set your LAMP testing server? I am using XAMPP on Windows, but I would like to use something closer to my server setup. Federico Cargnelutti's tutorial explains how to set up a LAMP VMware appliance; it is a great introduction to VMware appliances, but one of the commands was not working and it doesn't describe how to change the keyboard layout and the timezone. PS: the commands are easy to find but I don't want to look for them each time I reinstall the server. I am using this question as a reminder.
TITLE: How do you set your LAMP testing server? QUESTION: I am using XAMPP on Windows, but I would like to use something closer to my server setup. Federico Cargnelutti's tutorial explains how to set up a LAMP VMware appliance; it is a great introduction to VMware appliances, but one of the commands was not working and it doesn't describe how to change the keyboard layout and the timezone. PS: the commands are easy to find but I don't want to look for them each time I reinstall the server. I am using this question as a reminder. ANSWER: This is my install script; I use it on Debian servers, but it will work in Ubuntu (Ubuntu is built on Debian) apt-get -yq update apt-get -yq upgrade apt-get -yq install sudo apt-get -yq install gcc apt-get -yq install g++ apt-get -yq install make apt-get -yq install apache2 apt-get -yq install php5 apt-get -yq install php5-curl apt-get -yq install php5-mysql apt-get -yq install php5-gd apt-get -yq install mysql-common apt-get -yq install mysql-client apt-get -yq install mysql-server apt-get -yq install phpmyadmin apt-get -yq install samba echo '[global] workgroup = workgroup server string = %h server dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes;invalid users = root unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\sUNIX\spassword:* %n\n *Retype\snew\sUNIX\spassword:* %n\n *password\supdated\ssuccessfully*. 
socket options = TCP_NODELAY [homes] comment = Home Directories browseable = no writable = no create mask = 0700 directory mask = 0700 valid users = %S [www] comment = WWW writable = yes locking = no path = /var/www public = yes' > /etc/samba/smb.conf (echo SAMBAPASSWORD; echo SAMBAPASSWORD) | smbpasswd -sa root echo 'NameVirtualHost * ServerAdmin webmaster@localhost DocumentRoot /var/www/ Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On ' > /etc/apache2/sites-enabled/000-default /etc/init.d/apache2 stop /etc/init.d/samba stop /etc/init.d/apache2 start /etc/init.d/samba start edit: add this to set your MySQL password /etc/init.d/mysql stop echo "UPDATE mysql.user SET Password=PASSWORD('MySQLPassword') WHERE User='root'; FLUSH PRIVILEGES;" > /root/MySQLPassword mysqld_safe --init-file=/root/MySQLPassword & sleep 1 /etc/init.d/mysql stop sleep 1 /etc/init.d/mysql start end edit This is a bit specialised but you get the idea; if you save this to a file ('install' for example) all you have to do is: chmod +x install && ./install Some of my apt-get commands are not necessary, because apt will automatically get the dependencies, but I prefer to be specific for my installs.
[ "ubuntu", "vmware", "lamp" ]
6
5
2,188
5
0
2008-08-18T18:26:53.603000
2008-08-19T04:18:02.413000
15,047
15,207
Looking for example of Command pattern for UI
I'm working on a WinForm.Net application with the basic UI that includes toolbar buttons, menu items and keystrokes that all initiate the same underlying code. Right now the event handlers for each of these call a common method to perform the function. From what I've read this type of action could be handled by the Command design pattern with the additional benefit of automatically enabling/disabling or checking/unchecking the UI elements. I've been searching the net for a good example project, but really haven't found one. Does anyone have a good example that can be shared?
Let's first make sure we know what the Command pattern is: Command pattern encapsulates a request as an object and gives it a known public interface. Command Pattern ensures that every object receives its own commands and provides a decoupling between sender and receiver. A sender is an object that invokes an operation, and a receiver is an object that receives the request and acts on it. Here's an example for you. There are many ways you can do this, but I am going to take an interface-based approach to make the code more testable for you. I am not sure what language you prefer, but I am writing this in C#. First, create an interface that describes a Command. public interface ICommand { void Execute(); } Second, create command objects that will implement the command interface. public class CutCommand: ICommand { public void Execute() { // Put code you like to execute when the CutCommand.Execute method is called. } } Third, we need to set up our invoker or sender object. public class TextOperations { public void Invoke(ICommand command) { command.Execute(); } } Fourth, create the client object that will use the invoker/sender object. public class Client { static void Main() { TextOperations textOperations = new TextOperations(); textOperations.Invoke(new CutCommand()); } } I hope you can take this example and put it into use for the application you are working on. If you would like more clarification, just let me know.
Looking for example of Command pattern for UI I'm working on a WinForm.Net application with the basic UI that includes toolbar buttons, menu items and keystrokes that all initiate the same underlying code. Right now the event handlers for each of these call a common method to perform the function. From what I've read this type of action could be handled by the Command design pattern with the additional benefit of automatically enabling/disabling or checking/unchecking the UI elements. I've been searching the net for a good example project, but really haven't found one. Does anyone have a good example that can be shared?
TITLE: Looking for example of Command pattern for UI QUESTION: I'm working on a WinForm.Net application with the basic UI that includes toolbar buttons, menu items and keystrokes that all initiate the same underlying code. Right now the event handlers for each of these call a common method to perform the function. From what I've read this type of action could be handled by the Command design pattern with the additional benefit of automatically enabling/disabling or checking/unchecking the UI elements. I've been searching the net for a good example project, but really haven't found one. Does anyone have a good example that can be shared? ANSWER: Let's first make sure we know what the Command pattern is: Command pattern encapsulates a request as an object and gives it a known public interface. Command Pattern ensures that every object receives its own commands and provides a decoupling between sender and receiver. A sender is an object that invokes an operation, and a receiver is an object that receives the request and acts on it. Here's an example for you. There are many ways you can do this, but I am going to take an interface-based approach to make the code more testable for you. I am not sure what language you prefer, but I am writing this in C#. First, create an interface that describes a Command. public interface ICommand { void Execute(); } Second, create command objects that will implement the command interface. public class CutCommand: ICommand { public void Execute() { // Put code you like to execute when the CutCommand.Execute method is called. } } Third, we need to set up our invoker or sender object. public class TextOperations { public void Invoke(ICommand command) { command.Execute(); } } Fourth, create the client object that will use the invoker/sender object. 
public class Client { static void Main() { TextOperations textOperations = new TextOperations(); textOperations.Invoke(new CutCommand()); } } I hope you can take this example and put it into use for the application you are working on. If you would like more clarification, just let me know.
[ ".net", "design-patterns", "command" ]
18
20
9,968
5
0
2008-08-18T18:31:08.730000
2008-08-18T20:51:49.070000
15,062
15,136
How do I do a string replacement in a PowerShell function?
How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string: function CleanUrl($input) { $x = "http://google.com".Replace("http://", "") return $x } $SiteName = CleanUrl($HostHeader) echo $SiteName This fails: function CleanUrl($input) { $x = $input.Replace("http://", "") return $x } Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'. At M:\PowerShell\test.ps1:13 char:21 + $x = $input.Replace( <<<< "http://", "")
The concept here is correct. The problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem. PowerShell does have a replace operator, so you could make your function into function CleanUrl($url) { return $url -replace 'http://' }
How do I do a string replacement in a PowerShell function? How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string: function CleanUrl($input) { $x = "http://google.com".Replace("http://", "") return $x } $SiteName = CleanUrl($HostHeader) echo $SiteName This fails: function CleanUrl($input) { $x = $input.Replace("http://", "") return $x } Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'. At M:\PowerShell\test.ps1:13 char:21 + $x = $input.Replace( <<<< "http://", "")
TITLE: How do I do a string replacement in a PowerShell function? QUESTION: How do I convert function input parameters to the right type? I want to return a string that has part of the URL passed into it removed. This works, but it uses a hard-coded string: function CleanUrl($input) { $x = "http://google.com".Replace("http://", "") return $x } $SiteName = CleanUrl($HostHeader) echo $SiteName This fails: function CleanUrl($input) { $x = $input.Replace("http://", "") return $x } Method invocation failed because [System.Array+SZArrayEnumerator] doesn't contain a method named 'Replace'. At M:\PowerShell\test.ps1:13 char:21 + $x = $input.Replace( <<<< "http://", "") ANSWER: The concept here is correct. The problem is with the variable name you have chosen. $input is a reserved variable used by PowerShell to represent an array of pipeline input. If you change your variable name, you should not have any problem. PowerShell does have a replace operator, so you could make your function into function CleanUrl($url) { return $url -replace 'http://' }
[ "string", "powershell", "function", "replace" ]
18
16
54,649
4
0
2008-08-18T18:40:34.403000
2008-08-18T19:50:39.350000
15,102
16,690
Office VSTO Word 2003 project keeps trying to autoconvert to 2007
I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto convert dialog box opens and tries to convert it to the Word 2007 format. How can I reopen this file and keep it in the Word 2003 format?
Got an answer over at MSDN Forums This is the default behavior when you have Office 2007 installed on your development computer. You can modify this behavior under Tools->Options. For more information, see the following threads: http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3762143&SiteID=1 http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3742203&SiteID=1&mode=1 I hope this helps, McLean Schofield
Office VSTO Word 2003 project keeps trying to autoconvert to 2007 I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto convert dialog box opens and tries to convert it to the Word 2007 format. How can I reopen this file and keep it in the Word 2003 format?
TITLE: Office VSTO Word 2003 project keeps trying to autoconvert to 2007 QUESTION: I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto convert dialog box opens and tries to convert it to the Word 2007 format. How can I reopen this file and keep it in the Word 2003 format? ANSWER: Got an answer over at MSDN Forums This is the default behavior when you have Office 2007 installed on your development computer. You can modify this behavior under Tools->Options. For more information, see the following threads: http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3762143&SiteID=1 http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3742203&SiteID=1&mode=1 I hope this helps, McLean Schofield
[ "c#", "visual-studio", "ms-word", "vsto" ]
6
3
565
1
0
2008-08-18T19:17:00.447000
2008-08-19T18:31:39.880000
15,109
15,286
Visual Studio 2005 Setup project install crashes over Terminal Server
I have a setup project created by Visual Studio 2005, and consists of both a C#.NET 2.0 project and C++ MFC project, and the C++ run time. It works properly when run from the main console, but when run over a Terminal Server session on a Windows XP target, the install fails in the following way - When the Setup.exe is invoked, it immediately crashes before the first welcome screen is displayed. When invoked over a physical console, the setup runs normally. I figured I could go back to a lab machine to debug, but it runs fine on a lab machine over Terminal Server. I see other descriptions of setup problems over Terminal Server sessions, but I don't see a definite solution. Both machines have a nearly identical configuration except that the one that is failing also has the GoToMyPC Host installed. Has anyone else seen these problems, and how can I troubleshoot this? Thanks,
I had LOTS of issues with developing installers (and software in general) for terminal server. I hate that damn thing. Anyway, VS Setup Projects are just .msi files, and run using the Windows Installer framework. This will drop a log file when it errors out; they're called MSIc183.LOG (swap the c183 for some random numbers and letters), and they go in your logged-in-user account's temp directory. The easiest way to find that is to type %TEMP% into the Windows Explorer address bar - once you're there have a look for these log files, they might give you a clue. Note - Under terminal server, sometimes the logs don't go directly into %TEMP%, but under numbered subdirectories. If you can't find any MSIXYZ.LOG files in there, look for directories called 1, 2, and so on, and look in those. If you find a log file, but can't get any clues from it, post it here. I've looked at more than I care to think about, so I may be able to help.
Visual Studio 2005 Setup project install crashes over Terminal Server I have a setup project created by Visual Studio 2005, and consists of both a C#.NET 2.0 project and C++ MFC project, and the C++ run time. It works properly when run from the main console, but when run over a Terminal Server session on a Windows XP target, the install fails in the following way - When the Setup.exe is invoked, it immediately crashes before the first welcome screen is displayed. When invoked over a physical console, the setup runs normally. I figured I could go back to a lab machine to debug, but it runs fine on a lab machine over Terminal Server. I see other descriptions of setup problems over Terminal Server sessions, but I don't see a definite solution. Both machines have a nearly identical configuration except that the one that is failing also has the GoToMyPC Host installed. Has anyone else seen these problems, and how can I troubleshoot this? Thanks,
TITLE: Visual Studio 2005 Setup project install crashes over Terminal Server QUESTION: I have a setup project created by Visual Studio 2005, and consists of both a C#.NET 2.0 project and C++ MFC project, and the C++ run time. It works properly when run from the main console, but when run over a Terminal Server session on a Windows XP target, the install fails in the following way - When the Setup.exe is invoked, it immediately crashes before the first welcome screen is displayed. When invoked over a physical console, the setup runs normally. I figured I could go back to a lab machine to debug, but it runs fine on a lab machine over Terminal Server. I see other descriptions of setup problems over Terminal Server sessions, but I don't see a definite solution. Both machines have a nearly identical configuration except that the one that is failing also has the GoToMyPC Host installed. Has anyone else seen these problems, and how can I troubleshoot this? Thanks, ANSWER: I had LOTS of issues with developing installers (and software in general) for terminal server. I hate that damn thing. Anyway, VS Setup Projects are just .msi files, and run using the Windows Installer framework. This will drop a log file when it errors out; they're called MSIc183.LOG (swap the c183 for some random numbers and letters), and they go in your logged-in-user account's temp directory. The easiest way to find that is to type %TEMP% into the Windows Explorer address bar - once you're there have a look for these log files, they might give you a clue. Note - Under terminal server, sometimes the logs don't go directly into %TEMP%, but under numbered subdirectories. If you can't find any MSIXYZ.LOG files in there, look for directories called 1, 2, and so on, and look in those. If you find a log file, but can't get any clues from it, post it here. I've looked at more than I care to think about, so I may be able to help.
[ "visual-studio", "installation", "projects" ]
4
2
1,379
2
0
2008-08-18T19:24:23.973000
2008-08-18T22:00:58.103000
15,124
15,134
.net Job Interview
I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at? Been reading things like: http://msdn.microsoft.com/en-us/library/bb332048.aspx http://msdn.microsoft.com/en-us/library/ms754130.aspx Edit As it turns out this interview was high level and we didn't really get into much which was more .NET-specific than generics.
Take this with a grain of salt, but in my experience, LINQ and WPF are still in the realm of "yeah we'd like to get into that someday". Most shops are still on VS2005 and .NET 2.0, so I'd want to make sure I was up to speed on core facilities: generics ADO.NET WinForms / WebForms depending And so forth.
.net Job Interview I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at? Been reading things like: http://msdn.microsoft.com/en-us/library/bb332048.aspx http://msdn.microsoft.com/en-us/library/ms754130.aspx Edit As it turns out this interview was high level and we didn't really get into much which was more .NET-specific than generics.
TITLE: .net Job Interview QUESTION: I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at? Been reading things like: http://msdn.microsoft.com/en-us/library/bb332048.aspx http://msdn.microsoft.com/en-us/library/ms754130.aspx Edit As it turns out this interview was high level and we didn't really get into much which was more .NET-specific than generics. ANSWER: Take this with a grain of salt, but in my experience, LINQ and WPF are still in the realm of "yeah we'd like to get into that someday". Most shops are still on VS2005 and .NET 2.0, so I'd want to make sure I was up to speed on core facilities: generics ADO.NET WinForms / WebForms depending And so forth.
[ ".net" ]
6
1
1,863
5
0
2008-08-18T19:41:39.740000
2008-08-18T19:50:18.430000
15,133
16,431
Tool in Visual Studio 2008 for helping with Localization
Does anyone have any recommendations of tools that can be of assistance with moving literal values into resource files for localization? I've used a ReSharper plugin called RGreatX but was wondering if there is anything else out there. It's one heck of a long manual process for moving the strings across and I think there must be a better way! RGreatX is OK but could be a bit slicker I feel.
Here's one: http://www.codeplex.com/ResourceRefactoring It's actually a Microsoft "open source" Visual Studio (2005 and up) tool that integrates with the IDE. You can easily replace every occurrence of a string with a resource reference with a few clicks.
Tool in Visual Studio 2008 for helping with Localization Does anyone have any recommendations of tools that can be of assistance with moving literal values into resource files for localization? I've used a ReSharper plugin called RGreatX but was wondering if there is anything else out there. It's one heck of a long manual process for moving the strings across and I think there must be a better way! RGreatX is OK but could be a bit slicker I feel.
TITLE: Tool in Visual Studio 2008 for helping with Localization QUESTION: Does anyone have any recommendations of tools that can be of assistance with moving literal values into resource files for localization? I've used a ReSharper plugin called RGreatX but was wondering if there is anything else out there. It's one heck of a long manual process for moving the strings across and I think there must be a better way! RGreatX is OK but could be a bit slicker I feel. ANSWER: Here's one: http://www.codeplex.com/ResourceRefactoring It's actually a Microsoft "open source" Visual Studio (2005 and up) tool that integrates with the IDE. You can easily replace every occurrence of a string with a resource reference with a few clicks.
[ "visual-studio", "winforms", "internationalization", "vsx", "extensibility" ]
15
15
4,342
4
0
2008-08-18T19:49:40.130000
2008-08-19T15:45:17.330000
15,139
15,234
Building Standalone Applications in JavaScript
With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators. Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program? Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file.
I've written several applications in JS, including a spreadsheet. Upside: a great language; a short code-run-review cycle; DOM manipulation is great for UI design; clients on every computer (and phone). Downside: differences between browsers (especially IE); code base scalability (with no intrinsic support for namespaces and classes); no good debuggers (again, especially for IE); performance (even though great progress has been made with Firefox and Safari); you need to write some server code as well. Bottom line: go for it. I did.
Building Standalone Applications in JavaScript With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators. Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program? Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file.
TITLE: Building Standalone Applications in JavaScript QUESTION: With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators. Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program? Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file. ANSWER: I've written several applications in JS, including a spreadsheet. Upside: a great language; a short code-run-review cycle; DOM manipulation is great for UI design; clients on every computer (and phone). Downside: differences between browsers (especially IE); code base scalability (with no intrinsic support for namespaces and classes); no good debuggers (again, especially for IE); performance (even though great progress has been made with Firefox and Safari); you need to write some server code as well. Bottom line: go for it. I did.
[ "javascript", "deployment", "web-applications", "browser" ]
23
16
33,758
14
0
2008-08-18T19:52:16.273000
2008-08-18T21:19:27.827000
15,142
15,277
What are the pros and cons to keeping SQL in Stored Procs versus Code
What are the advantages/disadvantages of keeping SQL in your C# source code or in Stored Procs? I've been discussing this with a friend on an open source project that we're working on (C# ASP.NET Forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have: Advantages for in Code: Easier to maintain - don't need to run a SQL script to update queries Easier to port to another DB - no procs to port Advantages for Stored Procs: Performance Security
I am not a fan of stored procedures. Stored Procedures are MORE maintainable because: * You don't have to recompile your C# app whenever you want to change some SQL. You'll end up recompiling it anyway when datatypes change, or you want to return an extra column, or whatever. The number of times you can 'transparently' change the SQL out from underneath your app is pretty small on the whole. You end up reusing SQL code. Programming languages, C# included, have this amazing thing, called a function. It means you can invoke the same block of code from multiple places! Amazing! You can then put the re-usable SQL code inside one of these, or if you want to get really high tech, you can use a library which does it for you. I believe they're called Object Relational Mappers, and are pretty common these days. Code repetition is the worst thing you can do when you're trying to build a maintainable application! Agreed, which is why storedprocs are a bad thing. It's much easier to refactor and decompose (break into smaller parts) code into functions than SQL into... blocks of SQL? You have 4 webservers and a bunch of windows apps which use the same SQL code. Now you realize there is a small problem with the SQL code, so do you rather... change the proc in 1 place, or push the code to all the webservers and reinstall all the desktop apps (ClickOnce might help) on all the windows boxes? Why are your windows apps connecting directly to a central database? That seems like a HUGE security hole right there, and a bottleneck as it rules out server-side caching. Shouldn't they be connecting via a web service or similar to your web servers? So, push 1 new sproc, or 4 new webservers? In this case it is easier to push one new sproc, but in my experience, 95% of 'pushed changes' affect the code and not the database. If you're pushing 20 things to the webservers that month, and 1 to the database, you hardly lose much if you instead push 21 things to the webservers, and zero to the database.
More easily code reviewed. Can you explain how? I don't get this. Particularly seeing as the sprocs probably aren't in source control, and therefore can't be accessed via web-based SCM browsers and so on. More cons: Storedprocs live in the database, which appears to the outside world as a black box. Simple things like wanting to put them in source control become a nightmare. There's also the issue of sheer effort. It might make sense to break everything down into a million tiers if you're trying to justify to your CEO why it just cost them 7 million dollars to build some forums, but otherwise creating a storedproc for every little thing is just extra donkeywork for no benefit.
What are the pros and cons to keeping SQL in Stored Procs versus Code What are the advantages/disadvantages of keeping SQL in your C# source code or in Stored Procs? I've been discussing this with a friend on an open source project that we're working on (C# ASP.NET Forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have: Advantages for in Code: Easier to maintain - don't need to run a SQL script to update queries Easier to port to another DB - no procs to port Advantages for Stored Procs: Performance Security
TITLE: What are the pros and cons to keeping SQL in Stored Procs versus Code QUESTION: What are the advantages/disadvantages of keeping SQL in your C# source code or in Stored Procs? I've been discussing this with a friend on an open source project that we're working on (C# ASP.NET Forum). At the moment, most of the database access is done by building the SQL inline in C# and calling to the SQL Server DB. So I'm trying to establish which, for this particular project, would be best. So far I have: Advantages for in Code: Easier to maintain - don't need to run a SQL script to update queries Easier to port to another DB - no procs to port Advantages for Stored Procs: Performance Security ANSWER: I am not a fan of stored procedures. Stored Procedures are MORE maintainable because: * You don't have to recompile your C# app whenever you want to change some SQL. You'll end up recompiling it anyway when datatypes change, or you want to return an extra column, or whatever. The number of times you can 'transparently' change the SQL out from underneath your app is pretty small on the whole. You end up reusing SQL code. Programming languages, C# included, have this amazing thing, called a function. It means you can invoke the same block of code from multiple places! Amazing! You can then put the re-usable SQL code inside one of these, or if you want to get really high tech, you can use a library which does it for you. I believe they're called Object Relational Mappers, and are pretty common these days. Code repetition is the worst thing you can do when you're trying to build a maintainable application! Agreed, which is why storedprocs are a bad thing. It's much easier to refactor and decompose (break into smaller parts) code into functions than SQL into... blocks of SQL? You have 4 webservers and a bunch of windows apps which use the same SQL code. Now you realize there is a small problem with the SQL code, so do you rather...
change the proc in 1 place or push the code to all the webservers, reinstall all the desktop apps(clickonce might help) on all the windows boxes Why are your windows apps connecting directly to a central database? That seems like a HUGE security hole right there, and bottleneck as it rules out server-side caching. Shouldn't they be connecting via a web service or similar to your web servers? So, push 1 new sproc, or 4 new webservers? In this case it is easier to push one new sproc, but in my experience, 95% of 'pushed changes' affect the code and not the database. If you're pushing 20 things to the webservers that month, and 1 to the database, you hardly lose much if you instead push 21 things to the webservers, and zero to the database. More easily code reviewed. Can you explain how? I don't get this. Particularly seeing as the sprocs probably aren't in source control, and therefore can't be accessed via web-based SCM browsers and so on. More cons: Storedprocs live in the database, which appears to the outside world as a black box. Simple things like wanting to put them in source control becomes a nightmare. There's also the issue of sheer effort. It might make sense to break everything down into a million tiers if you're trying to justify to your CEO why it just cost them 7 million dollars to build some forums, but otherwise creating a storedproc for every little thing is just extra donkeywork for no benefit.
[ "c#", "sql", "sql-server", "stored-procedures" ]
274
179
79,299
47
0
2008-08-18T19:54:39.467000
2008-08-18T21:50:33.757000
15,163
15,591
Prevent a TreeView from firing events in VB6?
In some VB6 code, I have a handler for a TreeView's Collapse event: Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node) This is called whenever a node in the tree is collapsed, whether by the user or programmatically. As it turns out, through some roundabout execution, it may happen that this handler will wind up telling a node to collapse, leading to infinite recursion. I can think of multiple ways to skin this cat, but what seems simplest to me is to tell the TreeView not to raise events for some period of time. I can't find a simple call to let me do this, though. Has anyone successfully done this, or do I need to keep track of state in some other manner so I can respond appropriately when recursive events come along?
I think that like many events in VB, it can't be switched off. Just set a boolean flag as you've suggested.
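The boolean-flag approach the answer suggests is language-neutral, so here is a minimal sketch of it in C rather than VB6 (all names here, and the simulated event dispatch, are invented for illustration and are not part of any TreeView API): the handler raises a guard flag before it triggers further collapses, and any re-entrant invocation sees the flag and returns immediately.

```c
#include <stdbool.h>

/* Illustrative re-entrancy guard, mirroring the boolean-flag idea.
 * collapse_node() stands in for the control: collapsing a node
 * re-fires the Collapse handler. */
static bool suppress_events = false;
static int handler_runs = 0;

static void on_collapse(int node);

static void collapse_node(int node)
{
    on_collapse(node); /* the control fires the event again */
}

static void on_collapse(int node)
{
    if (suppress_events)
        return; /* re-entrant call: ignore it */

    handler_runs++;

    suppress_events = true;   /* raise the guard... */
    collapse_node(node + 1);  /* ...so this nested collapse is a no-op */
    suppress_events = false;  /* lower it for the next user action */
}
```

Without the guard, the nested `collapse_node()` call would recurse until the stack overflowed, which is exactly the infinite recursion the question describes; with it, the handler body runs once per user-initiated collapse.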
Prevent a TreeView from firing events in VB6? In some VB6 code, I have a handler for a TreeView's Collapse event: Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node) This is called whenever a node in the tree is collapsed, whether by the user or programmatically. As it turns out, through some roundabout execution, it may happen that this handler will wind up telling a node to collapse, leading to infinite recursion. I can think of multiple ways to skin this cat, but what seems simplest to me is to tell the TreeView not to raise events for some period of time. I can't find a simple call to let me do this, though. Has anyone successfully done this, or do I need to keep track of state in some other manner so I can respond appropriately when recursive events come along?
TITLE: Prevent a TreeView from firing events in VB6? QUESTION: In some VB6 code, I have a handler for a TreeView's Collapse event: Private Sub MyTree_Collapse(ByVal Node as MSComCtlLib.Node) This is called whenever a node in the tree is collapsed, whether by the user or programmatically. As it turns out, through some roundabout execution, it may happen that this handler will wind up telling a node to collapse, leading to infinite recursion. I can think of multiple ways to skin this cat, but what seems simplest to me is to tell the TreeView not to raise events for some period of time. I can't find a simple call to let me do this, though. Has anyone successfully done this, or do I need to keep track of state in some other manner so I can respond appropriately when recursive events come along? ANSWER: I think that like many events in VB, it can't be switched off. Just set a boolean flag as you've suggested.
[ "events", "vb6", "treeview" ]
3
1
1,347
4
0
2008-08-18T20:11:22.787000
2008-08-19T02:43:58.220000
15,176
15,198
How to gauge the quality of a software product
I have a product, X, which we deliver to a client, C every month, including bugfixes, enhancements, new development etc.) Each month, I am asked to err "guarantee" the quality of the product. For this we use a number of statistics garnered from the tests that we do, such as: reopen rate (number of bugs reopened/number of corrected bugs tested) new bug rate (number of new, including regressions, bugs found during testing/number of corrected bugs tested) for each new enhancement, the new bug rate (the number of bugs found for this enhancement/number of mandays) and various other figures. It is impossible, for reasons we shan't go into, to test everything every time. So, my question is: How do I estimate the number and type of bugs that remain in my software? What testing strategies do I have to follow to make sure that the product is good? I know this is a bit of an open question, but hey, I also know that there are no simple solutions. Thanks.
I don't think you can ever really estimate the number of bugs in your app. Unless you use a language and process that allows formal proofs, you can never really be sure. Your time is probably better spent setting up processes to minimize bugs than trying to estimate how many you have. One of the most important things you can do is have a good QA team and good work item tracking. You may not be able to do full regression testing every time, but if you have a list of the changes you've made to the app since the last release, then your QA people (or person) can focus their testing on the parts of the app that are expected to be affected. Another thing that would be helpful is unit tests. The more of your codebase you have covered, the more confident you can be that changes in one area didn't inadvertently affect another area.
How to gauge the quality of a software product I have a product, X, which we deliver to a client, C every month, including bugfixes, enhancements, new development etc.) Each month, I am asked to err "guarantee" the quality of the product. For this we use a number of statistics garnered from the tests that we do, such as: reopen rate (number of bugs reopened/number of corrected bugs tested) new bug rate (number of new, including regressions, bugs found during testing/number of corrected bugs tested) for each new enhancement, the new bug rate (the number of bugs found for this enhancement/number of mandays) and various other figures. It is impossible, for reasons we shan't go into, to test everything every time. So, my question is: How do I estimate the number and type of bugs that remain in my software? What testing strategies do I have to follow to make sure that the product is good? I know this is a bit of an open question, but hey, I also know that there are no simple solutions. Thanks.
TITLE: How to gauge the quality of a software product QUESTION: I have a product, X, which we deliver to a client, C every month, including bugfixes, enhancements, new development etc.) Each month, I am asked to err "guarantee" the quality of the product. For this we use a number of statistics garnered from the tests that we do, such as: reopen rate (number of bugs reopened/number of corrected bugs tested) new bug rate (number of new, including regressions, bugs found during testing/number of corrected bugs tested) for each new enhancement, the new bug rate (the number of bugs found for this enhancement/number of mandays) and various other figures. It is impossible, for reasons we shan't go into, to test everything every time. So, my question is: How do I estimate the number and type of bugs that remain in my software? What testing strategies do I have to follow to make sure that the product is good? I know this is a bit of an open question, but hey, I also know that there are no simple solutions. Thanks. ANSWER: I don't think you can ever really estimate the number of bugs in your app. Unless you use a language and process that allows formal proofs, you can never really be sure. Your time is probably better spent setting up processes to minimize bugs than trying to estimate how many you have. One of the most important things you can do is have a good QA team and good work item tracking. You may not be able to do full regression testing every time, but if you have a list of the changes you've made to the app since the last release, then your QA people (or person) can focus their testing on the parts of the app that are expected to be affected. Another thing that would be helpful is unit tests. The more of your codebase you have covered, the more confident you can be that changes in one area didn't inadvertently affect another area.
I've found this quite useful, as sometimes I'll change something and forget that it would affect another part of the app, and the unit tests showed the problem right away. Passed unit tests won't guarantee that you haven't broken anything, but they can help increase confidence that the changes you make are working. Also, this is a bit redundant and obvious, but make sure you have good bug tracking software. :)
[ "testing" ]
10
2
1,490
7
0
2008-08-18T20:18:22.257000
2008-08-18T20:43:17.573000
15,203
15,271
Best way to deploy subversion (SVN) in a multisite windows environment
We are planning on moving from MS Source Safe (ouch) to SVN. We are working mostly in a Microsoft environment (Windows, Visual Studio, .NET) and we have developers in multiple sites. I heard about VisualSVN and its integration with Visual Studio. On the other hand, I can get someone to host SVN for me and use TortoiseSVN. Any recommendations? Any pitfalls I should avoid?
Hosting Subversion is fantastically simple. At the risk of being labeled a brown nose (is there a badge for that?), Jeff Atwood put up an article on installing Subversion: http://blog.codinghorror.com/setting-up-subversion-on-windows/ So really you could save yourself some money by running your own Subversion server, and you'll never have to worry about what happens to your code if your hosting company goes belly up. I would start with Tortoise because it is free and is really easy to use. If you find you really need integration with VS then by all means try out VisualSVN. In my experience source control <-> editor integration is most useful for automatically opening files when you edit them. Subversion doesn't require you to open files, so that big advantage is gone.
Best way to deploy subversion (SVN) in a multisite windows environment We are planning on moving from MS Source Safe (ouch) to SVN. We are working mostly in a Microsoft environment (Windows, Visual Studio, .NET) and we have developers in multiple sites. I heard about VisualSVN and its integration with Visual Studio. On the other hand, I can get someone to host SVN for me and use TortoiseSVN. Any recommendations? Any pitfalls I should avoid?
TITLE: Best way to deploy subversion (SVN) in a multisite windows environment QUESTION: We are planning on moving from MS Source Safe (ouch) to SVN. We are working mostly in a Microsoft environment (Windows, Visual Studio, .NET) and we have developers in multiple sites. I heard about VisualSVN and its integration with Visual Studio. On the other hand, I can get someone to host SVN for me and use TortoiseSVN. Any recommendations? Any pitfalls I should avoid? ANSWER: Hosting Subversion is fantastically simple. At the risk of being labeled a brown nose (is there a badge for that?), Jeff Atwood put up an article on installing Subversion: http://blog.codinghorror.com/setting-up-subversion-on-windows/ So really you could save yourself some money by running your own Subversion server, and you'll never have to worry about what happens to your code if your hosting company goes belly up. I would start with Tortoise because it is free and is really easy to use. If you find you really need integration with VS then by all means try out VisualSVN. In my experience source control <-> editor integration is most useful for automatically opening files when you edit them. Subversion doesn't require you to open files, so that big advantage is gone.
[ "windows", "svn", "tortoisesvn", "visualsvn-server" ]
5
3
7,154
5
0
2008-08-18T20:46:11.797000
2008-08-18T21:42:25.457000
15,204
15,210
What is the best way to iterate through a strongly-typed generic List<T>?
What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
For C#:

foreach (ObjectType objectItem in objectTypeList)
{
    // ...do some stuff
}

Answer for VB.NET from Purple Ant:

For Each objectItem As ObjectType In objectTypeList
    'Do some stuff
Next
What is the best way to iterate through a strongly-typed generic List<T>? What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
TITLE: What is the best way to iterate through a strongly-typed generic List<T>? QUESTION: What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET? ANSWER: For C#:

foreach (ObjectType objectItem in objectTypeList)
{
    // ...do some stuff
}

Answer for VB.NET from Purple Ant:

For Each objectItem As ObjectType In objectTypeList
    'Do some stuff
Next
[ "c#", ".net", "vb.net", "generics", "collections" ]
17
32
46,903
7
0
2008-08-18T20:49:22.443000
2008-08-18T20:52:25.980000
15,211
85,541
Subversion Management Tools
we have a lot of users running in different shared and solo-owned repositories in Subversion. As part of our work, we do project-shared code and individual work, and we need to control access, ideally on a group basis. Currently, we use SVNManager to allow users to manage access and create repositories. However, in order to get that working we had to do quite a bit of hacking. Does anyone know of a free, open-source, linux-compatible SVN management system? Thanks for your help.
I would recommend SVN Access: http://www.jaj.com/projects/svnaccess/ or http://freshmeat.net/projects/svnaccess/ I have used it as is, and have modified it for an enterprise-wide solution at my day job.
Subversion Management Tools we have a lot of users running in different shared and solo-owned repositories in Subversion. As part of our work, we do project-shared code and individual work, and we need to control access, ideally on a group basis. Currently, we use SVNManager to allow users to manage access and create repositories. However, in order to get that working we had to do quite a bit of hacking. Does anyone know of a free, open-source, linux-compatible SVN management system? Thanks for your help.
TITLE: Subversion Management Tools QUESTION: we have a lot of users running in different shared and solo-owned repositories in Subversion. As part of our work, we do project-shared code and individual work, and we need to control access, ideally on a group basis. Currently, we use SVNManager to allow users to manage access and create repositories. However, in order to get that working we had to do quite a bit of hacking. Does anyone know of a free, open-source, linux-compatible SVN management system? Thanks for your help. ANSWER: I would recommend SVN Access: http://www.jaj.com/projects/svnaccess/ or http://freshmeat.net/projects/svnaccess/ I have used it as is, and have modified it for an enterprise-wide solution at my day job.
[ "linux", "svn" ]
13
4
14,215
4
0
2008-08-18T20:53:06.430000
2008-09-17T17:19:23.523000
15,219
18,507
UltraWebGrid: How to use a drop-down list in a column
I'm using the Infragistics grid and I'm having a difficult time using a drop-down list as the value selector for one of my columns. I tried reading the documentation but Infragistics' documentation is not so good. I've also taken a look at this discussion with no luck. What I'm doing so far:

col.Type = ColumnType.DropDownList;
col.DataType = "System.String";
col.ValueList = myValueList;

where myValueList is:

ValueList myValueList = new ValueList();
myValueList.Prompt = "My text prompt";
myValueList.DisplayStyle = ValueListDisplayStyle.DisplayText;
foreach (MyObjectType item in MyObjectTypeCollection)
{
    myValueList.ValueItems.Add(item.ID, item.Text); // Note that the ID is a string (not my design)
}

When I look at the page, I expect to see a drop-down list in the cells for this column, but my columns are empty.
I've found what was wrong. The column must allow updates. uwgMyGrid.Columns.FromKey("colTest").AllowUpdate = AllowUpdate.Yes;
UltraWebGrid: How to use a drop-down list in a column I'm using the Infragistics grid and I'm having a difficult time using a drop-down list as the value selector for one of my columns. I tried reading the documentation but Infragistics' documentation is not so good. I've also taken a look at this discussion with no luck. What I'm doing so far:

col.Type = ColumnType.DropDownList;
col.DataType = "System.String";
col.ValueList = myValueList;

where myValueList is:

ValueList myValueList = new ValueList();
myValueList.Prompt = "My text prompt";
myValueList.DisplayStyle = ValueListDisplayStyle.DisplayText;
foreach (MyObjectType item in MyObjectTypeCollection)
{
    myValueList.ValueItems.Add(item.ID, item.Text); // Note that the ID is a string (not my design)
}

When I look at the page, I expect to see a drop-down list in the cells for this column, but my columns are empty.
TITLE: UltraWebGrid: How to use a drop-down list in a column QUESTION: I'm using the Infragistics grid and I'm having a difficult time using a drop-down list as the value selector for one of my columns. I tried reading the documentation but Infragistics' documentation is not so good. I've also taken a look at this discussion with no luck. What I'm doing so far:

col.Type = ColumnType.DropDownList;
col.DataType = "System.String";
col.ValueList = myValueList;

where myValueList is:

ValueList myValueList = new ValueList();
myValueList.Prompt = "My text prompt";
myValueList.DisplayStyle = ValueListDisplayStyle.DisplayText;
foreach (MyObjectType item in MyObjectTypeCollection)
{
    myValueList.ValueItems.Add(item.ID, item.Text); // Note that the ID is a string (not my design)
}

When I look at the page, I expect to see a drop-down list in the cells for this column, but my columns are empty. ANSWER: I've found what was wrong. The column must allow updates. uwgMyGrid.Columns.FromKey("colTest").AllowUpdate = AllowUpdate.Yes;
[ "c#", "asp.net", "grid", "infragistics", "ultrawebgrid" ]
4
1
16,446
3
0
2008-08-18T21:03:18.483000
2008-08-20T18:24:25.567000
15,228
15,239
Setting an ASP.NET Master Page at runtime
I'm working on a site which needs to be able to support two or more looks, changeable at runtime. I'd hoped to be able to handle the change with a CSS switch, but it looks like I'll need to use a different masterpage for each design. So, what's the best way to set the masterpage at runtime? Page.MasterPageFile can only be set in the Page.OnPreInit event. It looks like the solutions are to make all my pages inherit from a common base which handles the PreInit event, or to use an HttpModule which does that. Any advice?
I've done this once before, I did exactly what you described (Made all pages inherit from a custom page with an OnPreInit event). Also I had a custom Application_PreRequestHandlerExecute in my Global.asax.cs for setting Page.StyleSheetTheme for doing image/css changes that didn't require a different Master Page.
Setting an ASP.NET Master Page at runtime I'm working on a site which needs to be able to support two or more looks, changeable at runtime. I'd hoped to be able to handle the change with a CSS switch, but it looks like I'll need to use a different masterpage for each design. So, what's the best way to set the masterpage at runtime? Page.MasterPageFile can only be set in the Page.OnPreInit event. It looks like the solutions are to make all my pages inherit from a common base which handles the PreInit event, or to use an HttpModule which does that. Any advice?
TITLE: Setting an ASP.NET Master Page at runtime QUESTION: I'm working on a site which needs to be able to support two or more looks, changeable at runtime. I'd hoped to be able to handle the change with a CSS switch, but it looks like I'll need to use a different masterpage for each design. So, what's the best way to set the masterpage at runtime? Page.MasterPageFile can only be set in the Page.OnPreInit event. It looks like the solutions are to make all my pages inherit from a common base which handles the PreInit event, or to use an HttpModule which does that. Any advice? ANSWER: I've done this once before, I did exactly what you described (made all pages inherit from a custom page with an OnPreInit event). Also I had a custom Application_PreRequestHandlerExecute in my Global.asax.cs for setting Page.StyleSheetTheme for doing image/css changes that didn't require a different Master Page.
[ ".net", "asp.net", "master-pages" ]
9
6
8,783
6
0
2008-08-18T21:17:15.523000
2008-08-18T21:22:01.363000
15,240
15,269
How do you create a debug only function that takes a variable argument list? Like printf()
I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds. For example: Debug_Print("Warning: value %d > 3!\n", value); I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax:

#include <windows.h>
#include <stdio.h>
#include <stdarg.h>

void XTrace0(LPCTSTR lpszText)
{
    ::OutputDebugString(lpszText);
}

void XTrace(LPCTSTR lpszFormat, ...)
{
    va_list args;
    va_start(args, lpszFormat);
    int nBuf;
    TCHAR szBuffer[512]; // get rid of this hard-coded buffer
    nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args);
    ::OutputDebugString(szBuffer);
    va_end(args);
}

Then a typical #ifdef switch:

#ifdef _DEBUG
#define XTRACE XTrace
#else
#define XTRACE
#endif

Well, that can be cleaned up quite a bit, but it's the basic idea.
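Since the asker's MSVC lacked variadic macros, here is a hedged, portable C sketch of the same idea that needs none: in release builds (NDEBUG defined) the macro turns the call into the dead branch of a constant ?: expression, so the logging function is never invoked. All names here (debug_print, DBG_PRINT, last_msg) are invented for illustration, and the buffer capture exists only to make the behavior observable; a real logger would write to stderr or OutputDebugString alone.

```c
#include <stdarg.h>
#include <stdio.h>

/* Keeps a copy of the last formatted message so the effect is easy
 * to check; not something a production logger would need. */
static char last_msg[256];

static void debug_print(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsnprintf(last_msg, sizeof last_msg, fmt, args);
    va_end(args);
    fputs(last_msg, stderr);
}

/* No variadic macros required: in release builds the argument list
 * becomes the unevaluated arm of a constant conditional expression,
 * which an optimizing compiler removes entirely. */
#ifdef NDEBUG
#define DBG_PRINT 1 ? (void)0 : debug_print
#else
#define DBG_PRINT debug_print
#endif
```

A call such as `DBG_PRINT("Warning: value %d > 3!\n", value);` keeps the printf syntax in both configurations, matching the example in the question.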
How do you create a debug only function that takes a variable argument list? Like printf() I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds. For example: Debug_Print("Warning: value %d > 3!\n", value); I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
TITLE: How do you create a debug only function that takes a variable argument list? Like printf() QUESTION: I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds. For example: Debug_Print("Warning: value %d > 3!\n", value); I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not. ANSWER: I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax: #include <windows.h> #include <stdio.h> #include <stdarg.h> void XTrace0(LPCTSTR lpszText) {::OutputDebugString(lpszText); } void XTrace(LPCTSTR lpszFormat,...) { va_list args; va_start(args, lpszFormat); int nBuf; TCHAR szBuffer[512]; // get rid of this hard-coded buffer nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args);::OutputDebugString(szBuffer); va_end(args); } Then a typical #ifdef switch: #ifdef _DEBUG #define XTRACE XTrace #else #define XTRACE #endif Well that can be cleaned up quite a bit, but it's the basic idea.
[ "c++", "c", "c-preprocessor" ]
43
25
37,561
14
0
2008-08-18T21:22:20.753000
2008-08-18T21:42:09.857000
15,241
15,323
Does anyone have any real-world experience of CSLA?
The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book but as: programmers don't read books anymore I wanted to gauge the SOFlow community's opinion of it. So here are my questions: How many people are using CSLA? What are the pros and cons? Does CSLA really not fit in with TDD? What are my alternatives? If you have stopped using it or decided against it, why?
Before I specifically answer your question, I'd like to put a few thoughts down. Is CSLA right for your project? It depends. I would personally consider CSLA for desktop based applications that do not value unit testing as a high priority. CSLA is great if you want to easily scale to an n-tier application. CSLA tends to get some flak because it does not allow pure unit testing. This is true, however, like anything in technology, I believe that there is No One True Way. Unit testing may not be something you are undertaking for a specific project. What works for one team and one project may not work for another team or other project. There are also many misconceptions in regards to CSLA. It is not an ORM. It is not a competitor to NHibernate (in fact using CSLA Business Objects & NHibernate as data access fit really well together). It formalises the concept of a Mobile Object. 1. How many people are using CSLA? Based on the CSLA Forums, I would say there are quite a number of CSLA based projects out there. Honestly though, I have no idea how many people are actually using it. I have used it in the past on two projects. 2. What are the pros and cons? While it is difficult to summarise in a short list, here are some of the pros and cons that come to mind. Pros: It's easy to get new developers up to speed. The CSLA book and sample app are great resources to get up to speed. The Validation framework is truly world class - and has been "borrowed" for many many other non-CSLA projects and technologies. n-Level Undo within your business objects Config line change for n-Tier scalability (Note: not even a recompile is necessary) Key technologies are abstracted from the "real" code. When WCF was introduced, it had minimal impact on CSLA code. It is possible to share your business objects between windows and web projects. CSLA promotes the normalization of behaviour rather than the normalization of data (leaving the database for data normalization).
Cons: Difficulty in unit testing Lack of Separation of Concerns (generally your business objects have data access code inside them). As CSLA promotes the normalization of behavior rather than the normalization of data, this can result in business objects that are named similarly, but have different purposes. This can cause some confusion and a feeling like you are not reusing objects appropriately. That said, once the psychological leap is taken, it more than makes sense - it seems inappropriate to structure objects the "old" way. It's not "in fashion" to build applications this way. You may struggle to get developers who are passionate about the technology. 3. After reading this, does CSLA really not fit in with TDD? I haven't found an effective way to do TDD with CSLA. That said, I am sure there are many smarter people out there than me that may have tried this with greater success. 4. What are my alternatives? Domain-Driven Design is getting a big push at the moment (and rightfully so - it's fantastic for some applications). There are also a number of interesting patterns developing from the introduction of LINQ (and LINQ to SQL, Entity Framework, etc). Fowler's book PoEAA details many patterns that may be suitable for your application. Note that some patterns are competing (i.e. Active Record and Repository), and thus are meant to be used for specific scenarios. While CSLA doesn't exactly match any of the patterns described in that book, it most closely resembles Active Record (although I feel it is short-sighted to claim an exact match for this pattern). 5. If you have stopped using it or decided against it, why? I didn't fully recommend CSLA for my last project, because I believe the scope of the application is too large for the benefits CSLA provides. I would not use CSLA on a web project. I feel there are other technologies better suited to building applications in that environment.
In summary, while CSLA is anything but a silver bullet, it is appropriate for some scenarios. Hope this helps!
Does anyone have any real-world experience of CSLA? The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book but as: programmers don't read books anymore I wanted to gauge the SOFlow community's opinion of it. So here are my questions: How many people are using CSLA? What are the pros and cons? Does CSLA really not fit in with TDD? What are my alternatives? If you have stopped using it or decided against it, why?
TITLE: Does anyone have any real-world experience of CSLA? QUESTION: The main web application of my company is crying out for a nifty set of libraries to make it in some way maintainable and scalable, and one of my colleagues has suggested CSLA. So I've bought the book but as: programmers don't read books anymore I wanted to gauge the SOFlow community's opinion of it. So here are my questions: How many people are using CSLA? What are the pros and cons? Does CSLA really not fit in with TDD? What are my alternatives? If you have stopped using it or decided against it, why? ANSWER: Before I specifically answer your question, I'd like to put a few thoughts down. Is CSLA right for your project? It depends. I would personally consider CSLA for desktop based applications that do not value unit testing as a high priority. CSLA is great if you want to easily scale to an n-tier application. CSLA tends to get some flak because it does not allow pure unit testing. This is true, however, like anything in technology, I believe that there is No One True Way. Unit testing may not be something you are undertaking for a specific project. What works for one team and one project may not work for another team or other project. There are also many misconceptions in regards to CSLA. It is not an ORM. It is not a competitor to NHibernate (in fact using CSLA Business Objects & NHibernate as data access fit really well together). It formalises the concept of a Mobile Object. 1. How many people are using CSLA? Based on the CSLA Forums, I would say there are quite a number of CSLA based projects out there. Honestly though, I have no idea how many people are actually using it. I have used it in the past on two projects. 2. What are the pros and cons? While it is difficult to summarise in a short list, here are some of the pros and cons that come to mind. Pros: It's easy to get new developers up to speed. The CSLA book and sample app are great resources to get up to speed.
The Validation framework is truly world class - and has been "borrowed" for many many other non-CSLA projects and technologies. n-Level Undo within your business objects Config line change for n-Tier scalability (Note: not even a recompile is necessary) Key technologies are abstracted from the "real" code. When WCF was introduced, it had minimal impact on CSLA code. It is possible to share your business objects between windows and web projects. CSLA promotes the normalization of behaviour rather than the normalization of data (leaving the database for data normalization). Cons: Difficulty in unit testing Lack of Separation of Concerns (generally your business objects have data access code inside them). As CSLA promotes the normalization of behavior rather than the normalization of data, this can result in business objects that are named similarly, but have different purposes. This can cause some confusion and a feeling like you are not reusing objects appropriately. That said, once the psychological leap is taken, it more than makes sense - it seems inappropriate to structure objects the "old" way. It's not "in fashion" to build applications this way. You may struggle to get developers who are passionate about the technology. 3. After reading this, does CSLA really not fit in with TDD? I haven't found an effective way to do TDD with CSLA. That said, I am sure there are many smarter people out there than me that may have tried this with greater success. 4. What are my alternatives? Domain-Driven Design is getting a big push at the moment (and rightfully so - it's fantastic for some applications). There are also a number of interesting patterns developing from the introduction of LINQ (and LINQ to SQL, Entity Framework, etc). Fowler's book PoEAA details many patterns that may be suitable for your application. Note that some patterns are competing (i.e. Active Record and Repository), and thus are meant to be used for specific scenarios.
While CSLA doesn't exactly match any of the patterns described in that book, it most closely resembles Active Record (although I feel it is short-sighted to claim an exact match for this pattern). 5. If you have stopped using it or decided against it, why? I didn't fully recommend CSLA for my last project, because I believe the scope of the application is too large for the benefits CSLA provides. I would not use CSLA on a web project. I feel there are other technologies better suited to building applications in that environment. In summary, while CSLA is anything but a silver bullet, it is appropriate for some scenarios. Hope this helps!
[ ".net", "frameworks", "csla" ]
51
72
13,121
23
0
2008-08-18T21:23:50.877000
2008-08-18T22:32:10.427000
15,247
17,132
Simplest way to make a Google Map mashup?
Given a list of locations such as El Cerrito, CA Corvallis, OR Morganton, NC New York, NY San Diego, CA What's the easiest way to generate a Google Map with pushpins for each location?
I'm assuming you have the basics for Maps in your code already with your API Key. Everything in Google Maps is based on latitude (lat) and longitude (lng). So to create a simple marker you will just create a GMarker with the lat and lng. var where = new GLatLng(37.925243,-122.307358); //Lat and Lng for El Cerrito, CA var marker = new GMarker(where); // Create marker (Pinhead thingy) map.setCenter(where); // Center map on marker map.addOverlay(marker); // Add marker to map However if you don't want to look up the Lat and Lng for each city you can use Google's Geo Coder. Here's an example: var address = "El Cerrito, CA"; var geocoder = new GClientGeocoder; geocoder.getLatLng(address, function(point) { if (point) { map.clearOverlays(); // Clear all markers map.addOverlay(new GMarker(point)); // Add marker to map map.setCenter(point, 10); // Center and zoom map on marker } }); So I would just create an array of GLatLng's of every city from the GeoCoder and then draw them on the map.
Simplest way to make a Google Map mashup? Given a list of locations such as El Cerrito, CA Corvallis, OR Morganton, NC New York, NY San Diego, CA What's the easiest way to generate a Google Map with pushpins for each location?
TITLE: Simplest way to make a Google Map mashup? QUESTION: Given a list of locations such as El Cerrito, CA Corvallis, OR Morganton, NC New York, NY San Diego, CA What's the easiest way to generate a Google Map with pushpins for each location? ANSWER: I'm assuming you have the basics for Maps in your code already with your API Key. Everything in Google Maps is based on latitude (lat) and longitude (lng). So to create a simple marker you will just create a GMarker with the lat and lng. var where = new GLatLng(37.925243,-122.307358); //Lat and Lng for El Cerrito, CA var marker = new GMarker(where); // Create marker (Pinhead thingy) map.setCenter(where); // Center map on marker map.addOverlay(marker); // Add marker to map However if you don't want to look up the Lat and Lng for each city you can use Google's Geo Coder. Here's an example: var address = "El Cerrito, CA"; var geocoder = new GClientGeocoder; geocoder.getLatLng(address, function(point) { if (point) { map.clearOverlays(); // Clear all markers map.addOverlay(new GMarker(point)); // Add marker to map map.setCenter(point, 10); // Center and zoom map on marker } }); So I would just create an array of GLatLng's of every city from the GeoCoder and then draw them on the map.
[ "html", "google-maps" ]
9
11
4,908
5
0
2008-08-18T21:26:57.750000
2008-08-19T22:50:52.487000
15,254
15,343
Can placement new for arrays be used in a portable way?
Is it possible to actually make use of placement new in portable code when using it for arrays? It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case. The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption: #include <new> #include <stdio.h> class A { public: A(): data(0) {} virtual ~A() {} int data; }; int main() { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = new(pBuffer) A[NUMELEMENTS]; // With VC++, pA will be four bytes higher than pBuffer printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // Debug runtime will assert here due to heap corruption delete[] pBuffer; return 0; } Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap. So the question is: can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example: int main(int argc, char* argv[]) { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = (A*)pBuffer; for(int i = 0; i < NUMELEMENTS; ++i) { new (pA + i) A(); } printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // don't forget to destroy! for(int i = 0; i < NUMELEMENTS; ++i) { pA[i].~A(); } delete[] pBuffer; return 0; } Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks;) Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point:) Hope it helps in some way! Edit: The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
Can placement new for arrays be used in a portable way? Is it possible to actually make use of placement new in portable code when using it for arrays? It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case. The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption: #include <new> #include <stdio.h> class A { public: A(): data(0) {} virtual ~A() {} int data; }; int main() { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = new(pBuffer) A[NUMELEMENTS]; // With VC++, pA will be four bytes higher than pBuffer printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // Debug runtime will assert here due to heap corruption delete[] pBuffer; return 0; } Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap. So the question is: can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
TITLE: Can placement new for arrays be used in a portable way? QUESTION: Is it possible to actually make use of placement new in portable code when using it for arrays? It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case. The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption: #include <new> #include <stdio.h> class A { public: A(): data(0) {} virtual ~A() {} int data; }; int main() { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = new(pBuffer) A[NUMELEMENTS]; // With VC++, pA will be four bytes higher than pBuffer printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // Debug runtime will assert here due to heap corruption delete[] pBuffer; return 0; } Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap. So the question is: can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in. ANSWER: Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually.
For example: int main(int argc, char* argv[]) { const int NUMELEMENTS=20; char *pBuffer = new char[NUMELEMENTS*sizeof(A)]; A *pA = (A*)pBuffer; for(int i = 0; i < NUMELEMENTS; ++i) { new (pA + i) A(); } printf("Buffer address: %x, Array address: %x\n", pBuffer, pA); // don't forget to destroy! for(int i = 0; i < NUMELEMENTS; ++i) { pA[i].~A(); } delete[] pBuffer; return 0; } Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks;) Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point:) Hope it helps in some way! Edit: The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
[ "c++", "arrays", "compiler-construction", "portability", "overhead" ]
49
33
17,831
8
0
2008-08-18T21:33:35.863000
2008-08-18T22:53:05.003000
15,266
2,195,434
Objective-C/Cocoa: How do I accept a bad server certificate?
Using NSURLRequest, I am trying to access a web site that has an expired certificate. When I send the request, my connection:didFailWithError delegate method is invoked with the following info: -1203, NSURLErrorDomain, bad server certificate My searches have only turned up one solution: a hidden class method in NSURLRequest: [NSURLRequest setAllowsAnyHTTPSCertificate:YES forHost:myHost]; However, I don't want to use private APIs in a production app for obvious reasons. Any suggestions on what to do? Do I need to use CFNetwork APIs, and if so, two questions: Any sample code I can use to get started? I haven't found any online. If I use CFNetwork for this, do I have to ditch NSURL entirely? EDIT: iPhone OS 3.0 introduced a supported method for doing this. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert?
iPhone OS 3.0 introduced a supported way of doing this that doesn't require the lower-level CFNetwork APIs. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert?
Objective-C/Cocoa: How do I accept a bad server certificate? Using NSURLRequest, I am trying to access a web site that has an expired certificate. When I send the request, my connection:didFailWithError delegate method is invoked with the following info: -1203, NSURLErrorDomain, bad server certificate My searches have only turned up one solution: a hidden class method in NSURLRequest: [NSURLRequest setAllowsAnyHTTPSCertificate:YES forHost:myHost]; However, I don't want to use private APIs in a production app for obvious reasons. Any suggestions on what to do? Do I need to use CFNetwork APIs, and if so, two questions: Any sample code I can use to get started? I haven't found any online. If I use CFNetwork for this, do I have to ditch NSURL entirely? EDIT: iPhone OS 3.0 introduced a supported method for doing this. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert?
TITLE: Objective-C/Cocoa: How do I accept a bad server certificate? QUESTION: Using NSURLRequest, I am trying to access a web site that has an expired certificate. When I send the request, my connection:didFailWithError delegate method is invoked with the following info: -1203, NSURLErrorDomain, bad server certificate My searches have only turned up one solution: a hidden class method in NSURLRequest: [NSURLRequest setAllowsAnyHTTPSCertificate:YES forHost:myHost]; However, I don't want to use private APIs in a production app for obvious reasons. Any suggestions on what to do? Do I need to use CFNetwork APIs, and if so, two questions: Any sample code I can use to get started? I haven't found any online. If I use CFNetwork for this, do I have to ditch NSURL entirely? EDIT: iPhone OS 3.0 introduced a supported method for doing this. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert? ANSWER: iPhone OS 3.0 introduced a supported way of doing this that doesn't require the lower-level CFNetwork APIs. More details here: How to use NSURLConnection to connect with SSL for an untrusted cert?
[ "objective-c", "cocoa", "networking" ]
11
2
9,916
6
0
2008-08-18T21:40:01.657000
2010-02-03T21:05:10.617000
15,272
15,904
Aging Data Structure in C#
I want a data structure that will allow querying how many items in last X minutes. An item may just be a simple identifier or a more complex data structure, preferably the timestamp of the item will be in the item, rather than stored outside (as a hash or similar, wouldn't want to have problems with multiple items having same timestamp). So far it seems that with LINQ I could easily filter items with timestamp greater than a given time and aggregate a count. Though I'm hesitant to try to work .NET 3.5-specific stuff into my production environment yet. Are there any other suggestions for a similar data structure? The other part that I'm interested in is aging old data out. If I'm only going to be asking for counts of items less than 6 hours ago I would like anything older than that to be removed from my data structure because this may be a long-running program.
A simple linked list can be used for this. Basically you add new items to the end, and remove too old items from the start; it is a cheap data structure. example-code: list.push_end(new_data) while list.head.age >= age_limit: list.pop_head() If the list will be busy enough to warrant chopping off larger pieces than one at a time, then I agree with dmo, use a tree structure or something similar that allows pruning on a higher level.
Aging Data Structure in C# I want a data structure that will allow querying how many items in last X minutes. An item may just be a simple identifier or a more complex data structure, preferably the timestamp of the item will be in the item, rather than stored outside (as a hash or similar, wouldn't want to have problems with multiple items having same timestamp). So far it seems that with LINQ I could easily filter items with timestamp greater than a given time and aggregate a count. Though I'm hesitant to try to work .NET 3.5-specific stuff into my production environment yet. Are there any other suggestions for a similar data structure? The other part that I'm interested in is aging old data out. If I'm only going to be asking for counts of items less than 6 hours ago I would like anything older than that to be removed from my data structure because this may be a long-running program.
TITLE: Aging Data Structure in C# QUESTION: I want a data structure that will allow querying how many items in last X minutes. An item may just be a simple identifier or a more complex data structure, preferably the timestamp of the item will be in the item, rather than stored outside (as a hash or similar, wouldn't want to have problems with multiple items having same timestamp). So far it seems that with LINQ I could easily filter items with timestamp greater than a given time and aggregate a count. Though I'm hesitant to try to work .NET 3.5-specific stuff into my production environment yet. Are there any other suggestions for a similar data structure? The other part that I'm interested in is aging old data out. If I'm only going to be asking for counts of items less than 6 hours ago I would like anything older than that to be removed from my data structure because this may be a long-running program. ANSWER: A simple linked list can be used for this. Basically you add new items to the end, and remove too old items from the start; it is a cheap data structure. example-code: list.push_end(new_data) while list.head.age >= age_limit: list.pop_head() If the list will be busy enough to warrant chopping off larger pieces than one at a time, then I agree with dmo, use a tree structure or something similar that allows pruning on a higher level.
[ "c#", "linq", ".net-3.5", "data-structures" ]
3
3
1,209
3
0
2008-08-18T21:44:07.933000
2008-08-19T08:53:14.697000
15,302
15,324
What do the getUTC* methods on the date object do?
What does it mean when you get or create a date in UTC format in JavaScript?
A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York. To take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there). The getUTC... methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK. It's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC... functions will return those same values no matter where the user is.
What do the getUTC* methods on the date object do? What does it mean when you get or create a date in UTC format in JavaScript?
TITLE: What do the getUTC* methods on the date object do? QUESTION: What does it mean when you get or create a date in UTC format in JavaScript? ANSWER: A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York. To take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there). The getUTC... methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK. It's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC... functions will return those same values no matter where the user is.
[ "javascript", "date", "utc" ]
13
14
2,179
3
0
2008-08-18T22:14:34.427000
2008-08-18T22:32:34.233000
15,305
15,367
JNI and Java: ant calling make or make calling ant?
I'm just about to make my first trip into the world of JNI (Java Native Interface) to provide file system change notifications from platform specific C/C++ code to Java. That is unless someone suggests some brilliant library for doing this that I've missed. Being new to JNI I've managed to find much documentation on the interface side of it and library generation side of it, but I've not found much on building the native library. I've got an existing build based on ant for the pre-existing Java source, so I'm trying to work out if I should get ant to call make to create the library or if it's best to get make to call ant after creating the library? Neither option jumps out as being terribly nice, but both seem better than trying to get ant to call a compiler to compile the code and generate the library directly.
I strongly dislike make because of its implicit ruleset and treatment of whitespace. Personally, I would use cpptasks (http://ant-contrib.sourceforge.net/cpptasks/index.html) to do my C compilation. They are not as flexible as make, but they are also far less complex, and it will mean you don't have to burden your developers with learning make.
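For concreteness, a minimal cpptasks fragment might look like the following. Target, property and file names here are hypothetical; the `<cc>` task and the `cpptasks.tasks` taskdef come from the ant-contrib cpptasks project linked above:

```xml
<taskdef resource="cpptasks.tasks" classpath="lib/cpptasks.jar"/>

<target name="native" depends="compile">
  <!-- build the JNI shared library from the C sources -->
  <cc outtype="shared" outfile="${dist.dir}/filewatch" objdir="${obj.dir}">
    <fileset dir="src/native" includes="*.c"/>
    <includepath path="${env.JAVA_HOME}/include"/>
  </cc>
</target>
```

This keeps the whole build under ant, avoiding the ant-calls-make / make-calls-ant question entirely.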
[ "java", "ant", "makefile", "java-native-interface", "automake" ]
7
3
3,528
5
0
2008-08-18T22:15:39.087000
2008-08-18T23:19:59.910000
15,349
15,450
Which scripting language to support in an existing codebase?
I'm looking at adding scripting functionality to an existing codebase and am weighing up the pros/cons of various packages. Lua is probably the most obvious choice, but I was wondering if people have any other suggestions based on their experience. Scripts will be triggered upon certain events and may stay resident for a period of time. For example, upon startup a script may define several options which the program presents to the user as a number of buttons. Upon selecting one of these buttons, the program will notify the script, where further events may occur. These are the only real requirements: it must be a cross-platform library that is compilable from source; scripts must be able to call registered code-side functions; code must be able to call script-side functions; and it must be usable within a C/C++ codebase.
Based on my own experience: Python. IMHO this is a good choice. We have a pretty big code base with a lot of users and they like it a lot. Ruby. There are some really nice apps such as Google Sketchup that use this. I wrote a Sketchup plugin and thought it was pretty nice. Tcl. This is the old-school embeddable scripting language of choice, but it doesn't have a lot of momentum these days. It's high quality though, they use it on the Hubble Space Telescope! Lua. I've only done baby stuff with it but IIRC it only has a floating point numeric type, so make sure that's not a problem for the data you will be working with. We're lucky to be living in the golden age of scripting, so it's hard to make a bad choice if you choose from any of the popular ones.
[ "scripting", "data-driven" ]
7
3
501
9
0
2008-08-18T23:05:04.120000
2008-08-19T00:52:54.027000
15,366
15,368
Only accepting certain ajax requests from authenticated users
What's the best practice for making sure that certain ajax calls to certain pages are only accepted from authenticated users? For example: let's say that I have a main page called blog.php (I know, creativity abounds). Let's also say that there is a page called delete.php which looks for the parameter post_id and then deletes some entry from a database. In this very contrived example, there's some mechanism on blog.php which sends a request via ajax to delete.php to delete an entry. Now this mechanism is only going to be available to authenticated users on blog.php. But what's to stop someone from just calling delete.php with a bunch of random numbers and deleting everything in sight? I did a quick test where I set a session variable in blog.php and then did an ajax call to delete.php to return whether the session variable was set or not (it wasn't). What's the accepted way to handle this sort of thing? Update: OK, I must have been crazy the first time I tried this. I just did another test like the one I described above and it worked perfectly.
You were correct in trying to use session variables. Once your user authenticates, you should store that information in their session so that each subsequent page view will see it. Make sure you are calling session_start() on both pages (blog.php and delete.php) before accessing $_SESSION. Also make sure you have cookies enabled -- and if not, you should pass an additional parameter in the query string, usually PHPSESSID=<?php echo session_id(); ?>.
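As a concrete sketch of the session check on the delete.php side (the session key and variable names are illustrative, not from the original question):

```php
<?php
// delete.php
session_start();                       // must run before $_SESSION is read

// blog.php sets $_SESSION['user_id'] when the user authenticates
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');  // reject unauthenticated callers
    exit;
}

$post_id = isset($_POST['post_id']) ? (int) $_POST['post_id'] : 0;
// ...also verify this user is allowed to delete $post_id before deleting...
```

Note the authorization comment at the end: checking that the caller is logged in is not the same as checking they own the post they're deleting.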
[ "php", "ajax", "authentication" ]
9
7
1,842
2
0
2008-08-18T23:18:02.673000
2008-08-18T23:20:46.600000
15,376
224,568
What's the best UML diagramming tool?
I'm trying to choose a tool for creating UML diagrams of all flavours. Usability is a major criterion for me, but I'd still take more power with a steeper learning curve and be happy. Free (as in beer) would be nice, but I'd be willing to pay if the tool's worth it. What should I be using?
Some context: recently, for graduate school, I researched UML tools for usability and UML comprehension in general for an independent project. I also model/architect for a living. The previous posts have too many answers and not enough questions. A common misunderstanding is that UML is about creating diagrams. Sure, diagrams are important, but really you are creating a model. Here are the questions that should be answered, as each vendor product/solution does some things better than others. Note: the listed answers are my view of the best, even if other products support a given feature or need. Are you modeling or drawing? (Drawing - ArgoUML, free implementations, and Visio) Will you be modeling in the future? (For basic modeling - community editions of pay products) Do you want to formalize your modeling through profiles or meta-models? OCL? (Sparx, RSM, Visual Paradigm) Are you concerned about model portability, XMI support? (GenMyModel, Sparx, Visual Paradigm, Altova) Do you have an existing set of documents that you need to work with? (Depends on the documents) Would you want to generate code stubs or fully functioning code? (GenMyModel, Visual Paradigm, Sparx, Altova) Do you need more mature processes such as use case management, pattern creation, asset creation, RUP integration, etc.? (RSA/RSM/IBM Rational products) Detailed examples: IBM Rational Software Architect did not implement UML 2.0 all the way when it comes to "realizes" type relationships when creating a UML profile, but Visual Paradigm and Sparx got it right. Ok, that was way too detailed, so a simpler example would be ArgoUML, which has no code generation features and focuses on drawing more than the modeling aspect of UML. Sparx and Visual Paradigm do UML really well and generate code well; however, hooking into project lifecycles and other processes is where RSM/RSA is strong.
Watch out for closed or product-specific code generation processes or frameworks, as you could end up stuck with that product. This is a straight brain dump, so a couple of details may not be perfect; however, it should provide a general map of the questions and solutions worth looking into. NEW - Found a good list of many UML tools with descriptions: Wiki UML Tool List
[ "uml", "diagram", "modeling" ]
488
353
1,071,384
50
0
2008-08-18T23:29:17.643000
2008-10-22T05:32:27.497000
15,390
15,402
Best practices for managing and deploying large JavaScript apps
What are some standard practices for managing a medium-large JavaScript application? My concerns are both download speed for the browser and ease and maintainability of development. Our JavaScript code is roughly "namespaced" as: var Client = { var1: '', var2: '', accounts: { /* 100's of functions and variables */ }, orders: { /* 100's of functions and variables and subsections */ } /* etc, etc for a couple hundred kb */ } At the moment, we have one (unpacked, unstripped, highly readable) JavaScript file to handle all the business logic on the web application. In addition, there is jQuery and several jQuery extensions. The problem we face is that it takes forever to find anything in the JavaScript code and the browser still has a dozen files to download. Is it common to have a handful of "source" JavaScript files that get "compiled" into one final, compressed JavaScript file? Any other handy hints or best practices?
The approach that I've found works for me is having separate JS files for each class (just as you would in Java, C# and others). Alternatively, you can group your JS into application functional areas if that's easier for you to navigate. If you put all your JS files into one directory, you can have your server-side environment (PHP for instance) loop through each file in that directory and output a script tag for each one.
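A minimal sketch of the "several source files compiled into one" idea the question asks about (paths and file names are illustrative; real setups typically run a minifier such as YUI Compressor after the concatenation step):

```shell
# concatenate per-module "source" files into one deployable script
mkdir -p src build
printf 'var Client = {};\n'      > src/00-core.js
printf 'Client.accounts = {};\n' > src/10-accounts.js
printf 'Client.orders = {};\n'   > src/20-orders.js

cat src/*.js > build/app.js   # one file, one browser request
```

The numeric prefixes keep the concatenation order deterministic, so the namespace setup in 00-core.js always runs before the modules that extend it.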
[ "javascript", "web-applications", "deployment" ]
30
12
10,226
7
0
2008-08-18T23:45:14.553000
2008-08-19T00:02:42.913000
15,412
15,432
Is Single Responsibility Principle a rule of OOP?
An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: the Single Responsibility Principle (SRP). Is the Single Responsibility Principle really a rule of OOP? My understanding of the definition of Object-Oriented Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: encapsulation, polymorphism and inheritance. Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle. But is it a rule, and does it therefore imply that it shouldn't be broken?
Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong. As far as OOP goes, there isn't a single definition of object-orientedness, so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices. The classic idea of OOP is that messages are sent to otherwise opaque objects, and the objects interpret the message with knowledge of their own innards and then perform a function of some sort. SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something so that it hangs well together, without unrelated bits hanging off of it or multiple roles that intertwine and complicate things. Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you avoid jury-rigging an element to take the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want. But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
[ "oop" ]
21
22
3,790
6
0
2008-08-19T00:11:00.757000
2008-08-19T00:32:41.993000
15,414
15,420
Is it possible to coax Visual Studio 2008 into using italics for comments?
I'm quite used to my IDE setup in Delphi 7, where I have my comments rendered in italics. This doesn't appear to be possible in VS2008. There's only an option for bold font, not italics. Is there some kind of registry hack or font magic I can perform to get it to work? The font I'm using is Consolas, if that makes a difference. Edit: This is possible. See this answer for details. Adam, as far as I can tell, you can't change the font name for just comments - only the colour, and boldness. If I'm wrong, please tell me!
If you have a font editor, you can edit the italic variant of a font so that it reports itself as the bold variant; ticking "bold" for comments in Visual Studio then renders them in italics. Here's an example of it. (For VS 2005, but it should work all the same.)
[ "visual-studio", "fonts", "consolas" ]
3
3
2,130
8
0
2008-08-19T00:12:01.943000
2008-08-19T00:18:53.097000
15,423
15,437
How to change the icon of .bat file programmatically?
I'd like to know what's the way to actually set the icon of a .bat file to an arbitrary icon. How would I go about doing that programmatically, independently of the language I may be using?
Assuming you're referring to MS-DOS batch files: as it is simply a text file with a special extension, a .bat file doesn't store an icon of its own. You can, however, create a shortcut in the .lnk format that stores an icon.
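One programmatic route is a Windows Script Host snippet that creates such a shortcut. This is a sketch that only runs on Windows under cscript/wscript, and every path in it is hypothetical; the WScript.Shell methods used (CreateShortcut, TargetPath, IconLocation, Save) are the standard shortcut API:

```javascript
// create-shortcut.js -- run with: cscript create-shortcut.js  (Windows only)
var shell = new ActiveXObject("WScript.Shell");
var lnk = shell.CreateShortcut("C:\\Users\\me\\Desktop\\Run Backup.lnk");
lnk.TargetPath = "C:\\scripts\\backup.bat";     // the batch file to launch
lnk.IconLocation = "C:\\scripts\\backup.ico,0"; // icon file, icon index 0
lnk.Save();
```

The same calls are available from any language that can drive COM, which fits the "independently of the language" requirement.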
[ "windows", "icons", "batch-file" ]
51
49
262,922
11
0
2008-08-19T00:21:12.237000
2008-08-19T00:37:11.500000
15,470
15,474
How do I get rid of Home in ASP.Net MVC?
I know this site is written using ASP.NET MVC and I do not see "/Home" in the URL. This proves to me that it can be done. What special route do I need?
Just change "Home" to an empty string. routes.MapRoute( "Home", "", new { controller = "Home", action = "Index" } );
[ "asp.net-mvc", "asp.net-mvc-routing" ]
15
16
4,545
6
0
2008-08-19T01:02:59.030000
2008-08-19T01:07:34.903000
15,481
15,658
Java Coding standard / best practices - naming convention for break/continue labels
Sometimes a labeled break or continue can make code a lot more readable.

OUTERLOOP: for (; /*stuff*/ ;) {
    // ...lots of code
    if (isEnough()) break OUTERLOOP;
    // ...more code
}

I was wondering what the common convention for the labels was. All caps? First cap?
If you have to use them, use capitals; this draws attention to them and singles them out from being mistakenly interpreted as class names. Drawing attention to them has the additional benefit of catching the eye of someone who will come along, refactor your code and remove them. ;)
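A self-contained sketch of the convention in action (the class, method and loop bounds are made up for illustration):

```java
public class LabelDemo {
    // Scan a 3x3 grid; the capitalised label makes the two-level exit obvious.
    static int[] find(int target) {
        int[] pos = {-1, -1};
        OUTER:
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                if (i * 3 + j == target) {
                    pos[0] = i;
                    pos[1] = j;
                    break OUTER; // exits both loops, unlike a plain break
                }
            }
        }
        return pos;
    }

    public static void main(String[] args) {
        int[] p = find(4);
        System.out.println(p[0] + "," + p[1]); // 1,1
    }
}
```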
[ "java", "label", "convention" ]
24
20
22,647
10
0
2008-08-19T01:18:29.890000
2008-08-19T03:41:53.530000
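As a side note for readers coming from languages without labels: Python, for instance, has no labeled break, and the usual substitute for `break OUTERLOOP` is extracting the nested loops into a helper function and returning. A rough sketch (the function and names are illustrative):

```python
def find_first(grid, predicate):
    # Returning from the helper exits both loops at once, playing the role
    # of `break OUTERLOOP;` in the Java snippet above.
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if predicate(value):
                return (r, c)
    return None
```

This is also the refactoring the answer hints at: once the loops live in a well-named function, the label (and often the nested loop itself) disappears.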
15,486
15,494
Sorting an IList in C#
So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it. Turns out the IList interface doesn't have a sort method built in. I ended up using the ArrayList.Adapter(list).Sort(new MyComparer()) method to solve the problem but it just seemed a bit "ghetto" to me. I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method as well as casting to a List but none of these seemed overly elegant. So my question is, does anyone have an elegant solution to sorting an IList?
How about using LINQ To Objects to sort for you? Say you have an IList<Car>, and the Car class has an Engine property; I believe you could sort as follows: from c in list orderby c.Engine select c; Edit: You do need to be quick to get answers in here. As I presented a slightly different syntax to the other answers, I will leave my answer - however, the other answers presented are equally valid.
Sorting an IList in C# So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it. Turns out the IList interface doesn't have a sort method built in. I ended up using the ArrayList.Adapter(list).Sort(new MyComparer()) method to solve the problem but it just seemed a bit "ghetto" to me. I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method as well as casting to a List but none of these seemed overly elegant. So my question is, does anyone have an elegant solution to sorting an IList?
TITLE: Sorting an IList in C# QUESTION: So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it. Turns out the IList interface doesn't have a sort method built in. I ended up using the ArrayList.Adapter(list).Sort(new MyComparer()) method to solve the problem but it just seemed a bit "ghetto" to me. I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method as well as casting to a List but none of these seemed overly elegant. So my question is, does anyone have an elegant solution to sorting an IList? ANSWER: How about using LINQ To Objects to sort for you? Say you have an IList<Car>, and the Car class has an Engine property; I believe you could sort as follows: from c in list orderby c.Engine select c; Edit: You do need to be quick to get answers in here. As I presented a slightly different syntax to the other answers, I will leave my answer - however, the other answers presented are equally valid.
[ "c#", "generics", "sorting", "ilist" ]
97
57
128,875
15
0
2008-08-19T01:23:04.340000
2008-08-19T01:34:06.317000
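The LINQ answer above translates directly to other languages as "sort by a key function". Here is a rough Python analogue of `from c in list orderby c.Engine select c`; the `Car` class and `engine_size` field are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Car:
    # Illustrative type: the LINQ example's Engine property becomes a sort key.
    name: str
    engine_size: float

def sort_cars(cars):
    # sorted() returns a new list and leaves `cars` untouched, much as the
    # LINQ query does not mutate the source IList.
    return sorted(cars, key=lambda c: c.engine_size)
```

The non-mutating behavior matters here: like the LINQ query, this sidesteps the original complaint that the interface itself lacks an in-place Sort method.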
15,513
18,052
How do I track down performance problems with page rendering?
I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that All ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser. If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there a tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page.
Download ANTS Profiler; this will give you a perfect overview of the lines causing the slowdown. Also, when it's about rendering, make sure you don't use too many string concatenations (like string += "value") but use StringBuilders to improve performance.
How do I track down performance problems with page rendering? I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that All ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser. If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there a tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page.
TITLE: How do I track down performance problems with page rendering? QUESTION: I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that All ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser. If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there a tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page. ANSWER: Download ANTS Profiler; this will give you a perfect overview of the lines causing the slowdown. Also, when it's about rendering, make sure you don't use too many string concatenations (like string += "value") but use StringBuilders to improve performance.
[ "asp.net", "performance" ]
1
2
836
3
0
2008-08-19T01:51:44.113000
2008-08-20T14:08:51.110000
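When third-party controls can't be instrumented individually, the fallback the question describes, timing each section yourself, is a general technique. A minimal Python sketch of a reusable timing wrapper (this is not an ASP.NET API, just the idea):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    # Records the wall-clock time of the enclosed block into `results`,
    # standing in for instrumenting each control's Render call.
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start
```

Wrapping each suspect section with `timed(...)` and sorting `results` by value points at the worst offender, which is essentially what a profiler such as ANTS Profiler does for you automatically.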
15,514
15,677
Is there any way to disable the client-side validation for dojo date text box?
In my example below I'm using a dijit.form.DateTextBox: So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates. Without going into details as to why, I would like to be able to keep the dojoType and still prevent validation in particular circumstances.
Try overriding the validate method in your markup. This will work (just tested):
Is there any way to disable the client-side validation for dojo date text box? In my example below I'm using a dijit.form.DateTextBox: So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates. Without going into details as to why, I would like to be able to keep the dojoType and still prevent validation in particular circumstances.
TITLE: Is there any way to disable the client-side validation for dojo date text box? QUESTION: In my example below I'm using a dijit.form.DateTextBox: So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying The value entered is not valid. Even if I remove the constraints="{datePattern:'MM/dd/yyyy'}" it still validates. Without going into details as to why, I would like to be able to keep the dojoType and still prevent validation in particular circumstances. ANSWER: Try overriding the validate method in your markup. This will work (just tested):
[ "javascript", "dojo" ]
4
6
5,453
3
0
2008-08-19T01:51:46.703000
2008-08-19T04:13:01.923000
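The answer's technique, overriding `validate` so it always succeeds, is a general pattern: subclass the widget and replace the check with one that always passes. A Python sketch of the same idea, where `DateTextBox` and its crude date check are stand-ins invented here, not the real dijit widget:

```python
class DateTextBox:
    # Stand-in for a validating widget; NOT the real dijit.form.DateTextBox.
    def validate(self, value):
        # Crude MM/dd/yyyy shape check, standing in for dojo's real logic.
        return len(value) == 10 and value[2] == "/" and value[5] == "/"

class NonValidatingDateTextBox(DateTextBox):
    def validate(self, value):
        # Overriding validate to always pass disables the check, mirroring
        # an override of the widget's validate method in dojo markup.
        return True
```

Because everything else in the widget still calls `validate`, the rest of its behavior (the dojoType, formatting, popups) is untouched, which is exactly what the asker wanted.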
15,537
15,577
How do I generate ASCII codes 2 and 3 in a Bash command line?
If I press Ctrl + B that ought to give me ASCII code 2, but Ctrl + C is going to be interpreted as a Break. So I figure I've got to redirect a file in. How do I get these characters into a file?
Ctrl - V escapes the next keystroke. That's how you can get a Ctrl - C out: Ctrl - V Ctrl - C
How do I generate ASCII codes 2 and 3 in a Bash command line? If I press Ctrl + B that ought to give me ASCII code 2, but Ctrl + C is going to be interpreted as a Break. So I figure I've got to redirect a file in. How do I get these characters into a file?
TITLE: How do I generate ASCII codes 2 and 3 in a Bash command line? QUESTION: If I press Ctrl + B that ought to give me ASCII code 2, but Ctrl + C is going to be interpreted as a Break. So I figure I've got to redirect a file in. How do I get these characters into a file? ANSWER: Ctrl - V escapes the next keystroke. That's how you can get a Ctrl - C out: Ctrl - V Ctrl - C
[ "linux", "bash", "unix", "ascii" ]
3
10
3,744
3
0
2008-08-19T02:05:55.817000
2008-08-19T02:34:46.677000
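Terminal escaping aside, getting ASCII 2 (STX) and 3 (ETX) into a file is just a matter of writing raw bytes, which any scripting language can do without fighting the shell. A small Python sketch of the file-producing step (the function name is illustrative):

```python
# ASCII 2 (STX, Ctrl-B) and 3 (ETX, Ctrl-C) written directly to a file,
# sidestepping the shell's interpretation of Ctrl-C as an interrupt.
def write_control_chars(path):
    with open(path, "wb") as f:
        f.write(bytes([2, 3]))
```

Opening in binary mode (`"wb"`) matters: text mode could mangle control bytes on some platforms.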
15,593
15,627
Practical use of System.WeakReference
I understand what System.WeakReference does, but what I can't seem to grasp is a practical example of what it might be useful for. The class itself seems to me to be, well, a hack. It seems to me that there are other, better means of solving a problem where a WeakReference is used in examples I've seen. What's the canonical example of where you've really got to use a WeakReference? Aren't we trying to get farther away from this type of behavior and use of this class?
One useful example comes from the developers of the db4o object-oriented database. There, WeakReferences are used as a kind of light cache: it will keep your objects in memory only as long as your application does, allowing you to put a real cache on top. Another use would be in the implementation of weak event handlers. Currently, one big source of memory leaks in .NET applications is forgetting to remove event handlers. E.g. public MyForm() { MyApplication.Foo += someHandler; } See the problem? In the above snippet, MyForm will be kept alive in memory forever as long as MyApplication is alive in memory. Create 10 MyForms, close them all, your 10 MyForms will still be in memory, kept alive by the event handler. Enter WeakReference. You can build a weak event handler using WeakReferences so that someHandler is a weak event handler to MyApplication.Foo, thus fixing your memory leaks! This isn't just theory. Dustin Campbell from the DidItWith.NET blog posted an implementation of weak event handlers using System.WeakReference.
Practical use of System.WeakReference I understand what System.WeakReference does, but what I can't seem to grasp is a practical example of what it might be useful for. The class itself seems to me to be, well, a hack. It seems to me that there are other, better means of solving a problem where a WeakReference is used in examples I've seen. What's the canonical example of where you've really got to use a WeakReference? Aren't we trying to get farther away from this type of behavior and use of this class?
TITLE: Practical use of System.WeakReference QUESTION: I understand what System.WeakReference does, but what I can't seem to grasp is a practical example of what it might be useful for. The class itself seems to me to be, well, a hack. It seems to me that there are other, better means of solving a problem where a WeakReference is used in examples I've seen. What's the canonical example of where you've really got to use a WeakReference? Aren't we trying to get farther away from this type of behavior and use of this class? ANSWER: One useful example comes from the developers of the db4o object-oriented database. There, WeakReferences are used as a kind of light cache: it will keep your objects in memory only as long as your application does, allowing you to put a real cache on top. Another use would be in the implementation of weak event handlers. Currently, one big source of memory leaks in .NET applications is forgetting to remove event handlers. E.g. public MyForm() { MyApplication.Foo += someHandler; } See the problem? In the above snippet, MyForm will be kept alive in memory forever as long as MyApplication is alive in memory. Create 10 MyForms, close them all, your 10 MyForms will still be in memory, kept alive by the event handler. Enter WeakReference. You can build a weak event handler using WeakReferences so that someHandler is a weak event handler to MyApplication.Foo, thus fixing your memory leaks! This isn't just theory. Dustin Campbell from the DidItWith.NET blog posted an implementation of weak event handlers using System.WeakReference.
[ ".net", "garbage-collection" ]
39
49
7,611
4
0
2008-08-19T02:45:27.747000
2008-08-19T03:07:38.343000
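The semantics the answer relies on, a reference that does not keep its target alive, can be demonstrated concisely with Python's `weakref` module, which plays the same role as `System.WeakReference`. The `Handler` class below is illustrative, standing in for an event subscriber like the answer's MyForm:

```python
import weakref

class Handler:
    # Stand-in for an event subscriber such as the MyForm in the answer.
    def on_foo(self):
        pass

def demo_weak_reference():
    h = Handler()
    ref = weakref.ref(h)               # does NOT keep h alive
    alive_before = ref() is not None
    del h                              # last strong reference dropped
    alive_after = ref() is not None    # CPython frees immediately (refcounting)
    return alive_before, alive_after
```

A weak event system holds `ref` instead of `h`: once the subscriber is otherwise unreachable, `ref()` returns None and the dead handler can be skipped and pruned, avoiding the leak described above. (Note the immediate collection after `del h` is a CPython refcounting detail; in .NET the object becomes collectible but is freed whenever the GC runs.)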
15,621
15,651
Best way to structure a repository in Subversion for Visual Studio projects?
I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository. I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing.
Using the branch/trunk/tag repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common DLL projects that get used across multiple projects. This can definitely become tricky to manage. So the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications. Let's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers. Usually what you would do is create a new solution file, add a new project called StackOverflow.Web, add the existing Common.Helpers project, and then reference it from the new StackOverflow.Web project. What I usually try to do is create a repository for the Common.Helpers project and then in Subversion reference it as an external. That way you can keep the code under source control in a single location, but still use it separately in multiple projects.
Best way to structure a repository in Subversion for Visual Studio projects? I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository. I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing.
TITLE: Best way to structure a repository in Subversion for Visual Studio projects? QUESTION: I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository. I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing. ANSWER: Using the branch/trunk/tag repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common DLL projects that get used across multiple projects. This can definitely become tricky to manage. So the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications. Let's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers. Usually what you would do is create a new solution file, add a new project called StackOverflow.Web, add the existing Common.Helpers project, and then reference it from the new StackOverflow.Web project. What I usually try to do is create a repository for the Common.Helpers project and then in Subversion reference it as an external. That way you can keep the code under source control in a single location, but still use it separately in multiple projects.
[ "svn" ]
12
4
1,868
6
0
2008-08-19T03:03:30.973000
2008-08-19T03:33:39.977000
15,649
15,667
Can you really build a fast word processor with GoF Design Patterns?
The Gang of Four's Design Patterns uses a word processor as an example for at least a few of their patterns, particularly Composite and Flyweight. Other than by using C or C++, could you really use those patterns and the object-oriented overhead they entail to write a high-performing fully featured word processor? I know that Eclipse is written in Java but I haven't used it much so I don't know if it's all that fast or as polished as something like Visual Studio, which has a C++ based text editing system. I only used C++ and Java as examples. The question has more to do with the overhead of having a lot of in-memory objects like you would in an application such as a word processor or even a game. Design patterns promote abstraction at the expense of parsimony even though they usually point out when you might take some kind of performance hit. Word processors and especially games get the most benefit from being as close to the metal as possible. I was just wondering if anyone knew of a fast object-oriented word processor or text editor that wasn't written in C++, and whether they'd build one using patterns or forgo a lot of the abstracting away of things?
Flyweight really is just a way of conserving resources in situations where there are thousands of objects with intrinsic shared state, so it could be useful in higher level languages than C/C++. Maybe the GoF's example using glyphs in a document was not the best choice to illustrate this pattern. I think there's a lot more to building a high performance word processor than just these basic patterns though - not sure whether there is anything in GoF that rules out being able to successfully do this. Generally, Visual Studio (VS) is more advanced and performs significantly better than Eclipse - at least, the versions of VS that I have seen. Eclipse is one of the most impressive Java applications out there though, it runs quite well on more recent machines with lots of RAM.
Can you really build a fast word processor with GoF Design Patterns? The Gang of Four's Design Patterns uses a word processor as an example for at least a few of their patterns, particularly Composite and Flyweight. Other than by using C or C++, could you really use those patterns and the object-oriented overhead they entail to write a high-performing fully featured word processor? I know that Eclipse is written in Java but I haven't used it much so I don't know if it's all that fast or as polished as something like Visual Studio, which has a C++ based text editing system. I only used C++ and Java as examples. The question has more to do with the overhead of having a lot of in-memory objects like you would in an application such as a word processor or even a game. Design patterns promote abstraction at the expense of parsimony even though they usually point out when you might take some kind of performance hit. Word processors and especially games get the most benefit from being as close to the metal as possible. I was just wondering if anyone knew of a fast object-oriented word processor or text editor that wasn't written in C++, and whether they'd build one using patterns or forgo a lot of the abstracting away of things?
TITLE: Can you really build a fast word processor with GoF Design Patterns? QUESTION: The Gang of Four's Design Patterns uses a word processor as an example for at least a few of their patterns, particularly Composite and Flyweight. Other than by using C or C++, could you really use those patterns and the object-oriented overhead they entail to write a high-performing fully featured word processor? I know that Eclipse is written in Java but I haven't used it much so I don't know if it's all that fast or as polished as something like Visual Studio, which has a C++ based text editing system. I only used C++ and Java as examples. The question has more to do with the overhead of having a lot of in-memory objects like you would in an application such as a word processor or even a game. Design patterns promote abstraction at the expense of parsimony even though they usually point out when you might take some kind of performance hit. Word processors and especially games get the most benefit from being as close to the metal as possible. I was just wondering if anyone knew of a fast object-oriented word processor or text editor that wasn't written in C++, and whether they'd build one using patterns or forgo a lot of the abstracting away of things? ANSWER: Flyweight really is just a way of conserving resources in situations where there are thousands of objects with intrinsic shared state, so it could be useful in higher level languages than C/C++. Maybe the GoF's example using glyphs in a document was not the best choice to illustrate this pattern. I think there's a lot more to building a high performance word processor than just these basic patterns though - not sure whether there is anything in GoF that rules out being able to successfully do this. Generally, Visual Studio (VS) is more advanced and performs significantly better than Eclipse - at least, the versions of VS that I have seen. Eclipse is one of the most impressive Java applications out there though, it runs quite well on more recent machines with lots of RAM.
[ "java", "performance", "design-patterns", "ide", "text-processing" ]
7
7
3,671
7
0
2008-08-19T03:31:27.750000
2008-08-19T03:54:14.223000
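For readers unfamiliar with the GoF glyph example discussed above, the Flyweight idea is small enough to sketch: share one immutable object per distinct character and keep position/formatting outside it. An illustrative Python sketch (class names are invented here, not from the book's C++ code):

```python
class Glyph:
    # Intrinsic state only: the character. Position and font are extrinsic
    # and belong to the document, not the glyph.
    __slots__ = ("char",)

    def __init__(self, char):
        self.char = char

class GlyphFactory:
    # Flyweight factory: hands out one shared Glyph per distinct character.
    def __init__(self):
        self._pool = {}

    def get(self, char):
        if char not in self._pool:
            self._pool[char] = Glyph(char)
        return self._pool[char]
```

A million-character document then holds at most as many Glyph objects as there are distinct characters, which is the resource-conservation point the answer makes.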
15,674
230,005
Subversion revision number across multiple projects
When using Subversion (svn) for source control with multiple projects I've noticed that the revision number increases across all of my projects' directories. To illustrate my svn layout (using fictitious project names): /NinjaProg/branches /tags /trunk /StealthApp/branches /tags /trunk /SnailApp/branches /tags /trunk When I perform a commit to the trunk of the Ninja Program, let's say I get that it has been updated to revision 7. The next day let's say that I make a small change to the Stealth Application and it comes back as revision 8. The question is this: Is it commonly accepted practice, when maintaining multiple projects with one Subversion server, to have unrelated projects' revision numbers increase across all projects? Or am I doing it wrong and should be creating individual repositories for each project? Or is it something else entirely? EDIT: I delayed in flagging an answer because it had become clear that there are reasons for both approaches, and even though this question came first, I'd like to point to some other questions that are ultimately asking the same question: Should I store all projects in one repository or multiple? One SVN Repository or many?
I am surprised no one has mentioned that this is discussed in Version Control with Subversion, which is available free online, here. I read up on the issue a while back and it really seems like a matter of personal choice; there is a good blog post on the subject here. EDIT: Since the blog appears to be down ( archived version here ), here is some of what Mark Phippard had to say on the subject. These are some of the advantages of the single repository approach. Simplified administration. One set of hooks to deploy. One repository to back up. etc. Branch/tag flexibility. With the code all in one repository it makes it easier to create a branch or tag involving multiple projects. Move code easily. Perhaps you want to take a section of code from one project and use it in another, or turn it into a library for several projects. It is easy to move the code within the same repository and retain the history of the code in the process. Here are some of the drawbacks of the single repository approach, which are advantages of the multiple repository approach. Size. It might be easier to deal with many smaller repositories than one large one. For example, if you retire a project you can just archive the repository to media and remove it from the disk and free up the storage. Maybe you need to dump/load a repository for some reason, such as to take advantage of a new Subversion feature. This is easier to do and with less impact if it is a smaller repository. Even if you eventually want to do it to all of your repositories, it will have less impact to do them one at a time, assuming there is not a pressing need to do them all at once. Global revision number. Even though this should not be an issue, some people perceive it to be one and do not like to see the revision number advance on the repository and for inactive projects to have large gaps in their revision history. Access control. While Subversion's authz mechanism allows you to restrict access as needed to parts of the repository, it is still easier to do this at the repository level. If you have a project that only a select few individuals should access, this is easier to do with a single repository for that project. Administrative flexibility. If you have multiple repositories, then it is easier to implement different hook scripts based on the needs of the repository/projects. If you want uniform hook scripts, then a single repository might be better, but if each project wants its own commit email style then it is easier to have those projects in separate repositories. When you really think about it, the revision numbers in a multiple project repository are going to get high, but you are not going to run out. Keep in mind that you can view a history on a subdirectory and quickly see all the revision numbers that pertain to a project.
Subversion revision number across multiple projects When using Subversion (svn) for source control with multiple projects I've noticed that the revision number increases across all of my projects' directories. To illustrate my svn layout (using fictitious project names): /NinjaProg/branches /tags /trunk /StealthApp/branches /tags /trunk /SnailApp/branches /tags /trunk When I perform a commit to the trunk of the Ninja Program, let's say I get that it has been updated to revision 7. The next day let's say that I make a small change to the Stealth Application and it comes back as revision 8. The question is this: Is it commonly accepted practice, when maintaining multiple projects with one Subversion server, to have unrelated projects' revision numbers increase across all projects? Or am I doing it wrong and should be creating individual repositories for each project? Or is it something else entirely? EDIT: I delayed in flagging an answer because it had become clear that there are reasons for both approaches, and even though this question came first, I'd like to point to some other questions that are ultimately asking the same question: Should I store all projects in one repository or multiple? One SVN Repository or many?
TITLE: Subversion revision number across multiple projects QUESTION: When using Subversion (svn) for source control with multiple projects I've noticed that the revision number increases across all of my projects' directories. To illustrate my svn layout (using fictitious project names): /NinjaProg/branches /tags /trunk /StealthApp/branches /tags /trunk /SnailApp/branches /tags /trunk When I perform a commit to the trunk of the Ninja Program, let's say I get that it has been updated to revision 7. The next day let's say that I make a small change to the Stealth Application and it comes back as revision 8. The question is this: Is it commonly accepted practice, when maintaining multiple projects with one Subversion server, to have unrelated projects' revision numbers increase across all projects? Or am I doing it wrong and should be creating individual repositories for each project? Or is it something else entirely? EDIT: I delayed in flagging an answer because it had become clear that there are reasons for both approaches, and even though this question came first, I'd like to point to some other questions that are ultimately asking the same question: Should I store all projects in one repository or multiple? One SVN Repository or many? ANSWER: I am surprised no one has mentioned that this is discussed in Version Control with Subversion, which is available free online, here. I read up on the issue a while back and it really seems like a matter of personal choice; there is a good blog post on the subject here. EDIT: Since the blog appears to be down ( archived version here ), here is some of what Mark Phippard had to say on the subject. These are some of the advantages of the single repository approach. Simplified administration. One set of hooks to deploy. One repository to back up. etc. Branch/tag flexibility. With the code all in one repository it makes it easier to create a branch or tag involving multiple projects. Move code easily. Perhaps you want to take a section of code from one project and use it in another, or turn it into a library for several projects. It is easy to move the code within the same repository and retain the history of the code in the process. Here are some of the drawbacks of the single repository approach, which are advantages of the multiple repository approach. Size. It might be easier to deal with many smaller repositories than one large one. For example, if you retire a project you can just archive the repository to media and remove it from the disk and free up the storage. Maybe you need to dump/load a repository for some reason, such as to take advantage of a new Subversion feature. This is easier to do and with less impact if it is a smaller repository. Even if you eventually want to do it to all of your repositories, it will have less impact to do them one at a time, assuming there is not a pressing need to do them all at once. Global revision number. Even though this should not be an issue, some people perceive it to be one and do not like to see the revision number advance on the repository and for inactive projects to have large gaps in their revision history. Access control. While Subversion's authz mechanism allows you to restrict access as needed to parts of the repository, it is still easier to do this at the repository level. If you have a project that only a select few individuals should access, this is easier to do with a single repository for that project. Administrative flexibility. If you have multiple repositories, then it is easier to implement different hook scripts based on the needs of the repository/projects. If you want uniform hook scripts, then a single repository might be better, but if each project wants its own commit email style then it is easier to have those projects in separate repositories. When you really think about it, the revision numbers in a multiple project repository are going to get high, but you are not going to run out. Keep in mind that you can view a history on a subdirectory and quickly see all the revision numbers that pertain to a project.
[ "svn", "version-control", "repository" ]
24
10
11,786
17
0
2008-08-19T04:08:12.230000
2008-10-23T14:42:40.787000
15,678
15,724
Can I configure VisualStudio 2008 to always build the startup project?
I have a solution with several projects, where the startup project has a post-build event that does all the copying of "plugin" projects and other organizing tasks. After upgrading the solution from VS 2005 to VS 2008, it appears as though the post-build event only fires if I modify the startup project, which means my updated plugins don't get plugged in to the current debugging session. This makes sense, but it seems like a change in behavior. Is anyone else noticing a change in behavior with regard to which projects get built? Does anyone know of a workaround that I can use to force the startup project to rebuild whenever I hit F5? Perhaps I configured VS 2005 to work this way so long ago that I've forgotten all about it...
Why not just add a dependency to the "startup" project for each of the plugins? This will force the project to be rebuilt if any of the others change, and you won't have to mess with any other pre/post-build events.
Can I configure VisualStudio 2008 to always build the startup project? I have a solution with several projects, where the startup project has a post-build event that does all the copying of "plugin" projects and other organizing tasks. After upgrading the solution from VS 2005 to VS 2008, it appears as though the post-build event only fires if I modify the startup project, which means my updated plugins don't get plugged in to the current debugging session. This makes sense, but it seems like a change in behavior. Is anyone else noticing a change in behavior with regard to which projects get built? Does anyone know of a workaround that I can use to force the startup project to rebuild whenever I hit F5? Perhaps I configured VS 2005 to work this way so long ago that I've forgotten all about it...
TITLE: Can I configure VisualStudio 2008 to always build the startup project? QUESTION: I have a solution with several projects, where the startup project has a post-build event that does all the copying of "plugin" projects and other organizing tasks. After upgrading the solution from VS 2005 to VS 2008, it appears as though the post-build event only fires if I modify the startup project, which means my updated plugins don't get plugged in to the current debugging session. This makes sense, but it seems like a change in behavior. Is anyone else noticing a change in behavior with regard to which projects get built? Does anyone know of a workaround that I can use to force the startup project to rebuild whenever I hit F5? Perhaps I configured VS 2005 to work this way so long ago that I've forgotten all about it... ANSWER: Why not just add a dependency to the "startup" project for each of the plugins? This will force the project to be rebuilt if any of the others change, and you won't have to mess with any other pre/post-build events.
[ "visual-studio" ]
2
3
2,487
7
0
2008-08-19T04:13:07.510000
2008-08-19T04:51:16.387000
15,681
15,697
Developer Setup for Starting Out with Cocoa/Mac Programming
I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly? I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this." I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working... wrong. @Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
I'd suggest you pick a fun little product and dive in. If you're looking for a book I'd suggest Cocoa Programming for Mac OS X, which is a very good introduction both to Objective-C and Cocoa. XCode is pretty much the de facto IDE and free with OSX. It should be on your original install DVD. It's good but not as good as Visual Studio (sorry, it's really not). As a long-time VS user I found the default XCode config a little odd and hard to adjust to, particularly the way a new floating window would open for every sourcefile. Some tweaks I found particularly helpful:

Settings/General -> All-In-One (unifies editor/debugger window)
Settings/General -> Open counterparts in same editor (single-window edit)
Settings/Debugging -> "In Editor Debugger Controls"
Settings/Debugging -> "Auto Clear Debug Console"
Settings/Key-binding -> lots of bindings to match VS (Ctrl+F5/Shift+F5, Shift+Home, Shift+End, etc.)

I find the debugger has some annoying issues, such as breakpoints not correctly mapping to lines and exceptions not being immediately trapped by the debugger. Nothing deal-breaking but a bit cumbersome. I would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. They make for a heck of a lot less typing in many many places. They're limited to OSX 10.5 only though (yeah, language features are tied to OS versions, which is a bit odd). Also don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related but ARE different languages. Try and start Objective-C without thinking about how you'd do X, Y, Z in C/C++. It'll make it a lot easier.
Developer Setup for Starting Out with Cocoa/Mac Programming I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly? I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this." I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working... wrong. @Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help.
TITLE: Developer Setup for Starting Out with Cocoa/Mac Programming QUESTION: I'd like to start experimenting with Cocoa and programming for Mac OSX. I'm not terribly concerned with Objective C syntax/constructs/behaviors at this point, but more curious as to an efficient setup in terms of an editor and/or IDE that will get me going quickly. Is there any IDE even remotely similar to Visual Studio (since that's where I've spent most of my time over the last 7 years) in terms of its solution/project concept? Any other tools, tips, suggestions and/or resources to get up and experimenting quickly? I'd like to avoid a lot of the intro stuff and get into things like "If you want to create a Mac desktop application, you can use Acme IDE and set up your project like this." I was afraid Xcode was going to be the answer! :P I tried playing around with that -- literally just getting it off the DVD and just diving in with no idea what to expect (before I even knew that you used Objective C as the language). I figured, the wise guy that I am, that I could just sort of fumble around and get a simple app working... wrong. @Andrew - Thanks for the insight on those config settings. Based on my Xcode first impression, I think those may help. ANSWER: I'd suggest you pick a fun little product and dive in. If you're looking for a book I'd suggest Cocoa Programming for Mac OS X, which is a very good introduction both to Objective-C and Cocoa. XCode is pretty much the de facto IDE and free with OSX. It should be on your original install DVD. It's good but not as good as Visual Studio (sorry, it's really not). As a long-time VS user I found the default XCode config a little odd and hard to adjust to, particularly the way a new floating window would open for every sourcefile. 
Some tweaks I found particularly helpful; Settings/General -> All-In-One (unifies editor/debugger window) Settings/General -> Open counterparts in same editor (single-window edit) Settings/Debugging - "In Editor Debugger Controls" Settings/Debugging - "Auto Clear Debug Console" Settings/Key-binding - lots of binding to match VS (Ctrl+F5/Shift+F5,Shift+Home, Shift+End etc) I find the debugger has some annoying issues such as breakpoints not correctly mapping to lines and exceptions aren't immediately trapped by the debugger. Nothing deal-breaking but a bit cumbersome. I would recommend that you make use of the new property syntax that was introduced for Objective-C 2.0. They make for a heck of a lot less typing in many many places. They're limited to OSX 10.5 only though (yeah, language features are tied to OS versions which is a bit odd). Also don't be fooled into downplaying the differences between C/C++ and Objective-C. They're very much related but ARE different languages. Try and start Objective-C without thinking about how you'd do X,Y,Z in C/C++. It'll make it a lot easier.
[ "cocoa", "macos" ]
25
18
3,077
11
0
2008-08-19T04:15:52.993000
2008-08-19T04:30:35.897000
15,687
16,020
How can I convert all line endings to CRLF, LF, or CR during SVN operations
So, you are all ready to do a big SVN Commit and it bombs because you have inconsistent line endings in some of your files. Fun part is, you're looking at 1,000s of files spanning dozens of folders of different depths. What do you do?
I don't think the pre-commit hook can actually change the data that is being committed - it can disallow a commit, but I don't think it can do the conversion for you. It sounds like you want the property 'svn:eol-style' set to 'native' - this will automatically convert newlines to whatever is used on your platform (use 'CRLF', 'CR' or 'LF' to get those regardless of what the OS wants). You can use auto-properties so that all future files you create will have this property set (auto props are handled client-side, so you'd have to set this up for each user).
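Setting svn:eol-style fixes files going forward, but the backlog of inconsistent files described in the question usually needs a one-off normalization pass before the commit will go through. A minimal, hedged sketch of that pass (not part of the original answer; the function names, the extension list, and the choice of LF as the target are assumptions to adapt to your tree):

```python
import os

def normalize_eol(data: bytes, eol: bytes = b"\n") -> bytes:
    """Rewrite any mix of CRLF/CR/LF line endings to a single style.
    Order matters: collapse CRLF first so the lone-CR pass can't
    double-convert, then expand LF to the requested ending."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n").replace(b"\n", eol)

def normalize_tree(root: str,
                   extensions: tuple = (".c", ".h", ".txt"),
                   eol: bytes = b"\n") -> int:
    """Walk a working copy, normalize matching text files in place,
    and return how many files were rewritten. (Hypothetical helper,
    not an svn feature.)"""
    changed = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Never touch Subversion's administrative directories.
        dirnames[:] = [d for d in dirnames if d != ".svn"]
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                original = f.read()
            fixed = normalize_eol(original, eol)
            if fixed != original:
                with open(path, "wb") as f:
                    f.write(fixed)
                changed += 1
    return changed
```

Reading and writing in binary mode is deliberate: opening in text mode would let the platform's newline translation hide exactly the inconsistencies you are trying to fix.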
How can I convert all line endings to CRLF, LF, or CR during SVN operations So, you are all ready to do a big SVN Commit and it bombs because you have inconsistent line endings in some of your files. Fun part is, you're looking at 1,000s of files spanning dozens of folders of different depths. What do you do?
TITLE: How can I convert all line endings to CRLF, LF, or CR during SVN operations QUESTION: So, you are all ready to do a big SVN Commit and it bombs because you have inconsistent line endings in some of your files. Fun part is, you're looking at 1,000s of files spanning dozens of folders of different depths. What do you do? ANSWER: I don't think the pre-commit hook can actually change the data that is being committed - it can disallow a commit, but I don't think it can do the conversion for you. It sounds like you want the property 'svn:eol-style' set to 'native' - this will automatically convert newlines to whatever is used on your platform (use 'CRLF', 'CR' or 'LF' to get those regardless of what the OS wants). You can use auto-properties so that all future files you create will have this property set (auto props are handled client-side, so you'd have to set this up for each user).
[ "svn", "version-control", "eol" ]
24
32
20,276
5
0
2008-08-19T04:22:34.057000
2008-08-19T11:35:09.270000
15,690
15,702
How do you begin designing a large system?
It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema. I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec? A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal. If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running. UPDATE: Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach. Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
How do you begin designing a large system? It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema. I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec? A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue.
TITLE: How do you begin designing a large system? QUESTION: It's been mentioned to me that I'll be the sole developer behind a large new system. Among other things I'll be designing a UI and database schema. I'm sure I'll receive some guidance, but I'd like to be able to knock their socks off. What can I do in the meantime to prepare, and what will I need to keep in mind when I sit down at my computer with the spec? A few things to keep in mind: I'm a college student at my first real programming job. I'll be using Java. We already have SCM set up with automated testing, etc...so tools are not an issue. ANSWER: Do you know much about OOP? If so, look into Spring and Hibernate to keep your implementation clean and orthogonal. If you get that, you should find TDD a good way to keep your design compact and lean, especially since you have "automated testing" up and running. UPDATE: Looking at the first slew of answers, I couldn't disagree more. Particularly in the Java space, you should find plenty of mentors/resources on working out your application with Objects, not a database-centric approach. Database design is typically the first step for Microsoft folks (which I do daily, but am in a recovery program, er, Alt.Net). If you keep the focus on what you need to deliver to a customer and let your ORM figure out how to persist your objects, your design should be better.
[ "java", "oop", "architecture" ]
10
11
5,194
10
0
2008-08-19T04:26:33.347000
2008-08-19T04:32:30.200000
15,694
15,746
Is the .NET Client Profile worth targeting?
I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations: Windows XP SP2+ Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003. In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed. I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth it. Are there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large number of people. It would then be a question of whether my application targeted those individuals specifically. Has anyone else determined if it is worth the extra effort to target these specific users? Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.
Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .NET Framework v3.5 SP1, and if v3.5 SP1 is already installed you don't need to install anything. The assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, you shouldn't need to do any additional testing. My thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it. As for the OS requirements, WPF won't run on pre-XP SP2, so if you need to run on other OSes, then you'll have to use WinForms anyways. EDIT: On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.: Actually so does FF3+3.5sp1: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)
Is the .NET Client Profile worth targeting? I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations: Windows XP SP2+ Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003. In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed. I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth it. Are there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large number of people. It would then be a question of whether my application targeted those individuals specifically. Has anyone else determined if it is worth the extra effort to target these specific users? Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+.
TITLE: Is the .NET Client Profile worth targeting? QUESTION: I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations: Windows XP SP2+ Windows Server 2003 Edit: Appears the Client Profile will not install on Windows Server 2003. In addition, the client profile is not valid for x64 or ia64 editions; and will also not install if any previous version of the .NET Framework has been installed. I'm wondering if the effort in adding the extra OS configurations to the testing matrix is worth it. Are there any metrics available that state the percentage of users that could possibly benefit from the client profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed would be a large number of people. It would then be a question of whether my application targeted those individuals specifically. Has anyone else determined if it is worth the extra effort to target these specific users? Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration. Of course, this configuration will still need to be tested, but it should be as simple as testing if the install/initial run works on XP with SP2+. ANSWER: Ultimately, it will not hurt any users if you target the Client Profile. This is because the client profile is a subset of the .NET Framework v3.5 SP1, and if v3.5 SP1 is already installed you don't need to install anything. 
The assemblies in the client profile are the same binaries as the full framework, so unless you're loading assemblies dynamically, you shouldn't need to do any additional testing. My thinking is that unless you must use assemblies which are NOT in the client profile, then you should target it. As for the OS requirements, WPF won't run on pre-XP SP2, so if you need to run on other OSes, then you'll have to use WinForms anyways. EDIT: On IE, yes. It sends the .NET Framework version as part of the UA string, e.g.: Actually so does FF3+3.5sp1: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1 (.NET CLR 3.5.30729)
[ ".net", ".net-client-profile" ]
8
5
880
3
0
2008-08-19T04:28:54.310000
2008-08-19T05:16:56.537000
15,695
16,385
Good Stripes tutorials / examples?
The company I just started working for is using Stripes for parts of its web page development these days, and while it seems to be a nice enough web framework, no one really uses it -- it is almost nonexistent on the 'net. It's not even first in its own Google search, and the result you do get is for its old home page. So, do any of you people use Stripes? Of your own volition? Do you know of any good tutorials / examples?
I've never used (or even heard of) Stripes. Regardless, there's a book from The Pragmatic Bookshelf called Stripes: ...and Java Web Development Is Fun Again that may be worth checking out. You could also check out the Stripes mailing list archive.
Good Stripes tutorials / examples? The company I just started working for is using Stripes for parts of its web page development these days, and while it seems to be a nice enough web framework, no one really uses it -- it is almost nonexistent on the 'net. It's not even first in its own Google search, and the result you do get is for its old home page. So, do any of you people use Stripes? Of your own volition? Do you know of any good tutorials / examples?
TITLE: Good Stripes tutorials / examples? QUESTION: The company I just started working for is using Stripes for parts of its web page development these days, and while it seems to be a nice enough web framework, no one really uses it -- it is almost nonexistent on the 'net. It's not even first in its own Google search, and the result you do get is for its old home page. So, do any of you people use Stripes? Of your own volition? Do you know of any good tutorials / examples? ANSWER: I've never used (or even heard of) Stripes. Regardless, there's a book from The Pragmatic Bookshelf called Stripes: ...and Java Web Development Is Fun Again that may be worth checking out. You could also check out the Stripes mailing list archive.
[ "java", "stripes" ]
8
4
8,192
5
0
2008-08-19T04:29:40.283000
2008-08-19T15:19:16.040000
15,708
15,722
How can I determine whether a specific file is open in Windows?
One of my favourite tools for linux is lsof - a real swiss army knife! Today I found myself wondering which programs on a WinXP system had a specific file open. Is there any equivalent utility to lsof? Additionally, the file in question was over a network share so I'm not sure if that complicates matters.
Use Process Explorer from the Sysinternals Suite, the Find Handle or DLL function will let you search for the process with that file open.
How can I determine whether a specific file is open in Windows? One of my favourite tools for linux is lsof - a real swiss army knife! Today I found myself wondering which programs on a WinXP system had a specific file open. Is there any equivalent utility to lsof? Additionally, the file in question was over a network share so I'm not sure if that complicates matters.
TITLE: How can I determine whether a specific file is open in Windows? QUESTION: One of my favourite tools for linux is lsof - a real swiss army knife! Today I found myself wondering which programs on a WinXP system had a specific file open. Is there any equivalent utility to lsof? Additionally, the file in question was over a network share so I'm not sure if that complicates matters. ANSWER: Use Process Explorer from the Sysinternals Suite, the Find Handle or DLL function will let you search for the process with that file open.
[ "windows", "linux", "command-line", "filesystems" ]
106
88
164,329
10
0
2008-08-19T04:37:35.243000
2008-08-19T04:48:32.587000
15,716
15,803
Design problems with .Net UserControl
I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the ListView stays that way until I compile again, and then it reverts back to the default state. How do I get my design changes to stick for the ListView?
You need to decorate the ListView property with the DesignerSerializationVisibility attribute, like so:

[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public ListView MyListView
{
    get { return this.listView1; }
}

This tells the designer's code generator to output code for it.
Design problems with .Net UserControl I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the ListView stays that way until I compile again, and then it reverts back to the default state. How do I get my design changes to stick for the ListView?
TITLE: Design problems with .Net UserControl QUESTION: I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the ListView stays that way until I compile again, and then it reverts back to the default state. How do I get my design changes to stick for the ListView? ANSWER: You need to decorate the ListView property with the DesignerSerializationVisibility attribute, like so: [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public ListView MyListView { get { return this.listView1; } } This tells the designer's code generator to output code for it.
[ "c#", "user-controls", ".net-2.0" ]
3
6
868
3
0
2008-08-19T04:42:31.910000
2008-08-19T06:28:41.603000
15,731
26,865
Looking for a specific FireFox extension / program for Form posting
I am looking for either a FireFox extension, or a similar program, that allows you to craft GET and POST requests. The user would put in a form action, and as many form key/value pairs as desired. It would also send any cookie information (or send the current cookies from any domain the user chooses.) The Web Developer add-on is almost what I'm looking for; it lets you quickly see the form keys, but it doesn't let you change them or add new ones (which leads to a lot of painful JavaScript in the address bar...)
Actually I think Poster is what you're looking for. A screenshot of an older Poster version
Looking for a specific FireFox extension / program for Form posting I am looking for either a FireFox extension, or a similar program, that allows you to craft GET and POST requests. The user would put in a form action, and as many form key/value pairs as desired. It would also send any cookie information (or send the current cookies from any domain the user chooses.) The Web Developer add-on is almost what I'm looking for; it lets you quickly see the form keys, but it doesn't let you change them or add new ones (which leads to a lot of painful JavaScript in the address bar...)
TITLE: Looking for a specific FireFox extension / program for Form posting QUESTION: I am looking for either a FireFox extension, or a similar program, that allows you to craft GET and POST requests. The user would put in a form action, and as many form key/value pairs as desired. It would also send any cookie information (or send the current cookies from any domain the user chooses.) The Web Developer add-on is almost what I'm looking for; it lets you quickly see the form keys, but it doesn't let you change them or add new ones (which leads to a lot of painful JavaScript in the address bar...) ANSWER: Actually I think Poster is what you're looking for. A screenshot of an older Poster version
[ "javascript", "html" ]
4
4
1,133
8
0
2008-08-19T04:59:03.533000
2008-08-25T20:53:56.150000
15,732
16,054
How to validate an XML file against an XSD file?
I'm generating some xml files that need to conform to an xsd file that was given to me. How should I verify they conform?
The Java runtime library supports validation. Last time I checked this was the Apache Xerces parser under the covers. You should probably use a javax.xml.validation.Validator.

import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.*;
import java.net.URL;
import org.xml.sax.SAXException;
import java.io.File;        // used by the StreamSource below
import java.io.IOException;
...
URL schemaFile = new URL("http://host:port/filename.xsd");
// webapp example xsd:
// URL schemaFile = new URL("http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd");
// local file example:
// File schemaFile = new File("/location/to/localfile.xsd");
// etc.
Source xmlFile = new StreamSource(new File("web.xml"));
SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
try {
    Schema schema = schemaFactory.newSchema(schemaFile);
    Validator validator = schema.newValidator();
    validator.validate(xmlFile);
    System.out.println(xmlFile.getSystemId() + " is valid");
} catch (SAXException e) {
    System.out.println(xmlFile.getSystemId() + " is NOT valid, reason: " + e);
} catch (IOException e) {
    // I/O error reading the XML or the schema
}

The schema factory constant is the string http://www.w3.org/2001/XMLSchema which defines XSDs. The above code validates a WAR deployment descriptor against the URL http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd but you could just as easily validate against a local file. You should not use the DOMParser to validate a document (unless your goal is to create a document object model anyway). This will start creating DOM objects as it parses the document - wasteful if you aren't going to use them.
How to validate an XML file against an XSD file? I'm generating some xml files that need to conform to an xsd file that was given to me. How should I verify they conform?
TITLE: How to validate an XML file against an XSD file? QUESTION: I'm generating some xml files that needs to conform to an xsd file that was given to me. How should I verify they conform? ANSWER: The Java runtime library supports validation. Last time I checked this was the Apache Xerces parser under the covers. You should probably use a javax.xml.validation.Validator. import javax.xml.XMLConstants; import javax.xml.transform.Source; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.*; import java.net.URL; import org.xml.sax.SAXException; //import java.io.File; // if you use File import java.io.IOException;... URL schemaFile = new URL("http://host:port/filename.xsd"); // webapp example xsd: // URL schemaFile = new URL("http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"); // local file example: // File schemaFile = new File("/location/to/localfile.xsd"); // etc. Source xmlFile = new StreamSource(new File("web.xml")); SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); try { Schema schema = schemaFactory.newSchema(schemaFile); Validator validator = schema.newValidator(); validator.validate(xmlFile); System.out.println(xmlFile.getSystemId() + " is valid"); } catch (SAXException e) { System.out.println(xmlFile.getSystemId() + " is NOT valid reason:" + e); } catch (IOException e) {} The schema factory constant is the string http://www.w3.org/2001/XMLSchema which defines XSDs. The above code validates a WAR deployment descriptor against the URL http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd but you could just as easily validate against a local file. You should not use the DOMParser to validate a document (unless your goal is to create a document object model anyway). This will start creating DOM objects as it parses the document - wasteful if you aren't going to use them.
[ "java", "xml", "validation", "xsd" ]
287
355
344,221
13
0
2008-08-19T04:59:04.750000
2008-08-19T12:21:16.883000
15,734
775,195
Google Analytics Access with C#
I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#?
Update: Google launched a Google Analytics API today. Google Analytics Blog - API Launched
Google Analytics Access with C# I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#?
TITLE: Google Analytics Access with C# QUESTION: I know that there is no official API for Google Analytics but is there a way to access Google Analytics Reports with C#? ANSWER: Update: Google launched a Google Analytics API today. Google Analytics Blog - API Launched
[ "c#", "google-analytics", "google-analytics-api" ]
15
15
14,610
9
0
2008-08-19T05:01:52.183000
2009-04-22T00:00:25.427000
15,744
15,758
How do you use #define?
I'm wondering about instances when it makes sense to use #define and #if statements. I've known about them for a while, but never incorporated them into my way of coding. How exactly does this affect the compilation? Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, is the only way to exclude it from compilation to remove this #define statement?
In C#, #define macros like some of Bernard's examples are not allowed. The only common use of #define / #if in C# is for adding optional debug-only code. For example: static void Main(string[] args) { #if DEBUG // this only compiles if in DEBUG Console.WriteLine("DEBUG"); #endif #if !DEBUG // this only compiles if not in DEBUG Console.WriteLine("RELEASE"); #endif // This always compiles Console.ReadLine(); }
How do you use #define? I'm wondering about instances when it makes sense to use #define and #if statements. I've known about them for a while, but never incorporated them into my way of coding. How exactly does this affect the compilation? Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, is the only way to exclude it from compilation to remove this #define statement?
TITLE: How do you use #define? QUESTION: I'm wondering about instances when it makes sense to use #define and #if statements. I've known about them for a while, but never incorporated them into my way of coding. How exactly does this affect the compilation? Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, is the only way to exclude it from compilation to remove this #define statement? ANSWER: In C#, #define macros like some of Bernard's examples are not allowed. The only common use of #define / #if in C# is for adding optional debug-only code. For example: static void Main(string[] args) { #if DEBUG // this only compiles if in DEBUG Console.WriteLine("DEBUG"); #endif #if !DEBUG // this only compiles if not in DEBUG Console.WriteLine("RELEASE"); #endif // This always compiles Console.ReadLine(); }
[ "c#" ]
18
14
44,581
8
0
2008-08-19T05:16:34.980000
2008-08-19T05:33:49.390000
15,774
157,767
How do I automate repetitive tasks post-build?
I run an ASP.NET website solution with a few other projects in it. I've known that MSBuild projects are capable of this, but is it the best way? Are they easy to create? Is nAnt, CruiseControl.NET or any other solution better? When I build the site (using Web Deployment Projects ), can I automate part of the build so that it does not copy certain folders from the project into the Release folder? For instance, I have folders with local search indexes, images and other content part of the folder, but I never need or upload those when deploying the project. I'm also looking toward this type of solution to automatically increment build and version numbers.
Here's an example of a Web Deployment Project scripting this sort of task in the .wdproj file: This would allow you to delete a folder. (I suspect that if you wanted to not have the folder copy over at all, the solution file would be the place to specify that, though I haven't had to use that.)
How do I automate repetitive tasks post-build? I run an ASP.NET website solution with a few other projects in it. I've known that MSBuild projects are capable of this, but is it the best way? Are they easy to create? Is nAnt, CruiseControl.NET or any other solution better? When I build the site (using Web Deployment Projects ), can I automate part of the build so that it does not copy certain folders from the project into the Release folder? For instance, I have folders with local search indexes, images and other content part of the folder, but I never need or upload those when deploying the project. I'm also looking toward this type of solution to automatically increment build and version numbers.
TITLE: How do I automate repetitive tasks post-build? QUESTION: I run an ASP.NET website solution with a few other projects in it. I've known that MSBuild projects are capable of this, but is it the best way? Are they easy to create? Is nAnt, CruiseControl.NET or any other solution better? When I build the site (using Web Deployment Projects ), can I automate part of the build so that it does not copy certain folders from the project into the Release folder? For instance, I have folders with local search indexes, images and other content part of the folder, but I never need or upload those when deploying the project. I'm also looking toward this type of solution to automatically increment build and version numbers. ANSWER: Here's an example of a Web Deployment Project scripting this sort of task in the .wdproj file: This would allow you to delete a folder. (I suspect that if you wanted to not have the folder copy over at all, the solution file would be the place to specify that, though I haven't had to use that.)
[ "c#", "asp.net", "visual-studio-2005", "msbuild", "web-deployment-project" ]
4
8
2,479
7
0
2008-08-19T05:54:18.913000
2008-10-01T14:07:15.423000
15,788
15,819
Minimize javascript HTTP calls from AjaxControlToolkit controls?
I love the ease that the ASP.NET Ajax Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external Javascript calls to /ScriptResource.axd?d=xxxx Is there any way to control this? Why does it suck so much? What's a better Ajax toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here.
I always preferred to write my Ajax calls in javascript using JQuery or Prototype. ASP.NET Ajax Toolkit does make things easier, but it never seems to do so elegantly. I personally would make a new Calendar Controller. This way you can control the AJAX (using JQuery/Prototype) calls that are being made.
Minimize javascript HTTP calls from AjaxControlToolkit controls? I love the ease that the ASP.NET Ajax Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external Javascript calls to /ScriptResource.axd?d=xxxx Is there any way to control this? Why does it suck so much? What's a better Ajax toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here.
TITLE: Minimize javascript HTTP calls from AjaxControlToolkit controls? QUESTION: I love the ease that the ASP.NET Ajax Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external Javascript calls to /ScriptResource.axd?d=xxxx Is there any way to control this? Why does it suck so much? What's a better Ajax toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here. ANSWER: I always preferred to write my Ajax calls in javascript using JQuery or Prototype. ASP.NET Ajax Toolkit does make things easier, but it never seems to do so elegantly. I personally would make a new Calendar Controller. This way you can control the AJAX (using JQuery/Prototype) calls that are being made.
[ "javascript", "asp.net", "ajax", "frontend" ]
4
3
638
2
0
2008-08-19T06:14:42.770000
2008-08-19T07:06:39.903000
15,798
15,931
How do I validate xml against a DTD file in Python
I need to validate an XML string (and not a file) against a DTD description file. How can that be done in python?
Another good option is lxml's validation, which I find quite pleasant to use. A simple example taken from the lxml site: from StringIO import StringIO from lxml import etree dtd = etree.DTD(StringIO("""<!ELEMENT foo EMPTY>""")) root = etree.XML("<foo/>") print(dtd.validate(root)) # True root = etree.XML("<foo>bar</foo>") print(dtd.validate(root)) # False print(dtd.error_log.filter_from_errors()) # <string>:1:0:ERROR:VALID:DTD_NOT_EMPTY: Element foo was declared EMPTY this one has content
How do I validate xml against a DTD file in Python I need to validate an XML string (and not a file) against a DTD description file. How can that be done in python?
TITLE: How do I validate xml against a DTD file in Python QUESTION: I need to validate an XML string (and not a file) against a DTD description file. How can that be done in python? ANSWER: Another good option is lxml's validation, which I find quite pleasant to use. A simple example taken from the lxml site: from StringIO import StringIO from lxml import etree dtd = etree.DTD(StringIO("""<!ELEMENT foo EMPTY>""")) root = etree.XML("<foo/>") print(dtd.validate(root)) # True root = etree.XML("<foo>bar</foo>") print(dtd.validate(root)) # False print(dtd.error_log.filter_from_errors()) # <string>:1:0:ERROR:VALID:DTD_NOT_EMPTY: Element foo was declared EMPTY this one has content
[ "python", "xml", "validation", "dtd" ]
31
33
10,947
2
0
2008-08-19T06:24:54.217000
2008-08-19T09:39:56.927000
15,805
15,808
Why are my auto-run applications acting weird on Vista?
The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users. This feature was implemented not so long ago and for a while all was well, but when we started testing this feature on Vista the product started behaving really weird on startup. Specifically, our product makes use of another product (let's call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product). This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the " Startup " folder inside the " Start Menu ", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well. We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail. Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon?
This is the effect of a new feature in Vista called "Boxing": Windows has several mechanisms that allow the user/admin to set up applications to run automatically when Windows starts. This feature is mostly used for one of these purposes: 1. Programs that are part of the basic work environment of the user, such that the first action the user would usually take when starting the computer is to start them. 2. All sorts of background "agents" - Skype, Messenger, Winamp etc. When too many (or too heavy) programs are registered to run on startup, the end result is that the user can't actually do anything for the first few seconds/minutes after login, which can be really annoying. In comes Vista's "Boxing" feature: briefly, Vista forces all programs invoked through the Run key to operate at low priority for the first 60 seconds after login. This affects both I/O priority (which is set to Very Low) and CPU priority. Very Low priority I/O requests do not pass through the file cache, but go directly to disk. Thus, they are much slower than regular I/O. The length of the boxing period is set by the registry value "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DelayedApps\Delay_Sec". For a more detailed explanation see here and here
Why are my auto-run applications acting weird on Vista? The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users. This feature was implemented not so long ago and for a while all was well, but when we started testing this feature on Vista the product started behaving really weird on startup. Specifically, our product makes use of another product (lets call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product). This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the " Startup " folder inside the " Start Menu ", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well. We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail. Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon?
TITLE: Why are my auto-run applications acting weird on Vista? QUESTION: The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users. This feature was implemented not so long ago and for a while all was well, but when we started testing this feature on Vista the product started behaving really weird on startup. Specifically, our product makes use of another product (let's call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product). This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the " Startup " folder inside the " Start Menu ", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well. We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail. Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon? ANSWER: This is the effect of a new feature in Vista called "Boxing": Windows has several mechanisms that allow the user/admin to set up applications to run automatically when Windows starts. This feature is mostly used for one of these purposes: 1. Programs that are part of the basic work environment of the user, such that the first action the user would usually take when starting the computer is to start them. 2. All sorts of background "agents" - Skype, Messenger, Winamp etc. 
When too many (or too heavy) programs are registered to run on startup, the end result is that the user can't actually do anything for the first few seconds/minutes after login, which can be really annoying. In comes Vista's "Boxing" feature: briefly, Vista forces all programs invoked through the Run key to operate at low priority for the first 60 seconds after login. This affects both I/O priority (which is set to Very Low) and CPU priority. Very Low priority I/O requests do not pass through the file cache, but go directly to disk. Thus, they are much slower than regular I/O. The length of the boxing period is set by the registry value "HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DelayedApps\Delay_Sec". For a more detailed explanation see here and here
[ "windows-vista", "virtual-pc" ]
6
6
648
2
0
2008-08-19T06:36:13.290000
2008-08-19T06:43:55.987000
15,815
15,919
SSRS - Post Publishing Tasks
As part of the publishing "best practices" I came up with on my own, I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose users associated with each report or have to re-hide reports. Is there an automated process I can use to hide reports or add users after deploying from Visual Studio?
Paul Stovell posted some examples of Reporting Services automation that might get you going. EDIT: The link to the Subversion repository has been updated and is now working
SSRS - Post Publishing Tasks As part of the publishing "best practices" I came up with on my own, I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose users associated with each report or have to re-hide reports. Is there an automated process I can use to hide reports or add users after deploying from Visual Studio?
TITLE: SSRS - Post Publishing Tasks QUESTION: As part of the publishing "best practices" I came up with on my own, I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose users associated with each report or have to re-hide reports. Is there an automated process I can use to hide reports or add users after deploying from Visual Studio? ANSWER: Paul Stovell posted some examples of Reporting Services automation that might get you going. EDIT: The link to the Subversion repository has been updated and is now working
[ "reporting-services", "automation" ]
0
0
183
1
0
2008-08-19T06:56:29.427000
2008-08-19T09:26:49.967000
15,816
1,083,668
Changing the resolution of a VNC session in linux
I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900. If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time. Is there any good way to resize a VNC session on the fly? My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64.
Real VNC server 4.4 includes support for Xrandr, which allows resizing the VNC. Start the server with: vncserver -geometry 1600x1200 -randr 1600x1200,1440x900,1024x768 Then resize with: xrandr -s 1600x1200 xrandr -s 1440x900 xrandr -s 1024x768
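The invocations above can be wrapped in a small helper; the function names below are hypothetical, and only the flags and modes come from the answer:

```python
# Hypothetical helpers that build the vncserver/xrandr command lines
# from the answer; pass the result to subprocess.run() to invoke them.
def vncserver_cmd(geometry, modes):
    """Start command: advertise every mode you may later switch to."""
    return ["vncserver", "-geometry", geometry, "-randr", ",".join(modes)]

def resize_cmd(mode):
    """Resize command: switch the running session to one of those modes."""
    return ["xrandr", "-s", mode]

modes = ["1600x1200", "1440x900", "1024x768"]
print(vncserver_cmd("1600x1200", modes))
print(resize_cmd("1440x900"))
```

The key design point from the answer: every geometry you want to resize to at runtime must already be listed in `-randr` when the server starts.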
Changing the resolution of a VNC session in linux I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900. If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time. Is there any good way to resize a VNC session on the fly? My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64.
TITLE: Changing the resolution of a VNC session in linux QUESTION: I use VNC to connect to a Linux workstation at work. At work I have a 20" monitor that runs at 1600x1200, while at home I use my laptop with its resolution of 1440x900. If I set the vncserver to run at 1440x900 I miss out on a lot of space on my monitor, whereas if I set it to run at 1600x1200 it doesn't fit on the laptop's screen, and I have to scroll it all the time. Is there any good way to resize a VNC session on the fly? My VNC server is RealVNC E4.x (I don't remember the exact version) running on SuSE64. ANSWER: Real VNC server 4.4 includes support for Xrandr, which allows resizing the VNC. Start the server with: vncserver -geometry 1600x1200 -randr 1600x1200,1440x900,1024x768 Then resize with: xrandr -s 1600x1200 xrandr -s 1440x900 xrandr -s 1024x768
[ "linux", "vnc" ]
176
178
586,049
15
0
2008-08-19T06:56:39.440000
2009-07-05T08:18:58.250000
15,828
16,051
Reading Excel files from C#
Is there a free or open source library to read Excel files (.xls) directly from a C# program? It does not need to be too fancy, just to select a worksheet and read the data as strings. So far, I've been using Export to Unicode text function of Excel, and parsing the resulting (tab-delimited) file, but I'd like to eliminate the manual step.
var fileName = string.Format("{0}\\fileNameHere", Directory.GetCurrentDirectory()); var connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0; data source={0}; Extended Properties=Excel 8.0;", fileName); var adapter = new OleDbDataAdapter("SELECT * FROM [workSheetNameHere$]", connectionString); var ds = new DataSet(); adapter.Fill(ds, "anyNameHere"); DataTable data = ds.Tables["anyNameHere"]; This is what I usually use. It is a little different because I usually stick an AsEnumerable() at the end of the tables: var data = ds.Tables["anyNameHere"].AsEnumerable(); as this lets me use LINQ to search and build structs from the fields. var query = data.Where(x => x.Field<string>("phoneNumber") != string.Empty).Select(x => new MyContact { firstName = x.Field<string>("First Name"), lastName = x.Field<string>("Last Name"), phoneNumber = x.Field<string>("Phone Number"), });
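The Where/Select at the end of the answer is a plain filter-and-project; the same shape can be sketched in Python over a list of dicts (the rows and column names below are invented stand-ins for the DataTable, not from the answer):

```python
# Hypothetical rows standing in for the DataTable; column names invented.
rows = [
    {"First Name": "Ada", "Last Name": "Lovelace", "Phone Number": "555-0100"},
    {"First Name": "Alan", "Last Name": "Turing", "Phone Number": ""},
]

# Keep rows that have a phone number, then project each one into a
# contact record -- the same filter-and-project shape as the LINQ query.
contacts = [
    {
        "firstName": r["First Name"],
        "lastName": r["Last Name"],
        "phoneNumber": r["Phone Number"],
    }
    for r in rows
    if r["Phone Number"] != ""
]
print(contacts)  # only the row with a phone number survives the filter
```

In LINQ to DataSet the equivalent of the dict lookup is `Field<T>("column")`; the comprehension above mirrors the `Where(...).Select(...)` chain one-to-one.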
Reading Excel files from C# Is there a free or open source library to read Excel files (.xls) directly from a C# program? It does not need to be too fancy, just to select a worksheet and read the data as strings. So far, I've been using Export to Unicode text function of Excel, and parsing the resulting (tab-delimited) file, but I'd like to eliminate the manual step.
TITLE: Reading Excel files from C# QUESTION: Is there a free or open source library to read Excel files (.xls) directly from a C# program? It does not need to be too fancy, just to select a worksheet and read the data as strings. So far, I've been using the Export to Unicode text function of Excel, and parsing the resulting (tab-delimited) file, but I'd like to eliminate the manual step. ANSWER: var fileName = string.Format("{0}\\fileNameHere", Directory.GetCurrentDirectory()); var connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0; data source={0}; Extended Properties=Excel 8.0;", fileName); var adapter = new OleDbDataAdapter("SELECT * FROM [workSheetNameHere$]", connectionString); var ds = new DataSet(); adapter.Fill(ds, "anyNameHere"); DataTable data = ds.Tables["anyNameHere"]; This is what I usually use. It is a little different because I usually stick an AsEnumerable() at the end of the tables: var data = ds.Tables["anyNameHere"].AsEnumerable(); as this lets me use LINQ to search and build structs from the fields. var query = data.Where(x => x.Field<string>("phoneNumber") != string.Empty).Select(x => new MyContact { firstName = x.Field<string>("First Name"), lastName = x.Field<string>("Last Name"), phoneNumber = x.Field<string>("Phone Number"), });
[ "c#", ".net", "excel", "ms-office" ]
232
152
535,091
32
0
2008-08-19T07:23:46.703000
2008-08-19T12:18:20.197000
15,838
15,846
Optimizing a search algorithm in C
Can the performance of this sequential search algorithm (taken from The Practice of Programming ) be improved using any of C's native utilities, e.g. if I set the i variable to be a register variable? int lookup(char *word, char *array[]) { int i; for (i = 0; array[i] != NULL; i++) if (strcmp(word, array[i]) == 0) return i; return -1; }
Yes, but only very slightly. A much bigger performance improvement can be achieved by using better algorithms (for example keeping the list sorted and doing a binary search). In general optimizing a given algorithm only gets you so far. Choosing a better algorithm (even if it's not completely optimized) can give you a considerable (order of magnitude) performance improvement.
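The answer's suggestion (keep the list sorted, then binary search) can be sketched in Python rather than the question's C; `bisect` supplies the binary search:

```python
# Sketch of the answer's suggestion: sort once, then binary-search in
# O(log n) instead of scanning linearly in O(n).
from bisect import bisect_left

def lookup(word, sorted_words):
    """Return the index of word in sorted_words, or -1 if absent."""
    i = bisect_left(sorted_words, word)
    if i < len(sorted_words) and sorted_words[i] == word:
        return i
    return -1

words = sorted(["delta", "alpha", "charlie", "bravo"])
print(lookup("charlie", words))  # 2
print(lookup("zulu", words))     # -1
```

Note the trade-off: the returned index is a position in the sorted list, not in the original unsorted array, so callers that depend on the original ordering need an extra mapping.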
Optimizing a search algorithm in C Can the performance of this sequential search algorithm (taken from The Practice of Programming ) be improved using any of C's native utilities, e.g. if I set the i variable to be a register variable? int lookup(char *word, char *array[]) { int i; for (i = 0; array[i] != NULL; i++) if (strcmp(word, array[i]) == 0) return i; return -1; }
TITLE: Optimizing a search algorithm in C QUESTION: Can the performance of this sequential search algorithm (taken from The Practice of Programming ) be improved using any of C's native utilities, e.g. if I set the i variable to be a register variable? int lookup(char *word, char *array[]) { int i; for (i = 0; array[i] != NULL; i++) if (strcmp(word, array[i]) == 0) return i; return -1; } ANSWER: Yes, but only very slightly. A much bigger performance improvement can be achieved by using better algorithms (for example keeping the list sorted and doing a binary search). In general optimizing a given algorithm only gets you so far. Choosing a better algorithm (even if it's not completely optimized) can give you a considerable (order of magnitude) performance improvement.
[ "c", "performance", "algorithm", "optimization" ]
9
24
3,809
10
0
2008-08-19T07:28:31.967000
2008-08-19T07:33:20.943000
15,851
15,946
Refactoring away labeled loops
After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code. I have a square matrix and a vector that has the same length. The vector already has some values in it and, depending on the values in the matrix, the vector is changed in the loop. I hope the code fragment is basically understandable… vectorLoop: for( int idx = 0; idx < vectorLength; idx++) { if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop; matrixLoop: for( rowIdx = 0; rowIdx < n; rowIdx++ ) { if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop; if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop; } setValueInVector( v, idx ); } Please convince me, that there is a more readable/better version without the labels.
Looking at the solutions presented so far: They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything. Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly. Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed. All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
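For comparison, here is the extract-method variant the answer evaluates, rendered in Python (which has no labeled loops); the predicate bodies are invented stubs, and only the control flow mirrors the question:

```python
# Invented stubs standing in for the question's predicates.
def condition_at_vector_position(v, idx):
    return v[idx] is not None        # slot already filled

def another_condition_at_vector(v, row):
    return v[row] == 0               # row flagged by the vector

def condition_at_matrix_row_col(m, row, idx):
    return m[row][idx] < 0           # negative entry vetoes the slot

def row_vetoes(v, m, idx):
    """Replaces `continue vectorLoop` inside the inner loop: does any
    matrix row veto filling position idx?"""
    for row in range(len(m)):
        if another_condition_at_vector(v, row):
            continue                 # was `continue matrixLoop`
        if condition_at_matrix_row_col(m, row, idx):
            return True
    return False

def fill_vector(v, m, value=1):
    for idx in range(len(v)):
        if condition_at_vector_position(v, idx):
            continue
        if not row_vetoes(v, m, idx):
            v[idx] = value
    return v

v = [None, None, 0]
m = [[1, -1, 2],
     [3,  4, 5],
     [6,  7, 8]]
print(fill_vector(v, m))  # [1, None, 0]
```

This illustrates the answer's point: the helper merely relocates the inner loop, trading the label for an extra function boundary rather than simplifying the logic.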
Refactoring away labeled loops After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code. I have a square matrix and a vector that has the same length. The vector already has some values in it and, depending on the values in the matrix, the vector is changed in the loop. I hope the code fragment is basically understandable… vectorLoop: for( int idx = 0; idx < vectorLength; idx++) { if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop; matrixLoop: for( rowIdx = 0; rowIdx < n; rowIdx++ ) { if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop; if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop; } setValueInVector( v, idx ); } Please convince me, that there is a more readable/better version without the labels.
TITLE: Refactoring away labeled loops QUESTION: After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code. I have a square matrix and a vector that has the same length. The vector already has some values in it and, depending on the values in the matrix, the vector is changed in the loop. I hope the code fragment is basically understandable… vectorLoop: for( int idx = 0; idx < vectorLength; idx++) { if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop; matrixLoop: for( rowIdx = 0; rowIdx < n; rowIdx++ ) { if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop; if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop; } setValueInVector( v, idx ); } Please convince me, that there is a more readable/better version without the labels. ANSWER: Looking at the solutions presented so far: They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything. Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly. Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed. All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
[ "java", "refactoring", "label" ]
19
34
2,126
12
0
2008-08-19T07:42:02.407000
2008-08-19T10:14:51.523000
15,871
15,877
Getting developers to use a wiki
I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a Mediawiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute. I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel. The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need. Any hints?
As I mentioned before, a Wiki is very unorganized. However, if that is the only argument from your developers, then invest some effort to create a simple index page and keep it updated (either do it yourself or ask people to link their contributions to the index). That way, the Wiki might grow into a very nice and quite comprehensive collection of documentation for all your work.
Getting developers to use a wiki I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a Mediawiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute. I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel. The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need. Any hints?
TITLE: Getting developers to use a wiki QUESTION: I work on a complex application where different teams work on their own modules with a degree of overlap. A while back we got a Mediawiki instance set up, partly at my prompting. I have a hard job getting people to actually use it, let alone contribute. I can see a lot of benefit in sharing information. It may at least reduce the times we reinvent the wheel. The wiki is not very structured, but I'm not sure that is a problem as long as you can search for what you need. Any hints? ANSWER: As I mentioned before, a Wiki is very unorganized. However, if that is the only argument from your developers, then invest some effort to create a simple index page and keep it updated (either do it yourself or ask people to link their contributions to the index). That way, the Wiki might grow into a very nice and quite comprehensive collection of documentation for all your work.
[ "documentation", "wiki", "mediawiki" ]
17
8
1,692
13
0
2008-08-19T07:57:20.100000
2008-08-19T08:05:53.623000
15,880
19,859
Read from .msg files
I need to read Outlook .MSG files in .NET without using the COM API for Outlook (cos it will not be installed on the machines that my app will run on). Are there any free 3rd party libraries to do that? I want to extract From, To, CC and BCC fields. Sent/Receive date fields would be good if they are also stored in MSG files.
Update: I have found a 3rd party COM library called Outlook Redemption which is working fine for me at the moment. If you use it via COM-Interop in .NET, don't forget to release every COM object after you are done with it, otherwise your application crashes randomly.
Read from .msg files I need to read Outlook .MSG files in .NET without using the COM API for Outlook (cos it will not be installed on the machines that my app will run on). Are there any free 3rd party libraries to do that? I want to extract From, To, CC and BCC fields. Sent/Receive date fields would be good if they are also stored in MSG files.
TITLE: Read from .msg files QUESTION: I need to read Outlook .MSG files in .NET without using the COM API for Outlook (cos it will not be installed on the machines that my app will run on). Are there any free 3rd party libraries to do that? I want to extract From, To, CC and BCC fields. Sent/Receive date fields would be good if they are also stored in MSG files. ANSWER: Update: I have found a 3rd party COM library called Outlook Redemption which is working fine for me at the moment. If you use it via COM-Interop in .NET, don't forget to release every COM object after you are done with it, otherwise your application crashes randomly.
[ "c#", "outlook", "email", "message", "msg" ]
18
9
42,868
7
0
2008-08-19T08:07:28.037000
2008-08-21T13:27:06.583000
15,899
16,019
How to insert/replace XML tag in XmlDocument?
I have an XmlDocument in Java, created with the Weblogic XmlDocument parser. I want to replace the content of a tag in this XMLDocument with my own data, or insert the tag if it isn't there. For example I want to insert a URL in the location tag: http://something but otherwise leave the XML as is. Currently I use an XMLCursor: XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options); XmlCursor xmlcur = xmlobj.newCursor(); while (xmlcur.hasNextToken()) { boolean found = false; if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) { xmlcur.setTextValue("http://replaced"); System.out.println("replaced"); found = true; } else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) { xmlcur.push(); } else if (xmlcur.isEnddoc()) { if (!found) { xmlcur.pop(); xmlcur.toEndToken(); xmlcur.insertElementWithText("schema-location", "http://inserted"); System.out.println("inserted"); } } xmlcur.toNextToken(); } I tried to find a "quick" xquery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy. Does anyone have a better way than this? It seems a bit elaborate.
How about an XPath based approach? I like this approach as the logic is super-easy to understand. The code is pretty much self-documenting. If your xml document is available to you as an org.w3c.dom.Document object (as most parsers return), then you could do something like the following: // get the list of customdata nodes NodeList customDataNodeSet = findNodes(document, "//customdata" ); for (int i=0; i < customDataNodeSet.getLength(); i++) { Node customDataNode = customDataNodeSet.item( i ); // get the location nodes (if any) within this one customdata node NodeList locationNodeSet = findNodes(customDataNode, "location" ); if (locationNodeSet.getLength() > 0) { // replace locationNodeSet.item( 0 ).setTextContent( "http://stackoverflow.com/" ); } else { // insert Element newLocationNode = document.createElement( "location" ); newLocationNode.setTextContent("http://stackoverflow.com/" ); customDataNode.appendChild( newLocationNode ); } } And here's the helper method findNodes that does the XPath search. private NodeList findNodes( Object obj, String xPathString ) throws XPathExpressionException { XPath xPath = XPathFactory.newInstance().newXPath(); XPathExpression expression = xPath.compile( xPathString ); return (NodeList) expression.evaluate( obj, XPathConstants.NODESET ); }
How to insert/replace XML tag in XmlDocument? I have an XmlDocument in Java, created with the Weblogic XmlDocument parser. I want to replace the content of a tag in this XMLDocument with my own data, or insert the tag if it isn't there. For example I want to insert a URL in the location tag: http://something but otherwise leave the XML as is. Currently I use an XMLCursor: XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options); XmlCursor xmlcur = xmlobj.newCursor(); while (xmlcur.hasNextToken()) { boolean found = false; if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) { xmlcur.setTextValue("http://replaced"); System.out.println("replaced"); found = true; } else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) { xmlcur.push(); } else if (xmlcur.isEnddoc()) { if (!found) { xmlcur.pop(); xmlcur.toEndToken(); xmlcur.insertElementWithText("schema-location", "http://inserted"); System.out.println("inserted"); } } xmlcur.toNextToken(); } I tried to find a "quick" xquery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy. Does anyone have a better way than this? It seems a bit elaborate.
TITLE: How to insert/replace XML tag in XmlDocument? QUESTION: I have an XmlDocument in Java, created with the Weblogic XmlDocument parser. I want to replace the content of a tag in this XMLDocument with my own data, or insert the tag if it isn't there. For example I want to insert a URL in the location tag: http://something but otherwise leave the XML as is. Currently I use an XMLCursor: XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options); XmlCursor xmlcur = xmlobj.newCursor(); while (xmlcur.hasNextToken()) { boolean found = false; if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) { xmlcur.setTextValue("http://replaced"); System.out.println("replaced"); found = true; } else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) { xmlcur.push(); } else if (xmlcur.isEnddoc()) { if (!found) { xmlcur.pop(); xmlcur.toEndToken(); xmlcur.insertElementWithText("schema-location", "http://inserted"); System.out.println("inserted"); } } xmlcur.toNextToken(); } I tried to find a "quick" xquery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy. Does anyone have a better way than this? It seems a bit elaborate. ANSWER: How about an XPath based approach? I like this approach as the logic is super-easy to understand. The code is pretty much self-documenting.
If your xml document is available to you as an org.w3c.dom.Document object (as most parsers return), then you could do something like the following: // get the list of customdata nodes NodeList customDataNodeSet = findNodes(document, "//customdata" ); for (int i=0; i < customDataNodeSet.getLength(); i++) { Node customDataNode = customDataNodeSet.item( i ); // get the location nodes (if any) within this one customdata node NodeList locationNodeSet = findNodes(customDataNode, "location" ); if (locationNodeSet.getLength() > 0) { // replace locationNodeSet.item( 0 ).setTextContent( "http://stackoverflow.com/" ); } else { // insert Element newLocationNode = document.createElement( "location" ); newLocationNode.setTextContent("http://stackoverflow.com/" ); customDataNode.appendChild( newLocationNode ); } } And here's the helper method findNodes that does the XPath search. private NodeList findNodes( Object obj, String xPathString ) throws XPathExpressionException { XPath xPath = XPathFactory.newInstance().newXPath(); XPathExpression expression = xPath.compile( xPathString ); return (NodeList) expression.evaluate( obj, XPathConstants.NODESET ); }
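The replace-or-insert pattern in the answer is not XPath-specific; for comparison, here is the same logic in Python's standard xml.etree.ElementTree (the sample XML below is invented, since the question's original markup was lost in extraction):

```python
import xml.etree.ElementTree as ET

def set_location(root, url):
    """For each <customdata> element, replace the text of its <location>
    child, creating the child first if it does not exist."""
    for customdata in root.iter("customdata"):
        location = customdata.find("location")
        if location is None:
            location = ET.SubElement(customdata, "location")  # insert case
        location.text = url                                   # replace case

doc = ET.fromstring(
    "<root><customdata><location>old</location></customdata>"
    "<customdata/></root>")
set_location(doc, "http://inserted")
```

The find-then-branch shape mirrors the answer's `locationNodeSet.getLength() > 0` check exactly.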
[ "java", "xml", "xquery" ]
6
5
5,327
4
0
2008-08-19T08:46:08.207000
2008-08-19T11:31:10.433000
15,917
142,767
Data Auditing in NHibernate and SqlServer
I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface. What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article?
For NHibernate 2.0, you should also look at Event Listeners. These are the evolution of the IInterceptor interface and we use them successfully for auditing.
Data Auditing in NHibernate and SqlServer I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface. What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article?
TITLE: Data Auditing in NHibernate and SqlServer QUESTION: I'm using NHibernate on a project and I need to do data auditing. I found this article on codeproject which discusses the IInterceptor interface. What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article? ANSWER: For NHibernate 2.0, you should also look at Event Listeners. These are the evolution of the IInterceptor interface and we use them successfully for auditing.
[ "sql-server", "nhibernate", "audit" ]
15
14
8,594
6
0
2008-08-19T09:24:08.583000
2008-09-27T02:16:25.247000
15,949
16,168
Java+Tomcat, Dying database connection?
I have a Tomcat instance set up, but the database connection I have configured in context.xml keeps dying after periods of inactivity. When I check the logs I get the following error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 68051 seconds ago. The last packet sent successfully to the server was 68051 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Here is the configuration in context.xml: I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before. I have also verified that all database connections are being closed properly.
Tomcat Documentation DBCP uses the Jakarta-Commons Database Connection Pool. It relies on a number of Jakarta-Commons components: * Jakarta-Commons DBCP * Jakarta-Commons Collections * Jakarta-Commons Pool This attribute may help you out. removeAbandonedTimeout="60" I'm using the same connection pooling stuff and I'm setting these properties to prevent the same thing; it's just not configured through Tomcat. But if the first thing doesn't work, try these. testWhileIdle=true timeBetweenEvictionRunsMillis=300000
Java+Tomcat, Dying database connection? I have a Tomcat instance set up, but the database connection I have configured in context.xml keeps dying after periods of inactivity. When I check the logs I get the following error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 68051 seconds ago. The last packet sent successfully to the server was 68051 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Here is the configuration in context.xml: I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before. I have also verified that all database connections are being closed properly.
TITLE: Java+Tomcat, Dying database connection? QUESTION: I have a Tomcat instance set up, but the database connection I have configured in context.xml keeps dying after periods of inactivity. When I check the logs I get the following error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 68051 seconds ago. The last packet sent successfully to the server was 68051 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Here is the configuration in context.xml: I am using autoReconnect=true like the error says to do, but the connection keeps dying. I have never seen this happen before. I have also verified that all database connections are being closed properly. ANSWER: Tomcat Documentation DBCP uses the Jakarta-Commons Database Connection Pool. It relies on a number of Jakarta-Commons components: * Jakarta-Commons DBCP * Jakarta-Commons Collections * Jakarta-Commons Pool This attribute may help you out. removeAbandonedTimeout="60" I'm using the same connection pooling stuff and I'm setting these properties to prevent the same thing; it's just not configured through Tomcat. But if the first thing doesn't work, try these. testWhileIdle=true timeBetweenEvictionRunsMillis=300000
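The testWhileIdle/validation advice boils down to one rule: never hand out a pooled connection without a cheap liveness check first. A Python sketch of that borrow-time validation, using a fake connection factory (all names here are hypothetical; in DBCP the same idea is configured as testOnBorrow plus a validationQuery such as SELECT 1):

```python
class ValidatingPool:
    """Discard idle connections that fail a cheap validation check,
    opening a fresh one instead, so callers never see a dead connection."""

    def __init__(self, connect, validate):
        self._connect = connect    # factory: opens a new connection
        self._validate = validate  # liveness check, e.g. runs "SELECT 1"
        self._idle = []

    def borrow(self):
        while self._idle:
            conn = self._idle.pop()
            if self._validate(conn):   # the "test on borrow" step
                return conn            # still alive: reuse it
        return self._connect()         # every idle connection was dead

    def give_back(self, conn):
        self._idle.append(conn)
```

This is why validation fixes the wait_timeout symptom even when autoReconnect does not: the dead connection is detected and replaced before application code ever touches it.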
[ "java", "mysql", "database", "tomcat" ]
17
11
45,001
4
0
2008-08-19T10:17:24.090000
2008-08-19T13:40:10.283000
16,003
16,175
LocationProvider
We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers. Is there a Provider that can be used to, well, provide the equivalent of the entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class. Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't?
Why don't you create rights & profiles to manage which pages a user can see? I usually create a user class which implements the IPrincipal security interface. On every request to your application, you check the rights of a particular user and output the SiteMap's nodes allowed for this user.
LocationProvider We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers. Is there a Provider that can be used to, well, provide the equivalent of the entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class. Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't?
TITLE: LocationProvider QUESTION: We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers. Is there a Provider that can be used to, well, provide the equivalent of the entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class. Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't? ANSWER: Why don't you create rights & profiles to manage which pages a user can see? I usually create a user class which implements the IPrincipal security interface. On every request to your application, you check the rights of a particular user and output the SiteMap's nodes allowed for this user.
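The role check the answer describes is, structurally, just a filter over the sitemap node tree. A language-neutral Python sketch of that filtering (the node layout is invented); note that hiding menu nodes alone does not stop direct URL entry, so as the answer says, the same check must also run on every request:

```python
def visible_nodes(nodes, user_roles):
    """Keep only nodes whose required roles intersect the user's roles,
    recursing into children. Nodes without a 'roles' key are public."""
    allowed = []
    for node in nodes:
        required = node.get("roles")
        if required is None or set(required) & set(user_roles):
            kept = dict(node)
            kept["children"] = visible_nodes(node.get("children", []), user_roles)
            allowed.append(kept)
    return allowed
```

Because the filter recurses, a secured parent automatically hides its whole subtree from users who lack the role.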
[ "asp.net", "security", "sitemap" ]
2
1
182
1
0
2008-08-19T11:22:00.873000
2008-08-19T13:44:31.223000
16,007
16,039
How do I convert a file path to a URL in ASP.NET
Basically I have some code to check a specific directory to see if an image is there and if so I want to assign a URL to the image to an ImageControl. if (System.IO.Directory.Exists(photosLocation)) { string[] files = System.IO.Directory.GetFiles(photosLocation, "*.jpg"); if (files.Length > 0) { // TODO: return the url of the first file found; } }
As far as I know, there's no method to do what you want; at least not directly. I'd store the photosLocation as a path relative to the application; for example: "~/Images/". This way, you could use MapPath to get the physical location, and ResolveUrl to get the URL (with a bit of help from System.IO.Path): string photosLocationPath = HttpContext.Current.Server.MapPath(photosLocation); if (Directory.Exists(photosLocationPath)) { string[] files = Directory.GetFiles(photosLocationPath, "*.jpg"); if (files.Length > 0) { string filenameRelative = photosLocation + Path.GetFileName(files[0]); return Page.ResolveUrl(filenameRelative); } }
How do I convert a file path to a URL in ASP.NET Basically I have some code to check a specific directory to see if an image is there and if so I want to assign a URL to the image to an ImageControl. if (System.IO.Directory.Exists(photosLocation)) { string[] files = System.IO.Directory.GetFiles(photosLocation, "*.jpg"); if (files.Length > 0) { // TODO: return the url of the first file found; } }
TITLE: How do I convert a file path to a URL in ASP.NET QUESTION: Basically I have some code to check a specific directory to see if an image is there and if so I want to assign a URL to the image to an ImageControl. if (System.IO.Directory.Exists(photosLocation)) { string[] files = System.IO.Directory.GetFiles(photosLocation, "*.jpg"); if (files.Length > 0) { // TODO: return the url of the first file found; } } ANSWER: As far as I know, there's no method to do what you want; at least not directly. I'd store the photosLocation as a path relative to the application; for example: "~/Images/". This way, you could use MapPath to get the physical location, and ResolveUrl to get the URL (with a bit of help from System.IO.Path): string photosLocationPath = HttpContext.Current.Server.MapPath(photosLocation); if (Directory.Exists(photosLocationPath)) { string[] files = Directory.GetFiles(photosLocationPath, "*.jpg"); if (files.Length > 0) { string filenameRelative = photosLocation + Path.GetFileName(files[0]); return Page.ResolveUrl(filenameRelative); } }
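Under the hood, the MapPath/ResolveUrl pair is prefix substitution between the application's physical root and its virtual root. A language-neutral sketch of that mapping in Python (paths and names invented; this shows the idea, not the ASP.NET API itself):

```python
import os
import posixpath

def map_path(physical_root, virtual_path):
    """Rough Server.MapPath analogue: '~/Images/a.jpg' -> a physical path."""
    relative = virtual_path.lstrip("~").lstrip("/")
    return os.path.join(physical_root, *relative.split("/"))

def resolve_url(app_base_url, virtual_path):
    """Rough Page.ResolveUrl analogue: '~/Images/a.jpg' -> '/myapp/Images/a.jpg'."""
    relative = virtual_path.lstrip("~").lstrip("/")
    return posixpath.join(app_base_url, relative)
```

Keeping the app-relative "~/..." form as the single source of truth, as the answer suggests, is what makes both directions of the mapping trivial.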
[ "asp.net", "url", "image" ]
47
14
121,551
10
0
2008-08-19T11:24:55.907000
2008-08-19T12:02:18.700000
16,064
16,095
What IDE to use for developing in Ruby on Rails on Windows?
Possible Duplicate: What Ruby IDE do you prefer? I've generally been doing stuff on Microsoft.NET out of college almost 2 years ago. I just started looking at Ruby on Rails. So what editor should I use? I'm using Notepad++ right now but can I get debugging etc. somehow?
Try both NetBeans and RadRails for maybe a week each, then you can find which works best for you. The best advice is to learn your tool. If you are not checking out something new about your editor, something that could potentially save you time (regexp, etc) then you are doing yourself a huge disservice. I have been using Eclipse/Aptana/RadRails and unlike Gaius have been pretty happy with it. I recommend the Eclipse IDE for Java Developers from Eclipse Downloads: http://www.eclipse.org/downloads/ Then grab Aptana Studio, following these instructions. When Eclipse restarts Aptana will have a view, click on rad rails and you are good to go. Just make sure you have ruby installed already, or it becomes a pain to resolve.
What IDE to use for developing in Ruby on Rails on Windows? Possible Duplicate: What Ruby IDE do you prefer? I've generally been doing stuff on Microsoft.NET out of college almost 2 years ago. I just started looking at Ruby on Rails. So what editor should I use? I'm using Notepad++ right now but can I get debugging etc. somehow?
TITLE: What IDE to use for developing in Ruby on Rails on Windows? QUESTION: Possible Duplicate: What Ruby IDE do you prefer? I've generally been doing stuff on Microsoft.NET out of college almost 2 years ago. I just started looking at Ruby on Rails. So what editor should I use? I'm using Notepad++ right now but can I get debugging etc. somehow? ANSWER: Try both NetBeans and RadRails for maybe a week each, then you can find which works best for you. The best advice is to learn your tool. If you are not checking out something new about your editor, something that could potentially save you time (regexp, etc) then you are doing yourself a huge disservice. I have been using Eclipse/Aptana/RadRails and unlike Gaius have been pretty happy with it. I recommend the Eclipse IDE for Java Developers from Eclipse Downloads: http://www.eclipse.org/downloads/ Then grab Aptana Studio, following these instructions. When Eclipse restarts Aptana will have a view, click on rad rails and you are good to go. Just make sure you have ruby installed already, or it becomes a pain to resolve.
[ "ruby-on-rails", "ruby", "ide", "editor" ]
9
7
9,233
10
0
2008-08-19T12:30:25.977000
2008-08-19T12:47:25.613000
16,067
1,661,276
Prototyping with Python code before compiling
I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian etc.) as Python functions which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
Finally a question that I can really put a value answer to:). I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown: Disclaimer: This is my personal experience. I am not involved with any of these projects. swig: does not play well with c++. It should, but name mangling problems in the linking step were a major headache for me on linux & Mac OS X. If you have C code and want it interfaced to python, it is a good solution. I wrapped the GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it. Ctypes: I wrote a libdc1394 (IEEE Camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code on https://launchpad.net/pydc1394. It is a lot of work to convert headers to python code, but then everything works reliably. This is a good way if you want to interface an external library. Ctypes is also in the stdlib of python, so everyone can use your code right away. This is also a good way to play around with a new lib in python quickly. I can recommend it to interface to external libs. Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python. Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with cython that i used to do with SWIG or Ctypes. It is also the best way if you have python code that runs too slow. The process is absolutely fantastic: you convert your python modules into cython modules, build them and keep profiling and optimizing like it still was python (no change of tools needed).
You can then apply as much (or as little) C code mixed with your python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop. Timings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback on where it spends time (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython. Summary: For your problem, use Cython;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question. Edit: I forgot to mention: for numerical purposes (that is, connection to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision.
Prototyping with Python code before compiling I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian etc.) as Python functions which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
TITLE: Prototyping with Python code before compiling QUESTION: I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Guassian, Lorentzian etc.) as Python functions which can then to be passed an interpreted by the compiled code fitting library. Passing and returning arrays is also essential. ANSWER: Finally a question that I can really put a value answer to:). I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown: Disclaimer: This is my personal experience. I am not involved with any of these projects. swig: does not play well with c++. It should, but name mangling problems in the linking step was a major headache for me on linux & Mac OS X. If you have C code and want it interfaced to python, it is a good solution. I wrapped the GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it. Ctypes: I wrote a libdc1394 (IEEE Camera library) wrapper using ctypes and it was a very straigtforward experience. 
You can find the code on https://launchpad.net/pydc1394. It is a lot of work to convert headers to python code, but then everything works reliably. This is a good way if you want to interface an external library. Ctypes is also in the stdlib of python, so everyone can use your code right away. This is also a good way to play around with a new lib in python quickly. I can recommend it to interface to external libs. Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python. Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with cython that i used to do with SWIG or Ctypes. It is also the best way if you have python code that runs too slow. The process is absolutely fantastic: you convert your python modules into cython modules, build them and keep profiling and optimizing like it still was python (no change of tools needed). You can then apply as much (or as little) C code mixed with your python code. This is by far faster then having to rewrite whole parts of your application in C; you only rewrite the inner loop. Timings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback where it spends time on (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython. Summary: For your problem, use Cython;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question. 
Edit: I forgot to mention: for numerical purposes (that is, connecting to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision.
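The ctypes workflow the answer describes (load a shared library, then translate each header declaration into argument/result types) can be sketched briefly. This is a minimal illustration against the standard C math library rather than libdc1394; the library file name fallback is a platform assumption:

```python
import ctypes
import ctypes.util
import math

# Load the shared C math library; find_library resolves the
# platform-specific file name (e.g. "libm.so.6" on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature, as one would for each wrapped header entry;
# without this, ctypes assumes int arguments and the result is garbage.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

assert abs(libm.cos(0.0) - 1.0) < 1e-12
assert abs(libm.cos(math.pi) - (-1.0)) < 1e-12
```

The per-function `argtypes`/`restype` declarations are the "lot of work to convert headers" the answer refers to; once written, the calls behave like ordinary Python functions.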
[ "python", "swig", "ctypes", "prototyping", "python-sip" ]
22
36
4,868
7
0
2008-08-19T12:32:38.903000
2009-11-02T13:16:20.447000
16,096
167,308
How to apply multiple styles in WPF
In WPF, how would I apply multiple styles to a FrameworkElement? For instance, I have a control which already has a style. I also have a separate style which I would like to add to it without blowing away the first one. The styles have different TargetTypes, so I can't just extend one with the other.
I think the simple answer is that you can't do (at least in this version of WPF) what you are trying to do. That is, for any particular element only one Style can be applied. However, as others have stated above, maybe you can use BasedOn to help you out. Check out the following piece of loose xaml. In it you will see that I have a base style that is setting a property that exists on the base class of the element that I want to apply two styles to. And, in the second style which is based on the base style, I set another property. So, the idea here... is if you can somehow separate the properties that you want to set... according to the inheritance hierarchy of the element you want to set multiple styles on... you might have a workaround. Note: One thing in particular to note. If you change the TargetType in the second style (in first set of xaml above) to ButtonBase, the two Styles do not get applied. However, check out the following xaml below to get around that restriction. Basically, it means you need to give the Style a key and reference it with that key.
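The loose XAML the answer refers to did not survive extraction. A reconstruction of the BasedOn pattern it describes might look like the following; the keys and property choices are illustrative, not the original markup:

```xml
<!-- Base style: sets a property defined on the base class (ButtonBase) -->
<Style x:Key="baseStyle" TargetType="{x:Type ButtonBase}">
    <Setter Property="FontSize" Value="24" />
</Style>

<!-- Second style: based on the first (note the explicit key), sets a further property -->
<Style x:Key="buttonStyle" TargetType="{x:Type Button}"
       BasedOn="{StaticResource baseStyle}">
    <Setter Property="Background" Value="Green" />
</Style>

<Button Style="{StaticResource buttonStyle}" Content="Both styles apply" />
```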
[ ".net", "wpf", "styles" ]
170
174
115,594
12
0
2008-08-19T12:47:44.833000
2008-10-03T15:03:47.363000
16,097
16,101
What is the best way to replicate a version control repository?
Here is the scenario that I have. I have a CVS repository in one location (A) and I want to replicate it and keep it in sync with a repository in another location (B). This would be a single-directional sync from A to B. What is the best way to do this? If it is not really feasible in CVS then which source code control system would you recommend to accomplish this? Thanks
When using CVS, I don't know any tools to do that other than files syncing. You can achieve that using tools like rsync (Unix) or xcopy / robocopy (Windows). If you plan on migrating to Subversion, it provides a tool called svnsync that allows to sync a repository from another one.
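The svnsync workflow mentioned in the answer is a two-step affair. A sketch with placeholder URLs (the mirror repository must start empty and must allow revision-property changes via a pre-revprop-change hook):

```shell
# One-time: register the source repository (A) with the empty mirror (B)
svnsync initialize file:///srv/svn/mirror http://svn.example.com/repo

# Repeatable: copy any new revisions from A to B (e.g. run from cron)
svnsync synchronize file:///srv/svn/mirror
```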
[ "version-control", "cvs" ]
3
4
1,082
4
0
2008-08-19T12:48:21.477000
2008-08-19T12:52:05.653000
16,100
16,104
Convert a string to an enum in C#
What's the best way to convert a string to an enumeration value in C#? I have an HTML select tag containing the values of an enumeration. When the page is posted, I want to pick up the value (which will be in the form of a string) and convert it to the corresponding enumeration value. In an ideal world, I could do something like this: StatusEnum MyStatus = StatusEnum.Parse("Active"); but that isn't valid code.
In .NET Core and .NET Framework ≥ 4.0 there is a generic parse method: Enum.TryParse("Active", out StatusEnum myStatus); This also includes C# 7's new inline out variables, so this does the try-parse, conversion to the explicit enum type and initialises+populates the myStatus variable. If you have access to C# 7 and the latest .NET this is the best way. Original Answer In .NET it's rather ugly (until 4 or above): StatusEnum MyStatus = (StatusEnum) Enum.Parse(typeof(StatusEnum), "Active", true); I tend to simplify this with: public static T ParseEnum<T>(string value) { return (T) Enum.Parse(typeof(T), value, true); } Then I can do: StatusEnum MyStatus = EnumUtil.ParseEnum<StatusEnum>("Active"); One option suggested in the comments is to add an extension, which is simple enough: public static T ToEnum<T>(this string value) { return (T) Enum.Parse(typeof(T), value, true); } StatusEnum MyStatus = "Active".ToEnum<StatusEnum>(); Finally, you may want to have a default enum to use if the string cannot be parsed: public static T ToEnum<T>(this string value, T defaultValue) where T : struct { if (string.IsNullOrEmpty(value)) { return defaultValue; } T result; return Enum.TryParse<T>(value, true, out result) ? result : defaultValue; } Which makes this the call: StatusEnum MyStatus = "Active".ToEnum(StatusEnum.None); However, I would be careful adding an extension method like this to string as (without namespace control) it will appear on all instances of string whether they hold an enum or not (so 1234.ToString().ToEnum(StatusEnum.None) would be valid but nonsensical). It's often best to avoid cluttering Microsoft's core classes with extra methods that only apply in very specific contexts unless your entire development team has a very good understanding of what those extensions do.
[ "c#", "string", "enums" ]
1,293
2,120
1,227,900
29
0
2008-08-19T12:51:55.043000
2008-08-19T12:54:28.533000
16,110
16,130
C# Auto Clearing Winform Textbox
I have a user who wants to be able to select a textbox and have the current text selected so that he doesn't have to highlight it all in order to change the contents. The contents need to be handled when Enter is pressed. That part I think I have figured out but any suggestions would be welcome. The part I need help with is that once Enter has been pushed, any entry into the textbox should clear the contents again. Edit: The textbox controls a piece of RF hardware. What the user wants to be able to do is enter a setting and press Enter. The setting is sent to the hardware. Without doing anything else the user wants to be able to type in a new setting and press Enter again.
Hook into the KeyPress event on the TextBox, and when it encounters the Enter key, run your hardware setting code, and then highlight the full text of the textbox again (see below) - Windows will take care of clearing the text with the next keystroke for you. TextBox1.Select(0, TextBox1.Text.Length);
[ "c#", "winforms", "textbox" ]
5
4
3,817
2
0
2008-08-19T12:55:46.423000
2008-08-19T13:12:51.867000
16,113
16,696
Software for Webserver Log Analysis?
Can I get some recommendations (preferably with some reasons) for good log analysis software for Apache 2.2 access log files? I have heard of Webalizer and AWStats, but have never really used any of them, and would like to know: What they can do Why they are useful Interesting uses for them Any and all comments and thoughts are welcome.
AWStats and Webalizer are both good and free (I think both free as in speech as well as free as in beer). I generally prefer the look of AWStats - it has a nice modern look whereas Webalizer looks like something created in about 1992. They both give roughly the same information, which includes: most frequently accessed pages; which hosts (IPs and domain names) visitors come from; proportion of users using different browsers; proportion of downloads of different file types. All of this information is usually viewable on an hour-by-hour, day-by-day, month-by-month and year-by-year basis. Normally the raw data is available but also with bar charts and pie charts. Both AWStats and Webalizer will (I think) try and work out where your visitors come from by using services such as GeoIP, although I never bothered to set this up. Some also try to work out what order people have visited pages in and things like that - but that is very difficult to do so the results are guesses at best. I generally find them both useful - even if just to get an overview of what is going on with my server and who is accessing it. They are both relatively easy to install - although I seem to remember Webalizer being a little easier than AWStats, and they both have varied configuration options to let you decide exactly what you want to get out of them. For more information see their sites at awstats.sourceforge.net/ and http://www.webalizer.org/. Hope that helps. Robin
[ "apache", "logging", "logfile-analysis" ]
25
13
9,226
10
0
2008-08-19T12:59:40.550000
2008-08-19T18:33:24.933000
16,114
16,134
C# Include Derived Control in Toolbox
This is in reference to my other question Auto Clearing Textbox. If I choose to derive a new TextBox control from TextBox instead of implement a user control just containing my Textbox, how would I include that in the toolbox.
Right-click the toolbox, click "Choose Items" from the context menu, browse to your DLL, and select it. To extend on Greg's answer... Just to clarify, you cannot add a user control to the tool box if the code for it is in the same project that you want to use it in. For some reason MS has never added this ability, which would make sense since we don't want to always have to create a User Control Library DLL every time we want to use a user control. So, to get it in your tool box, you have to first create a separate "User Control Library" project (which can be in the same solution!) and then do what Greg said.
[ "c#", "visual-studio", "winforms", "textbox" ]
7
8
3,619
2
0
2008-08-19T12:59:41.690000
2008-08-19T13:16:12.793000
16,140
16,817
What's the best way to get started with OSGI?
What makes a module/service/bit of application functionality a particularly good candidate for an OSGi module? I'm interested in using OSGi in my applications. We're a Java shop and we use Spring pretty extensively, so I'm leaning toward using Spring Dynamic Modules for OSGi(tm) Service Platforms. I'm looking for a good way to incorporate a little bit of OSGi into an application as a trial. Has anyone here used this or a similar OSGi technology? Are there any pitfalls? @Nicolas - Thanks, I've seen that one. It's a good tutorial, but I'm looking more for ideas on how to do my first "real" OSGi bundle, as opposed to a Hello World example. @david - Thanks for the link! Ideally, with a greenfield app, I'd design the whole thing to be dynamic. What I'm looking for right now, though, is to introduce it in a small piece of an existing application. Assuming I can pick any piece of the app, what are some factors to consider that would make that piece better or worse as an OSGi guinea pig?
Well, since you cannot have one part OSGi and one part non-OSGi, you'll need to make your entire app OSGi. In its simplest form you make a single OSGi bundle out of your entire application. Clearly this is not a best practice but it can be useful to get a feel for deploying a bundle in an OSGi container (Equinox, Felix, Knopflerfish, etc.). To take it to the next level you'll want to start splitting your app into components; components should typically have a set of responsibilities that can be isolated from the rest of your application through a set of interfaces and class dependencies. Identifying these purely by hand can range from rather straightforward for a well-designed, highly cohesive but loosely coupled application to a nightmare for interlocked source code that you are not familiar with. 
Read up on OSGi best practices; there is a large set of things you can do, but a somewhat smaller set of things you should do, and there are some things you should never do (DynamicImport: * for example). Some links: OSGi best practices and using Apache Felix. Peter Kriens and BJ Hargrave in a Sun presentation on OSGi best practices. One key OSGi concept is Services: learn why and how they supplant the Listener pattern with the Whiteboard pattern. The Spring DM Google Group is very responsive and friendly in my experience. The Spring DM Google Group is no longer active and has moved to Eclipse.org as the Gemini Blueprint project, which has a forum here.
[ "java", "spring", "osgi" ]
45
41
12,116
8
0
2008-08-19T13:20:49.010000
2008-08-19T19:51:39.143000
16,142
16,163
What do "branch", "tag" and "trunk" mean in Subversion repositories?
I've seen these words a lot around Subversion (and I guess general repository) discussions. I have been using SVN for my projects for the last few years, but I've never grasped the complete concept of these directories. What do they mean?
Hmm, not sure I agree with Nick re: tag being similar to a branch. A tag is just a marker. Trunk would be the main body of development, originating from the start of the project until the present. Branch will be a copy of code derived from a certain point in the trunk that is used for applying major changes to the code while preserving the integrity of the code in the trunk. If the major changes work according to plan, they are usually merged back into the trunk. Tag will be a point in time on the trunk or a branch that you wish to preserve. The two main reasons for preservation would be that either this is a major release of the software, whether alpha, beta, RC or RTM, or this is the most stable point of the software before major revisions on the trunk were applied. In open source projects, major branches that are not accepted into the trunk by the project stakeholders can become the bases for forks -- e.g., totally separate projects that share a common origin with other source code. The branch and tag subtrees are distinguished from the trunk in the following ways: Subversion allows sysadmins to create hook scripts which are triggered for execution when certain events occur; for instance, committing a change to the repository. It is very common for a typical Subversion repository implementation to treat any path containing "/tags/" as write-protected after creation; the net result is that tags, once created, are immutable (at least to "ordinary" users). This is done via the hook scripts, which enforce the immutability by preventing further changes if a tag is a parent node of the changed object. Subversion also has added features, since version 1.5, relating to "branch merge tracking" so that changes committed to a branch can be merged back into the trunk with support for incremental, "smart" merging.
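In a standard-layout repository, branches and tags are both created with the same cheap server-side copy; only convention (and the hook scripts described above) makes tags immutable. A sketch with illustrative URLs:

```shell
# Branch: a server-side copy of trunk, made to hold major changes
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/big-refactor \
         -m "Create branch for major changes"

# Tag: the identical operation; hooks/convention keep it read-only afterwards
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/tags/1.0 \
         -m "Tag the 1.0 release"
```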
What do "branch", "tag" and "trunk" mean in Subversion repositories? I've seen these words a lot around Subversion (and I guess general repository) discussions. I have been using SVN for my projects for the last few years, but I've never grasped the complete concept of these directories. What do they mean?
TITLE: What do "branch", "tag" and "trunk" mean in Subversion repositories? QUESTION: I've seen these words a lot around Subversion (and I guess general repository) discussions. I have been using SVN for my projects for the last few years, but I've never grasped the complete concept of these directories. What do they mean? ANSWER: Hmm, not sure I agree with Nick re tag being similar to a branch. A tag is just a marker Trunk would be the main body of development, originating from the start of the project until the present. Branch will be a copy of code derived from a certain point in the trunk that is used for applying major changes to the code while preserving the integrity of the code in the trunk. If the major changes work according to plan, they are usually merged back into the trunk. Tag will be a point in time on the trunk or a branch that you wish to preserve. The two main reasons for preservation would be that either this is a major release of the software, whether alpha, beta, RC or RTM, or this is the most stable point of the software before major revisions on the trunk were applied. In open source projects, major branches that are not accepted into the trunk by the project stakeholders can become the bases for forks -- e.g., totally separate projects that share a common origin with other source code. The branch and tag subtrees are distinguished from the trunk in the following ways: Subversion allows sysadmins to create hook scripts which are triggered for execution when certain events occur; for instance, committing a change to the repository. It is very common for a typical Subversion repository implementation to treat any path containing "/tag/" to be write-protected after creation; the net result is that tags, once created, are immutable (at least to "ordinary" users). This is done via the hook scripts, which enforce the immutability by preventing further changes if tag is a parent node of the changed object. 
Subversion also has added features, since version 1.5, relating to "branch merge tracking" so that changes committed to a branch can be merged back into the trunk with support for incremental, "smart" merging.
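The tag-immutability convention described above is usually enforced in a pre-commit hook. Here is a minimal sketch in Python of the check such a hook might run; a real hook would feed it the output of `svnlook changed`, and the `tags/` path convention and action letters are assumptions based on the usual trunk/branches/tags layout.

```python
# Sketch of the tag-protection check a Subversion pre-commit hook might run.
# A real hook would obtain the changed paths by running
# `svnlook changed -t TXN_NAME REPO_PATH`; the logic is kept as a pure
# function so it can be read (and tested) without a live repository.

def violates_tag_immutability(changed_lines):
    """changed_lines: svnlook-style lines such as 'U   tags/1.0/main.c'.

    Returns True when the commit modifies or deletes something under
    tags/. Brand-new additions ('A') are allowed, so the initial
    `svn copy` that creates a tag still goes through.
    """
    for line in changed_lines:
        action, path = line.split(None, 1)
        if path.startswith("tags/") and action != "A":
            return True
    return False

# Creating a new tag is fine; editing inside an existing one is not.
print(violates_tag_immutability(["A   tags/1.0/", "A   tags/1.0/main.c"]))  # False
print(violates_tag_immutability(["U   tags/1.0/main.c"]))                   # True
```

A production hook would exit non-zero (rejecting the commit) when the function returns True.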
[ "svn", "branch", "terminology", "trunk" ]
1,211
927
462,402
16
0
2008-08-19T13:22:03.750000
2008-08-19T13:35:21.603000
16,155
16,245
Making one interface overwrite a method it inherits from another interface in PHP
Is there a way in PHP to overwrite a method declared by one interface in an interface extending that interface? The Example: I'm probably doing something wrong, but here is what I have: interface iVendor{ public function __construct($vendors_no = null); public function getName(); public function getVendors_no(); public function getZip(); public function getCountryCode(); public function setName($name); public function setVendors_no($vendors_no); public function setZip($zip); public function setCountryCode($countryCode); } interface iShipper extends iVendor{ public function __construct($vendors_no = null, $shipment = null); public function getTransitTime($shipment = null); public function getTransitCost($shipment = null); public function getCurrentShipment(); public function setCurrentShipment($shipment); public function getStatus($shipment = null); } Normally in PHP, when you extend something, you can overwrite any method contained therein (right?). However, when one interface extends another, it won't let you. Unless I'm thinking about this wrong... When I implement the iShipper interface, I don't have to make the Shipper object extend the Vendor object (that implements the iVendor interface). I just say: class FedEx implements iShipper{} and make FedEx implement all of the methods from iVendor and iShipper. However, I need the __construct functions in iVendor and iShipper to be unique. I know I could take out the $shipment = null, but then it wouldn't be as convenient to create Shippers (by just passing in the vendors_no and the shipment while instantiating). Anyone know how to make this work? My fallback is to have to set the shipment by calling $shipper->setShipment($shipment); on the Shipper after I instantiate it, but I'm hoping for a way to get around having to do that... A little more explanation for the curious: The FedEx Object has methods that go to the FedEx site (using cURL) and gets an estimate for the Shipment in question. 
I have a UPS Object, a BAXGlobal Object, a Conway Object, etc. Each one has COMPLETELY different methods for actually getting the shipping estimate, but all the system needs to know is that they are a "shipper" and that the methods listed in the interface are callable on them (so it can treat them all exactly the same, and loop through them in a "shippers" array calling getTransitX() to find the best shipper for a shipment). Each "Shipper" is also a "Vendor" though, and is treated as such in other parts of the system (getting and putting in the DB, etc. Our data design is a pile of crap, so FedEx is stored right alongside companies like Dunder Mifflin in the "Vendors" table, which means it gets to have all the properties of every other Vendor, but needs the extra properties and methods supplied by iShipper).
@cmcculloh Yeah, in Java you don't define constructors in Interfaces. This allows you to both extend interfaces and also have a class that implements multiple interfaces (both allowed, and very useful in many cases) without worrying about having to satisfy a particular constructor. EDIT: Here's my new model: A. Each interface no longer has a constructor method. B. All Shippers (UPS, FedEx, etc) now implement iShipper (which extends iVendor) and extend the abstract class Shipper (which has all common non-abstract methods for shippers defined in it, getName(), getZip() etc). C. Each Shipper has its own unique __construct method which overwrites the abstract __construct($vendors_no = null, $shipment = null) method contained in Shipper (I don't remember why I'm allowing those to be optional now though. I'd have to go back through my documentation...). So: interface iVendor{ public function getName(); public function getVendors_no(); public function getZip(); public function getCountryCode(); public function setName($name); public function setVendors_no($vendors_no); public function setZip($zip); public function setCountryCode($countryCode); } interface iShipper extends iVendor{ public function getTransitTime($shipment = null); public function getTransitCost($shipment = null); public function getCurrentShipment(); public function setCurrentShipment($shipment); public function getStatus($shipment = null); } abstract class Shipper implements iShipper{ abstract public function __construct($vendors_no = null, $shipment = null); //a bunch of non-abstract common methods... } class FedEx extends Shipper implements iShipper{ public function __construct($vendors_no = null, $shipment = null){ //a bunch of setup code... } //all my FedEx specific methods... } Thanks for the help! ps. since I have now added this to "your" answer, if there is something about it you don't like/think should be different, feel free to change it...
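For comparison, the "no constructor in the interface" pattern settled on above translates to other languages too. Here is a rough Python analog using the abc module; the class and method names mirror the PHP sketch, and the bodies are placeholders, not the real shipper logic.

```python
from abc import ABC, abstractmethod

class Shipper(ABC):
    """Analog of the abstract Shipper class: the shipper behaviour is
    part of the contract, but the constructor is NOT, so each concrete
    shipper is free to pick its own __init__ signature."""

    @abstractmethod
    def get_transit_cost(self, shipment=None):
        ...

    def get_name(self):
        # shared non-abstract helper, like getName() in the PHP version
        return self.name

class FedEx(Shipper):
    def __init__(self, vendors_no=None, shipment=None):
        # FedEx-specific setup (the real class talks to the FedEx site)
        self.name = "FedEx"
        self.vendors_no = vendors_no
        self.current_shipment = shipment

    def get_transit_cost(self, shipment=None):
        return 42.0  # placeholder for the real rate lookup

fedex = FedEx(vendors_no=123, shipment="parcel")
print(fedex.get_name())  # FedEx
```

Because `Shipper` still has an abstract method, instantiating it directly fails, while every subclass gets to define whatever constructor is convenient.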
[ "php", "oop", "interface", "extends" ]
2
6
2,464
2
0
2008-08-19T13:32:12.247000
2008-08-19T14:17:20.923000
16,161
290,184
cvs checkin: Identifying the names of the files checked in
The cvsnt manual provides a detailed list of parameters that can be passed to the postcommand module, but none of them specify the file name. Is anybody aware of an option not listed here that would provide the name of the file being checked in? ColinYounger - The %c command is just the command, e.g. "Commit"
The answer (thanks to an answer to a different question by Sally ) is to not use the postcommand file, but use the loginfo file and provide the arguments ‘%{s}’
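With `%{s}` in a loginfo line, the server expands the committed file names into the command it runs, so a handler script receives them as arguments. A minimal sketch of such a handler in Python; the script path in the comment and the exact argument layout are assumptions (the details of `%{s}` expansion vary between CVS and CVSNT versions, so verify against yours).

```python
#!/usr/bin/env python3
# Sketch of a loginfo handler. With a loginfo entry roughly like
#     ^mymodule /usr/local/bin/notify.py %{s}
# the server appends the committed file names to the command line,
# so they arrive here in sys.argv.
import sys

def committed_files(argv):
    """Return the file names supplied by the loginfo expansion."""
    return list(argv[1:])

if __name__ == "__main__":
    for name in committed_files(sys.argv):
        print("checked in:", name)
```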
[ "cvs", "cvsnt" ]
1
1
259
2
0
2008-08-19T13:34:24.710000
2008-11-14T14:10:07.440000
16,164
16,746
RSS/Atom for professional use
I wondered if anyone can give an example of a professional use of RSS/Atom feeds in a company product. Does anyone use feeds for other things than updating news? For example, did you create a product that gives results as RSS/Atom feeds? Like price listings or current inventory, or maybe dates of training lessons? Or am I thinking about use cases for RSS/Atom feeds in the wrong way? edit @ abyx has a really good example of a somewhat unexpected use of RSS as a way to get debug information from program transactions. I like the idea of this process. This is the type of use I was thinking of - besides publishing search results or last changes (like mediawiki)
Some of my team's new systems generate RSS feeds that the developers syndicate. These feeds push out events that interest the developers at certain times and the information is controlled using different loggers. Thus when debugging you can get the debugging feed, when you want to see completed transactions you go to the transactions feeds etc. This allows all the developers to get the information they want in a comfortable way and without any need to mess a lot with configuration. If you don't want to get it there's no need to remove yourself from a mailing list or edit a configuration file - simply remove the feed and be done with it. Very cool, and the idea was stolen from Pragmatic Project Automation.
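The "one feed per logger" idea can be sketched in a few lines: route each logger's events into its own RSS document. A minimal Python sketch using only the standard library; the element layout follows RSS 2.0, and the titles and entries are purely illustrative.

```python
import xml.etree.ElementTree as ET

def build_feed(title, entries):
    """Build a minimal RSS 2.0 document from (title, description) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for item_title, description in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

# One feed per logger: each developer subscribes only to the feed
# they care about, and unsubscribing is just removing the feed.
debug_feed = build_feed("Debug events", [("tx 42 started", "details...")])
tx_feed = build_feed("Completed transactions", [("tx 41 committed", "ok")])
print(debug_feed)
```

A real RSS 2.0 channel would also carry `link` and `description` elements, omitted here for brevity.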
[ "rss", "use-case", "feed", "atom-feed" ]
6
4
1,179
9
0
2008-08-19T13:36:19.040000
2008-08-19T19:02:29.133000
16,167
16,182
Good refactoring support for C++
The Visual Studio refactoring support for C# is quite good nowadays (though not half as good as some Java IDEs I've seen already), but I'm really missing C++ support. I have seen Refactor! and am currently trying it out, but maybe one of you guys knows a better tool or plugin? I've been working with Visual Assist X now for a week or two and got totally addicted. Thanks for the tip, I'll try to convince my boss to get me a license at work too. I've been bughunting for a few days since Visual Assist X kept messing up my Visual Studio after a few specific refactorings. It took me (and customer support) a week to hunt down, but let's say for now that Visual Assist X is not a good combination with ClipX.
Visual Assist X by Whole Tomato software is not free, but it's absolutely worth the money if you use Visual Studio for C++. http://www.wholetomato.com/
[ "c++", "visual-studio", "refactoring" ]
27
37
18,134
6
0
2008-08-19T13:39:29.673000
2008-08-19T13:46:52.420000
16,178
16,229
Best Way To Determine If .NET 3.5 Is Installed
I need to programmatically determine whether .NET 3.5 is installed. I thought it would be easy: <% Response.Write(Environment.Version.ToString()); %> Which returns "2.0.50727.1434" so no such luck... In my research I have found that there are some rather obscure registry keys I can look at, but I'm not sure if that is the route to go. Does anyone have any suggestions?
You could try: static bool HasNet35() { try { AppDomain.CurrentDomain.Load( "System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"); return true; } catch { return false; } } @ Nick: Good question, I'll try it in a bit. Kev
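The registry route the question mentions checks for the `v3.5` subkey under `HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP` with an `Install` value of 1. Here is that decision logic sketched in Python, with a plain dict standing in for the registry so it runs anywhere; on a real Windows box you would read the keys with the winreg module, and the exact key names should be verified on the target machine.

```python
NDP_ROOT = r"SOFTWARE\Microsoft\NET Framework Setup\NDP"

def has_net35(registry):
    """registry: mapping of subkey path -> {value name: value}.

    .NET 3.5 is reported installed when the v3.5 subkey under NDP
    exists and its 'Install' DWORD equals 1. The dict stands in for
    HKEY_LOCAL_MACHINE so the logic is testable off-Windows; use
    winreg.OpenKey / winreg.QueryValueEx on a real machine.
    """
    values = registry.get(NDP_ROOT + r"\v3.5", {})
    return values.get("Install") == 1

# A machine with only 2.0 installed:
print(has_net35({NDP_ROOT + r"\v2.0.50727": {"Install": 1}}))  # False
# A machine with 3.5:
print(has_net35({NDP_ROOT + r"\v3.5": {"Install": 1, "SP": 1}}))  # True
```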
[ ".net", ".net-3.5", "installation", "registry" ]
15
3
14,500
9
0
2008-08-19T13:45:03.017000
2008-08-19T14:09:42.313000