question_id int64 4 6.31M | answer_id int64 7 6.31M | title stringlengths 9 150 | question_body stringlengths 0 28.8k | answer_body stringlengths 60 27.2k | question_text stringlengths 40 28.9k | combined_text stringlengths 124 39.6k | tags listlengths 1 6 | question_score int64 0 26.3k | answer_score int64 0 28.8k | view_count int64 15 14M | answer_count int64 0 182 | favorite_count int64 0 32 | question_creation_date stringdate 2008-07-31 21:42:52 2011-06-10 18:12:18 | answer_creation_date stringdate 2008-07-31 22:17:57 2011-06-10 18:14:17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,332 | 12,408 | In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function? | I am looking for a more technical explanation than "the OS calls the function". Is there a website or book? | The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point. As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc. If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime. | In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function? I am looking for a more technical explanation than "the OS calls the function". Is there a website or book? | TITLE:
In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function?
QUESTION:
I am looking for a more technical explanation than "the OS calls the function". Is there a website or book?
ANSWER:
The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point. As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc. If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime. | [
"c++",
"c",
"program-entry-point"
] | 19 | 27 | 4,092 | 5 | 0 | 2008-08-15T15:08:48.560000 | 2008-08-15T15:50:12.457000 |
12,348 | 12,349 | PHP / cURL on Windows install: "The specified module could not be found." | I'm running PHP 5.2.3 on Windows 2000 Server with IIS 5. I'm trying to get cURL working, so in my php.ini file, I have this line: extension_dir ="F:\PHP\ext" And later, I have: extension=php_curl.dll The file F:\PHP\ext\php_curl.dll exists, but when I try to run any PHP script, I get this in the error log: PHP Warning: PHP Startup: Unable to load dynamic library 'F:\PHP\ext \php_curl.dll' - The specified module could not be found. in Unknown on line 0 | Problem solved! Although the error message said The specified module could not be found, this is a little misleading -- it's not that it couldn't find php_curl.dll, but rather it couldn't find a module that php_curl.dll required. The 2 DLLs it requires are libeay32.dll and SSLeay32.dll. So, you have to put those 2 DLLs somewhere in your PATH (e.g., C:\Windows\system32 ). That's all there is to it. However, even that did not work for me initially. So I downloaded the Windows zip of the latest version of PHP, which includes all the necessary DLLs. I didn't reinstall PHP, I just copied all of the DLLs in the "ext" folder to my PHP extensions folder (as specified in the extension_dir variable in php.ini ), and I copied the versions of libeay32.dll and SSLeay32.dll from the PHP download into my System32 directory. I also did an iisreset, but I don't know if that was necessary. | PHP / cURL on Windows install: "The specified module could not be found." I'm running PHP 5.2.3 on Windows 2000 Server with IIS 5. I'm trying to get cURL working, so in my php.ini file, I have this line: extension_dir ="F:\PHP\ext" And later, I have: extension=php_curl.dll The file F:\PHP\ext\php_curl.dll exists, but when I try to run any PHP script, I get this in the error log: PHP Warning: PHP Startup: Unable to load dynamic library 'F:\PHP\ext \php_curl.dll' - The specified module could not be found. in Unknown on line 0 | TITLE:
PHP / cURL on Windows install: "The specified module could not be found."
QUESTION:
I'm running PHP 5.2.3 on Windows 2000 Server with IIS 5. I'm trying to get cURL working, so in my php.ini file, I have this line: extension_dir ="F:\PHP\ext" And later, I have: extension=php_curl.dll The file F:\PHP\ext\php_curl.dll exists, but when I try to run any PHP script, I get this in the error log: PHP Warning: PHP Startup: Unable to load dynamic library 'F:\PHP\ext \php_curl.dll' - The specified module could not be found. in Unknown on line 0
ANSWER:
Problem solved! Although the error message said The specified module could not be found, this is a little misleading -- it's not that it couldn't find php_curl.dll, but rather it couldn't find a module that php_curl.dll required. The 2 DLLs it requires are libeay32.dll and SSLeay32.dll. So, you have to put those 2 DLLs somewhere in your PATH (e.g., C:\Windows\system32 ). That's all there is to it. However, even that did not work for me initially. So I downloaded the Windows zip of the latest version of PHP, which includes all the necessary DLLs. I didn't reinstall PHP, I just copied all of the DLLs in the "ext" folder to my PHP extensions folder (as specified in the extension_dir variable in php.ini ), and I copied the versions of libeay32.dll and SSLeay32.dll from the PHP download into my System32 directory. I also did an iisreset, but I don't know if that was necessary. | [
"php",
"windows",
"curl"
] | 30 | 35 | 40,430 | 6 | 0 | 2008-08-15T15:17:30.073000 | 2008-08-15T15:18:53.063000 |
12,368 | 12,394 | How to dispose a class in .net? | The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class MyClass to call MyClass.Dispose() and free up all the space used by variables and objects in MyClass? | IDisposable has nothing to do with freeing memory. IDisposable is a pattern for freeing unmanaged resources -- and memory is quite definitely a managed resource. The links pointing to GC.Collect() are the correct answer, though use of this function is generally discouraged by the Microsoft .NET documentation. Edit: Having earned a substantial amount of karma for this answer, I feel a certain responsibility to elaborate on it, lest a newcomer to .NET resource management get the wrong impression. Inside a .NET process, there are two kinds of resource -- managed and unmanaged. "Managed" means that the runtime is in control of the resource, while "unmanaged" means that it's the programmer's responsibility. And there really is only one kind of managed resource that we care about in .NET today -- memory. The programmer tells the runtime to allocate memory and after that it's up to the runtime to figure out when the memory can be freed. The mechanism that .NET uses for this purpose is called garbage collection and you can find plenty of information about GC on the internet simply by using Google. For the other kinds of resources, .NET doesn't know anything about cleaning them up so it has to rely on the programmer to do the right thing. To this end, the platform gives the programmer three tools:

1. The IDisposable interface and the "using" statement in VB and C#
2. Finalizers
3. The IDisposable pattern as implemented by many BCL classes

The first of these allows the programmer to efficiently acquire a resource, use it and then release it all within the same method.

using (DisposableObject tmp = DisposableObject.AcquireResource())
{
    // Do something with tmp
}
// At this point, tmp.Dispose() will automatically have been called
// BUT, tmp may still be a perfectly valid object that still takes up memory

If "AcquireResource" is a factory method that (for instance) opens a file and "Dispose" automatically closes the file, then this code cannot leak a file resource. But the memory for the "tmp" object itself may well still be allocated. That's because the IDisposable interface has absolutely no connection to the garbage collector. If you did want to ensure that the memory was freed, your only option would be to call GC.Collect() to force a garbage collection. However, it cannot be stressed enough that this is probably not a good idea. It's generally much better to let the garbage collector do what it was designed to do, which is to manage memory. What happens if the resource is being used for a longer period of time, such that its lifespan crosses several methods? Clearly, the "using" statement is no longer applicable, so the programmer would have to manually call "Dispose" when he or she is done with the resource. And what happens if the programmer forgets? If there's no fallback, then the process or computer may eventually run out of whichever resource isn't being properly freed. That's where finalizers come in. A finalizer is a method on your class that has a special relationship with the garbage collector. The GC promises that -- before freeing the memory for any object of that type -- it will first give the finalizer a chance to do some kind of cleanup. 
So in the case of a file, we theoretically don't need to close the file manually at all. We can just wait until the garbage collector gets to it and then let the finalizer do the work. Unfortunately, this doesn't work well in practice because the garbage collector runs non-deterministically. The file may stay open considerably longer than the programmer expects. And if enough files are kept open, the system may fail when trying to open an additional file. For most resources, we want both of these things. We want a convention to be able to say "we're done with this resource now" and we want to make sure that there's at least some chance for the cleanup to happen automatically if we forget to do it manually. That's where the "IDisposable" pattern comes into play. This is a convention that allows IDisposable and a finalizer to play nicely together. You can see how the pattern works by looking at the official documentation for IDisposable. Bottom line: If what you really want to do is to just make sure that memory is freed, then IDisposable and finalizers will not help you. But the IDisposable interface is part of an extremely important pattern that all .NET programmers should understand. | How to dispose a class in .net? The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class MyClass to call MyClass.Dispose() and free up all the space used by variables and objects in MyClass? | TITLE:
How to dispose a class in .net?
QUESTION:
The .NET garbage collector will eventually free up memory, but what if you want that memory back immediately? What code do you need to use in a class MyClass to call MyClass.Dispose() and free up all the space used by variables and objects in MyClass?
ANSWER:
IDisposable has nothing to do with freeing memory. IDisposable is a pattern for freeing unmanaged resources -- and memory is quite definitely a managed resource. The links pointing to GC.Collect() are the correct answer, though use of this function is generally discouraged by the Microsoft .NET documentation. Edit: Having earned a substantial amount of karma for this answer, I feel a certain responsibility to elaborate on it, lest a newcomer to .NET resource management get the wrong impression. Inside a .NET process, there are two kinds of resource -- managed and unmanaged. "Managed" means that the runtime is in control of the resource, while "unmanaged" means that it's the programmer's responsibility. And there really is only one kind of managed resource that we care about in .NET today -- memory. The programmer tells the runtime to allocate memory and after that it's up to the runtime to figure out when the memory can be freed. The mechanism that .NET uses for this purpose is called garbage collection and you can find plenty of information about GC on the internet simply by using Google. For the other kinds of resources, .NET doesn't know anything about cleaning them up so it has to rely on the programmer to do the right thing. To this end, the platform gives the programmer three tools:

1. The IDisposable interface and the "using" statement in VB and C#
2. Finalizers
3. The IDisposable pattern as implemented by many BCL classes

The first of these allows the programmer to efficiently acquire a resource, use it and then release it all within the same method.

using (DisposableObject tmp = DisposableObject.AcquireResource())
{
    // Do something with tmp
}
// At this point, tmp.Dispose() will automatically have been called
// BUT, tmp may still be a perfectly valid object that still takes up memory

If "AcquireResource" is a factory method that (for instance) opens a file and "Dispose" automatically closes the file, then this code cannot leak a file resource. But the memory for the "tmp" object itself may well still be allocated. That's because the IDisposable interface has absolutely no connection to the garbage collector. If you did want to ensure that the memory was freed, your only option would be to call GC.Collect() to force a garbage collection. However, it cannot be stressed enough that this is probably not a good idea. It's generally much better to let the garbage collector do what it was designed to do, which is to manage memory. What happens if the resource is being used for a longer period of time, such that its lifespan crosses several methods? Clearly, the "using" statement is no longer applicable, so the programmer would have to manually call "Dispose" when he or she is done with the resource. And what happens if the programmer forgets? If there's no fallback, then the process or computer may eventually run out of whichever resource isn't being properly freed. That's where finalizers come in. A finalizer is a method on your class that has a special relationship with the garbage collector. The GC promises that -- before freeing the memory for any object of that type -- it will first give the finalizer a chance to do some kind of cleanup. So in the case of a file, we theoretically don't need to close the file manually at all. We can just wait until the garbage collector gets to it and then let the finalizer do the work. Unfortunately, this doesn't work well in practice because the garbage collector runs non-deterministically. The file may stay open considerably longer than the programmer expects. And if enough files are kept open, the system may fail when trying to open an additional file. For most resources, we want both of these things. We want a convention to be able to say "we're done with this resource now" and we want to make sure that there's at least some chance for the cleanup to happen automatically if we forget to do it manually. That's where the "IDisposable" pattern comes into play. This is a convention that allows IDisposable and a finalizer to play nicely together. You can see how the pattern works by looking at the official documentation for IDisposable. Bottom line: If what you really want to do is to just make sure that memory is freed, then IDisposable and finalizers will not help you. But the IDisposable interface is part of an extremely important pattern that all .NET programmers should understand. | [
".net",
"memory",
"dispose"
] | 45 | 103 | 90,313 | 20 | 0 | 2008-08-15T15:26:41.987000 | 2008-08-15T15:40:33.567000 |
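For reference, here is a minimal VB.NET sketch of the dispose pattern the answer above keeps referring to. This is not the poster's code - the class name ResourceHolder is hypothetical, and the canonical form is in the official IDisposable documentation the answer mentions:

Imports System

Public Class ResourceHolder
    Implements IDisposable

    Private mDisposed As Boolean = False

    ' Deterministic cleanup: called by user code or by a Using block.
    Public Sub Dispose() Implements IDisposable.Dispose
        Dispose(True)
        ' Cleanup has already run, so the finalizer no longer needs to.
        GC.SuppressFinalize(Me)
    End Sub

    Protected Overridable Sub Dispose(ByVal disposing As Boolean)
        If Not mDisposed Then
            If disposing Then
                ' Free managed resources here (Dispose owned IDisposable objects).
            End If
            ' Free unmanaged resources here (handles and the like).
            mDisposed = True
        End If
    End Sub

    ' Fallback cleanup if the caller forgets to call Dispose.
    Protected Overrides Sub Finalize()
        Try
            Dispose(False)
        Finally
            MyBase.Finalize()
        End Try
    End Sub
End Class

A caller would typically write Using holder As New ResourceHolder() ... End Using. Note, as the answer stresses, that none of this frees the object's own memory any sooner - that remains the garbage collector's job.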
12,374 | 25,204 | Has anyone had any success in unit testing SQL stored procedures? | We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off. But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users. What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler. We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process. So, the main part of my question is: has anyone ever successfully written unit tests for their stored procedures? The second part of my question is whether unit testing would be/is easier with linq? I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects, and test your linq code in a “linq to objects” situation? (I am totally new to linq so don’t know if this would even work at all) | I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do and then rollback so none of the test data is left in the db. This felt better than the usual "run a script to setup my test db, then after the tests run do a cleanup of the junk/test data". This also felt closer to unit testing because these tests could be run alone w/out having a great deal of "everything in the db needs to be 'just so' before I run these tests". Here is a snippet of the abstract base class used for data access:

Public MustInherit Class Repository(Of T As Class)
    Implements IRepository(Of T)
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
    Private mConnection As IDbConnection
    Private mTransaction As IDbTransaction

    Public Sub New()
        mConnection = Nothing
        mTransaction = Nothing
    End Sub

    Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        mConnection = connection
        mTransaction = transaction
    End Sub

    Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T)

    Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader
        Dim entityList As List(Of T)
        If Not mConnection Is Nothing Then
            Using cmd As SqlCommand = mConnection.CreateCommand()
                cmd.Transaction = mTransaction
                cmd.CommandType = Parameter.Type
                cmd.CommandText = Parameter.Text
                If Not Parameter.Items Is Nothing Then
                    For Each param As SqlParameter In Parameter.Items
                        cmd.Parameters.Add(param)
                    Next
                End If
                entityList = BuildEntity(cmd)
                If Not entityList Is Nothing Then
                    Return entityList
                End If
            End Using
        Else
            Using conn As SqlConnection = New SqlConnection(mConnectionString)
                Using cmd As SqlCommand = conn.CreateCommand()
                    cmd.CommandType = Parameter.Type
                    cmd.CommandText = Parameter.Text
                    If Not Parameter.Items Is Nothing Then
                        For Each param As SqlParameter In Parameter.Items
                            cmd.Parameters.Add(param)
                        Next
                    End If
                    conn.Open()
                    entityList = BuildEntity(cmd)
                    If Not entityList Is Nothing Then
                        Return entityList
                    End If
                End Using
            End Using
        End If
        Return Nothing
    End Function
End Class

Next you will see a sample data access class using the above base to get a list of products:

Public Class ProductRepository
    Inherits Repository(Of Product)
    Implements IProductRepository
    Private mCache As IHttpCache

    'This constructor is what you will use in your app
    Public Sub New(ByVal cache As IHttpCache)
        MyBase.New()
        mCache = cache
    End Sub

    'This constructor is only used for testing so we can inject a connection/transaction and have them rolled back after the test
    Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        MyBase.New(connection, transaction)
        mCache = cache
    End Sub

    Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts
        Dim Parameter As New Parameter()
        Parameter.Type = CommandType.StoredProcedure
        Parameter.Text = "spGetProducts"
        Dim productList As List(Of Product)
        productList = MyBase.ExecuteReader(Parameter)
        Return productList
    End Function

    'This function is used in each class that inherits from the base data access class
    'so we can keep all the boring left-right mapping code in 1 place per object
    Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product)
        Dim productList As New List(Of Product)
        Using reader As SqlDataReader = cmd.ExecuteReader()
            Dim product As Product
            While reader.Read()
                product = New Product()
                product.ID = reader("ProductID")
                product.SupplierID = reader("SupplierID")
                product.CategoryID = reader("CategoryID")
                product.ProductName = reader("ProductName")
                product.QuantityPerUnit = reader("QuantityPerUnit")
                product.UnitPrice = reader("UnitPrice")
                product.UnitsInStock = reader("UnitsInStock")
                product.UnitsOnOrder = reader("UnitsOnOrder")
                product.ReorderLevel = reader("ReorderLevel")
                productList.Add(product)
            End While
            If productList.Count > 0 Then
                Return productList
            End If
        End Using
        Return Nothing
    End Function
End Class

And now in your unit test you can also inherit from a very simple base class that does your setup / rollback work - or keep this on a per unit test basis. Below is the simple testing base class I used:

Imports System.Configuration
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualStudio.TestTools.UnitTesting
Public MustInherit Class TransactionFixture
    Protected mConnection As IDbConnection
    Protected mTransaction As IDbTransaction
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString

    <TestInitialize()> _
    Public Sub CreateConnectionAndBeginTran()
        mConnection = New SqlConnection(mConnectionString)
        mConnection.Open()
        mTransaction = mConnection.BeginTransaction()
    End Sub

    <TestCleanup()> _
    Public Sub RollbackTranAndCloseConnection()
        mTransaction.Rollback()
        mTransaction.Dispose()
        mConnection.Close()
        mConnection.Dispose()
    End Sub
End Class

And finally - the below is a simple test using that test base class that shows how to test the entire CRUD cycle to make sure all the sprocs do their job and that your ado.net code does the left-right mapping correctly. I know this doesn't test the "spGetProducts" sproc used in the above data access sample, but you should see the power behind this approach to unit testing sprocs.

Imports SampleApplication.Library
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class ProductRepositoryUnitTest
    Inherits TransactionFixture
    Private mRepository As ProductRepository

    <TestMethod()> _
    Public Sub Should_Insert_Update_And_Delete_Product()
        mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction)

        '** Create a test product to manipulate throughout **'
        Dim Product As New Product()
        Product.ProductName = "TestProduct"
        Product.SupplierID = 1
        Product.CategoryID = 2
        Product.QuantityPerUnit = "10 boxes of stuff"
        Product.UnitPrice = 14.95
        Product.UnitsInStock = 22
        Product.UnitsOnOrder = 19
        Product.ReorderLevel = 12

        '** Insert the new product object into SQL using your insert sproc **'
        mRepository.InsertProduct(Product)

        '** Select the product object that was just inserted and verify it does exist **'
        '** Using your GetProductById sproc **'
        Dim Product2 As Product = mRepository.GetProduct(Product.ID)
        Assert.AreEqual("TestProduct", Product2.ProductName)
        Assert.AreEqual(1, Product2.SupplierID)
        Assert.AreEqual(2, Product2.CategoryID)
        Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit)
        Assert.AreEqual(14.95, Product2.UnitPrice)
        Assert.AreEqual(22, Product2.UnitsInStock)
        Assert.AreEqual(19, Product2.UnitsOnOrder)
        Assert.AreEqual(12, Product2.ReorderLevel)

        '** Update the product object **'
        Product2.ProductName = "UpdatedTestProduct"
        Product2.SupplierID = 2
        Product2.CategoryID = 1
        Product2.QuantityPerUnit = "a box of stuff"
        Product2.UnitPrice = 16.95
        Product2.UnitsInStock = 10
        Product2.UnitsOnOrder = 20
        Product2.ReorderLevel = 8
        mRepository.UpdateProduct(Product2) '** using your update sproc **'

        '** Select the product object that was just updated to verify it completed **'
        Dim Product3 As Product = mRepository.GetProduct(Product2.ID)
        Assert.AreEqual("UpdatedTestProduct", Product3.ProductName)
        Assert.AreEqual(2, Product3.SupplierID)
        Assert.AreEqual(1, Product3.CategoryID)
        Assert.AreEqual("a box of stuff", Product3.QuantityPerUnit)
        Assert.AreEqual(16.95, Product3.UnitPrice)
        Assert.AreEqual(10, Product3.UnitsInStock)
        Assert.AreEqual(20, Product3.UnitsOnOrder)
        Assert.AreEqual(8, Product3.ReorderLevel)

        '** Delete the product and verify it does not exist **'
        mRepository.DeleteProduct(Product3.ID)
        '** The above will use your delete product by id sproc **'
        Dim Product4 As Product = mRepository.GetProduct(Product3.ID)
        Assert.AreEqual(Nothing, Product4)
    End Sub
End Class

I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;) | Has anyone had any success in unit testing SQL stored procedures? We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off. But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users. What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler. We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process. So, the main part of my question is: has anyone ever successfully written unit tests for their stored procedures? The second part of my question is whether unit testing would be/is easier with linq? I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects, and test your linq code in a “linq to objects” situation? (I am totally new to linq so don’t know if this would even work at all) | TITLE:
Has anyone had any success in unit testing SQL stored procedures?
QUESTION:
We’ve found that the unit tests we’ve written for our C#/C++ code have really paid off. But we still have thousands of lines of business logic in stored procedures, which only really get tested in anger when our product is rolled out to a large number of users. What makes this worse is that some of these stored procedures end up being very long, because of the performance hit when passing temporary tables between SPs. This has prevented us from refactoring to make the code simpler. We have made several attempts at building unit tests around some of our key stored procedures (primarily testing the performance), but have found that setting up the test data for these tests is really hard. For example, we end up copying around test databases. In addition to this, the tests end up being really sensitive to change, and even the smallest change to a stored proc. or table requires a large amount of changes to the tests. So after many builds breaking due to these database tests failing intermittently, we’ve just had to pull them out of the build process. So, the main part of my question is: has anyone ever successfully written unit tests for their stored procedures? The second part of my question is whether unit testing would be/is easier with linq? I was thinking that rather than having to set up tables of test data, you could simply create a collection of test objects, and test your linq code in a “linq to objects” situation? (I am totally new to linq so don’t know if this would even work at all)
ANSWER:
I ran into this same issue a while back and found that if I created a simple abstract base class for data access that allowed me to inject a connection and transaction, I could unit test my sprocs to see if they did the work in SQL that I asked them to do and then rollback so none of the test data is left in the db. This felt better than the usual "run a script to setup my test db, then after the tests run do a cleanup of the junk/test data". This also felt closer to unit testing because these tests could be run alone w/out having a great deal of "everything in the db needs to be 'just so' before I run these tests". Here is a snippet of the abstract base class used for data access:

Public MustInherit Class Repository(Of T As Class)
    Implements IRepository(Of T)
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString
    Private mConnection As IDbConnection
    Private mTransaction As IDbTransaction

    Public Sub New()
        mConnection = Nothing
        mTransaction = Nothing
    End Sub

    Public Sub New(ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        mConnection = connection
        mTransaction = transaction
    End Sub

    Public MustOverride Function BuildEntity(ByVal cmd As SqlCommand) As List(Of T)

    Public Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T) Implements IRepository(Of T).ExecuteReader
        Dim entityList As List(Of T)
        If Not mConnection Is Nothing Then
            Using cmd As SqlCommand = mConnection.CreateCommand()
                cmd.Transaction = mTransaction
                cmd.CommandType = Parameter.Type
                cmd.CommandText = Parameter.Text
                If Not Parameter.Items Is Nothing Then
                    For Each param As SqlParameter In Parameter.Items
                        cmd.Parameters.Add(param)
                    Next
                End If
                entityList = BuildEntity(cmd)
                If Not entityList Is Nothing Then
                    Return entityList
                End If
            End Using
        Else
            Using conn As SqlConnection = New SqlConnection(mConnectionString)
                Using cmd As SqlCommand = conn.CreateCommand()
                    cmd.CommandType = Parameter.Type
                    cmd.CommandText = Parameter.Text
                    If Not Parameter.Items Is Nothing Then
                        For Each param As SqlParameter In Parameter.Items
                            cmd.Parameters.Add(param)
                        Next
                    End If
                    conn.Open()
                    entityList = BuildEntity(cmd)
                    If Not entityList Is Nothing Then
                        Return entityList
                    End If
                End Using
            End Using
        End If
        Return Nothing
    End Function
End Class

Next you will see a sample data access class using the above base to get a list of products:

Public Class ProductRepository
    Inherits Repository(Of Product)
    Implements IProductRepository
    Private mCache As IHttpCache

    'This constructor is what you will use in your app
    Public Sub New(ByVal cache As IHttpCache)
        MyBase.New()
        mCache = cache
    End Sub

    'This constructor is only used for testing so we can inject a connection/transaction and have them rolled back after the test
    Public Sub New(ByVal cache As IHttpCache, ByVal connection As IDbConnection, ByVal transaction As IDbTransaction)
        MyBase.New(connection, transaction)
        mCache = cache
    End Sub

    Public Function GetProducts() As System.Collections.Generic.List(Of Product) Implements IProductRepository.GetProducts
        Dim Parameter As New Parameter()
        Parameter.Type = CommandType.StoredProcedure
        Parameter.Text = "spGetProducts"
        Dim productList As List(Of Product)
        productList = MyBase.ExecuteReader(Parameter)
        Return productList
    End Function

    'This function is used in each class that inherits from the base data access class
    'so we can keep all the boring left-right mapping code in 1 place per object
    Public Overrides Function BuildEntity(ByVal cmd As System.Data.SqlClient.SqlCommand) As System.Collections.Generic.List(Of Product)
        Dim productList As New List(Of Product)
        Using reader As SqlDataReader = cmd.ExecuteReader()
            Dim product As Product
            While reader.Read()
                product = New Product()
                product.ID = reader("ProductID")
                product.SupplierID = reader("SupplierID")
                product.CategoryID = reader("CategoryID")
                product.ProductName = reader("ProductName")
                product.QuantityPerUnit = reader("QuantityPerUnit")
                product.UnitPrice = reader("UnitPrice")
                product.UnitsInStock = reader("UnitsInStock")
                product.UnitsOnOrder = reader("UnitsOnOrder")
                product.ReorderLevel = reader("ReorderLevel")
                productList.Add(product)
            End While
            If productList.Count > 0 Then
                Return productList
            End If
        End Using
        Return Nothing
    End Function
End Class

And now in your unit test you can also inherit from a very simple base class that does your setup / rollback work - or keep this on a per unit test basis. Below is the simple testing base class I used:

Imports System.Configuration
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualStudio.TestTools.UnitTesting
Public MustInherit Class TransactionFixture
    Protected mConnection As IDbConnection
    Protected mTransaction As IDbTransaction
    Private mConnectionString As String = ConfigurationManager.ConnectionStrings("Northwind.ConnectionString").ConnectionString

    <TestInitialize()> _
    Public Sub CreateConnectionAndBeginTran()
        mConnection = New SqlConnection(mConnectionString)
        mConnection.Open()
        mTransaction = mConnection.BeginTransaction()
    End Sub

    <TestCleanup()> _
    Public Sub RollbackTranAndCloseConnection()
        mTransaction.Rollback()
        mTransaction.Dispose()
        mConnection.Close()
        mConnection.Dispose()
    End Sub
End Class

And finally - the below is a simple test using that test base class that shows how to test the entire CRUD cycle to make sure all the sprocs do their job and that your ado.net code does the left-right mapping correctly. I know this doesn't test the "spGetProducts" sproc used in the above data access sample, but you should see the power behind this approach to unit testing sprocs.

Imports SampleApplication.Library
Imports System.Collections.Generic
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass()> _
Public Class ProductRepositoryUnitTest
    Inherits TransactionFixture
    Private mRepository As ProductRepository

    <TestMethod()> _
    Public Sub Should_Insert_Update_And_Delete_Product()
        mRepository = New ProductRepository(New HttpCache(), mConnection, mTransaction)

        '** Create a test product to manipulate throughout **'
        Dim Product As New Product()
        Product.ProductName = "TestProduct"
        Product.SupplierID = 1
        Product.CategoryID = 2
        Product.QuantityPerUnit = "10 boxes of stuff"
        Product.UnitPrice = 14.95
        Product.UnitsInStock = 22
        Product.UnitsOnOrder = 19
        Product.ReorderLevel = 12

        '** Insert the new product object into SQL using your insert sproc **'
        mRepository.InsertProduct(Product)

        '** Select the product object that was just inserted and verify it does exist **'
        '** Using your GetProductById sproc **'
        Dim Product2 As Product = mRepository.GetProduct(Product.ID)
        Assert.AreEqual("TestProduct", Product2.ProductName)
        Assert.AreEqual(1, Product2.SupplierID)
        Assert.AreEqual(2, Product2.CategoryID)
        Assert.AreEqual("10 boxes of stuff", Product2.QuantityPerUnit)
        Assert.AreEqual(14.95, Product2.UnitPrice)
        Assert.AreEqual(22, Product2.UnitsInStock)
        Assert.AreEqual(19, Product2.UnitsOnOrder)
        Assert.AreEqual(12, Product2.ReorderLevel)

        '** Update the product object **'
        Product2.ProductName = "UpdatedTestProduct"
        Product2.SupplierID = 2
        Product2.CategoryID = 1
        Product2.QuantityPerUnit = "a box of stuff"
        Product2.UnitPrice = 16.95
        Product2.UnitsInStock = 10
        Product2.UnitsOnOrder = 20
        Product2.ReorderLevel = 8
        mRepository.UpdateProduct(Product2) '** using your update sproc **'

        '** Select the product object that was just updated to verify it completed **'
        Dim Product3 As Product = mRepository.GetProduct(Product2.ID)
        Assert.AreEqual("UpdatedTestProduct", Product3.ProductName)
        Assert.AreEqual(2, Product3.SupplierID)
        Assert.AreEqual(1, Product3.CategoryID)
        Assert.AreEqual("a box of stuff", Product3.QuantityPerUnit)
        Assert.AreEqual(16.95, Product3.UnitPrice)
        Assert.AreEqual(10, Product3.UnitsInStock)
        Assert.AreEqual(20, Product3.UnitsOnOrder)
        Assert.AreEqual(8, Product3.ReorderLevel)

        '** Delete the product and verify it does not exist **'
        mRepository.DeleteProduct(Product3.ID)
        '** The above will use your delete product by id sproc **'
        Dim Product4 As Product = mRepository.GetProduct(Product3.ID)
        Assert.AreEqual(Nothing, Product4)
    End Sub
End Class

I know this is a long example, but it helped to have a reusable class for the data access work, and yet another reusable class for my testing so I didn't have to do the setup/teardown work over and over again ;) | [
"sql",
"unit-testing",
"linq-to-sql"
] | 37 | 12 | 7,437 | 16 | 0 | 2008-08-15T15:29:14.837000 | 2008-08-24T17:20:14.487000 |
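The snippets in the answer above use a Parameter helper type and an IRepository interface that the poster never shows. The following is a minimal sketch consistent with how they are used - the member names are inferred from the calls (Parameter.Type, Parameter.Text, Parameter.Items), so treat it as an assumption rather than the original code:

Imports System.Collections.Generic
Imports System.Data
Imports System.Data.SqlClient

' Inferred: carries the command type, the command text / sproc name, and the parameters.
Public Class Parameter
    Public Type As CommandType
    Public Text As String
    Public Items As List(Of SqlParameter)
End Class

' Inferred from "Implements IRepository(Of T).ExecuteReader" in the base class.
Public Interface IRepository(Of T As Class)
    Function ExecuteReader(ByVal Parameter As Parameter) As List(Of T)
End Interface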
12,385 | 12,436 | How to attach a ChangeEvent handler to an inherited dependency property? | How would you attach a propertychanged callback to a property that is inherited? Like such: class A { DependencyProperty prop; }
class B: A { //... prop.AddListener(PropertyChangeCallback); } | (edited to remove recommendation to use DependencyPropertyDescriptor, which is not available in Silverlight) PropertyDescriptor AddValueChanged Alternative | How to attach a ChangeEvent handler to an inherited dependency property? How would you attach a propertychanged callback to a property that is inherited? Like such: class A { DependencyProperty prop; }
class B: A { //... prop.AddListener(PropertyChangeCallback); } | TITLE:
How to attach a ChangeEvent handler to an inherited dependency property?
QUESTION:
How would you attach a propertychanged callback to a property that is inherited? Like such: class A { DependencyProperty prop; }
class B: A { //... prop.AddListener(PropertyChangeCallback); }
ANSWER:
(edited to remove recommendation to use DependencyPropertyDescriptor, which is not available in Silverlight) PropertyDescriptor AddValueChanged Alternative | [
".net",
"silverlight",
"dependency-properties"
] | 2 | 4 | 2,473 | 3 | 0 | 2008-08-15T15:34:20.367000 | 2008-08-15T16:14:18.913000 |
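To make the terse answer above concrete: in WPF (though not in Silverlight, which is why the answer withdrew the DependencyPropertyDescriptor recommendation), a derived class can attach a property-changed callback to an inherited dependency property by overriding the property's metadata in its shared constructor. A sketch, where the TextBox subclass and its callback are hypothetical stand-ins for class B:

Imports System
Imports System.Windows
Imports System.Windows.Controls

Public Class WatchedTextBox
    Inherits TextBox

    Shared Sub New()
        ' Re-register metadata for the inherited TextProperty on this subclass,
        ' supplying a PropertyChangedCallback.
        TextProperty.OverrideMetadata( _
            GetType(WatchedTextBox), _
            New FrameworkPropertyMetadata(New PropertyChangedCallback(AddressOf OnTextPropertyChanged)))
    End Sub

    Private Shared Sub OnTextPropertyChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs)
        ' React to changes of the inherited property here.
        Console.WriteLine("Text changed: {0} -> {1}", e.OldValue, e.NewValue)
    End Sub
End Class

In Silverlight, where OverrideMetadata is not available, the "AddValueChanged Alternative" the answer points to describes a workaround in this spirit: bind a helper dependency property to the property you want to watch and react in that helper's change callback.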
12,397 | 19,051 | .NET VirtualPathProviders and Pre-Compilation | We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET. We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!! I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!). Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)? | Unfortunately that is not officially supported. See the following MSDN article. If a Web site is precompiled for deployment, content provided by a VirtualPathProvider instance is not compiled, and no VirtualPathProvider instances are used by the precompiled site. The site you referred to is an unofficial workaround. I don't think it's been fixed in .NET 3.5 SP1 | .NET VirtualPathProviders and Pre-Compilation We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET. We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!! I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!). Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)? | TITLE:
.NET VirtualPathProviders and Pre-Compilation
QUESTION:
We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET. We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!! I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!). Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)?
ANSWER:
Unfortunately that is not officially supported. See the following MSDN article. If a Web site is precompiled for deployment, content provided by a VirtualPathProvider instance is not compiled, and no VirtualPathProvider instances are used by the precompiled site. The site you referred to is an unofficial workaround. I don't think it's been fixed in .NET 3.5 SP1
"asp.net",
"virtualpathprovider"
] | 7 | 4 | 2,563 | 1 | 0 | 2008-08-15T15:41:53.160000 | 2008-08-20T23:03:37.457000 |
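For context, this is roughly what the standard registration looks like in a non-precompiled site - the mechanism that, per the answer above, precompilation bypasses. The provider subclass here is a hypothetical stub; the AppInitialize hook is a real ASP.NET convention (a Shared AppInitialize method on a class in App_Code runs at application startup):

Imports System.Web.Hosting

' Hypothetical provider; a real one would override FileExists, GetFile, etc.
Public Class MyVirtualPathProvider
    Inherits VirtualPathProvider
End Class

' Lives in App_Code; ASP.NET calls AppInitialize automatically at startup.
Public Class AppStart
    Public Shared Sub AppInitialize()
        ' In a precompiled site this registration has no effect on compilation,
        ' which is the limitation the MSDN quote above describes.
        HostingEnvironment.RegisterVirtualPathProvider(New MyVirtualPathProvider())
    End Sub
End Class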
12,401 | 3,003,400 | FOSS ASP.Net Session Replication Solution? | I've been searching (with little success) for a free/opensource session clustering and replication solution for asp.net. I've run across the usual suspects (indexus sharedcache, memcached), however, each has some limitations. Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though. Memcached - Little replication/failover support without going to a db backend. Several SF.Net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial. Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects. I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world. Are there any suitable alternatives available in the .NET world? | BTW Windows Server AppFabric is out of beta. That's what I mentioned in my previous post. Here is the link on general availability: http://blogs.technet.com/b/appfabric/archive/2010/06/07/windows-server-appfabric-now-generally-available.aspx Which specific features do you think one can get on NCache and not on AppFabric? | FOSS ASP.Net Session Replication Solution? I've been searching (with little success) for a free/opensource session clustering and replication solution for asp.net. I've run across the usual suspects (indexus sharedcache, memcached), however, each has some limitations. Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though. Memcached - Little replication/failover support without going to a db backend. Several SF.Net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial. Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects. I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world. Are there any suitable alternatives available in the .NET world? | TITLE:
FOSS ASP.Net Session Replication Solution?
QUESTION:
I've been searching (with little success) for a free/opensource session clustering and replication solution for asp.net. I've run across the usual suspects (indexus sharedcache, memcached), however, each has some limitations. Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though. Memcached - Little replication/failover support without going to a db backend. Several SF.Net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial. Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects. I am fairly used to the Java world where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world. Are there any suitable alternatives available in the .NET world?
ANSWER:
BTW Windows Server AppFabric is out of beta. That's what I mentioned in my previous post. Here is the link on general availability: http://blogs.technet.com/b/appfabric/archive/2010/06/07/windows-server-appfabric-now-generally-available.aspx Which specific features do you think one can get on NCache and not on AppFabric?
"asp.net",
"session",
"cluster-computing",
"failover",
"session-replication"
] | 6 | 1 | 1,351 | 4 | 0 | 2008-08-15T15:45:21.217000 | 2010-06-09T06:00:07.663000 |
12,406 | 12,424 | Is it possible to slipstream the Visual Studio 2008 SP1 install? | From what I've read, VS 2008 SP1 and Team Foundation Server SP1 packages are traditional service packs that require you to first install the original versions before you will be able to install the SP. Is there a way, supported or not, to slipstream the install? | Here's an MSDN forum post in which an MSFTie indicates it will be possible and that details are forthcoming. Another poster is relaying results of her almost-successful attempt. Looks like this will be doable soon. Related: how to slipstream Team Foundation Server 2008 SP1 (TFS 2008 SP1) | Is it possible to slipstream the Visual Studio 2008 SP1 install? From what I've read, VS 2008 SP1 and Team Foundation Server SP1 packages are traditional service packs that require you to first install the original versions before you will be able to install the SP. Is there a way, supported or not, to slipstream the install? | TITLE:
Is it possible to slipstream the Visual Studio 2008 SP1 install?
QUESTION:
From what I've read, VS 2008 SP1 and Team Foundation Server SP1 packages are traditional service packs that require you to first install the original versions before you will be able to install the SP. Is there a way, supported or not, to slipstream the install?
ANSWER:
Here's an MSDN forum post in which an MSFTie indicates it will be possible and that details are forthcoming. Another poster is relaying results of her almost-successful attempt. Looks like this will be doable soon. Related: how to slipstream Team Foundation Server 2008 SP1 (TFS 2008 SP1) | [
"visual-studio-2008",
"visual-studio-2008-sp1"
] | 9 | 3 | 8,227 | 4 | 0 | 2008-08-15T15:48:58.527000 | 2008-08-15T16:06:39.427000 |
12,476 | 12,484 | Why is my asp.net application throwing ThreadAbortException? | This is a self-explanatory question: Why does this thing bubble into my try/catch blocks even when nothing is wrong? Why is it showing up in my log, hundreds of times? I know it's a newb question, but if this site is gonna get search ranking and draw in newbs we have to ask them. | This is probably coming from a Response.Redirect call. Check this link for an explanation: http://dotnet.org.za/armand/archive/2004/11/16/7088.aspx (In most cases, calling Response.Redirect(url, false) fixes the problem) | Why is my asp.net application throwing ThreadAbortException? This is a self-explanatory question: Why does this thing bubble into my try/catch blocks even when nothing is wrong? Why is it showing up in my log, hundreds of times? I know it's a newb question, but if this site is gonna get search ranking and draw in newbs we have to ask them. | TITLE:
Why is my asp.net application throwing ThreadAbortException?
QUESTION:
This is a self-explanatory question: Why does this thing bubble into my try/catch blocks even when nothing is wrong? Why is it showing up in my log, hundreds of times? I know it's a newb question, but if this site is gonna get search ranking and draw in newbs we have to ask them.
ANSWER:
This is probably coming from a Response.Redirect call. Check this link for an explanation: http://dotnet.org.za/armand/archive/2004/11/16/7088.aspx (In most cases, calling Response.Redirect(url, false) fixes the problem) | [
"asp.net",
"multithreading"
] | 24 | 20 | 12,972 | 5 | 0 | 2008-08-15T17:02:44.867000 | 2008-08-15T17:11:32.213000 |
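A small VB.NET sketch of the fix the answer points to, assuming an ASP.NET WebForms page (the class and helper method are hypothetical). The one-argument Response.Redirect(url) calls Response.End, which aborts the worker thread - that abort is the ThreadAbortException landing in the logs:

Imports System.Web
Imports System.Web.UI

Public Class RedirectExample
    Inherits Page

    ' Redirect without triggering Thread.Abort.
    Protected Sub SafeRedirect(ByVal url As String)
        ' Passing False skips Response.End, so no ThreadAbortException is raised.
        Response.Redirect(url, False)
        ' Tell ASP.NET to skip the remaining pipeline events instead.
        Context.ApplicationInstance.CompleteRequest()
    End Sub
End Class

CompleteRequest makes the runtime jump ahead to the EndRequest event, so the rest of the page lifecycle doesn't keep executing after the redirect has been queued.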
12,482 | 462,529 | How can you publish a ClickOnce application through CruiseControl.NET? | I have CruiseControl.NET Version 1.4 set up on my development server. Whenever a developer checks in code, it makes a compile. Now we're at a place where we can start giving our application to the testers. We'd like to use ClickOnce to distribute the application, with the idea being that when a tester goes to test the application, they have the latest build. I can't find a way to make that happen with CruiseControl.NET. We're using MSBUILD to perform the builds. | Thanks for all the help. The final solution we implemented took a bit from every answer. We found it easier to handle working with multiple environments using simple batch files. I'm not suggesting this is the best way to do this, but for our given scenario and requirements, this worked well. Replace "Project" with your project name and "Environment" with your environment name (dev, test, stage, production, whatever). Here is the tasks area of our "ccnet.config" file:

<tasks>
  <exec>
    <executable>F:\Source\Project\Environment\CruiseControl\CopySettings.bat</executable>
  </exec>
  <msbuild>
    <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
    <workingDirectory>F:\Source\Project\Environment\</workingDirectory>
    <projectFile>Project.sln</projectFile>
    <buildArgs>/noconsolelogger /p:Configuration=Debug /v:diag</buildArgs>
    <targets>Rebuild</targets>
    <timeout>0</timeout>
    <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
  </msbuild>
  <exec>
    <executable>F:\Source\Project\Environment\CruiseControl\Publish.bat</executable>
  </exec>
</tasks>

The first thing you will notice is that CopySettings.bat runs. This copies specific settings for the environment, such as database connections. Next, the standard MSBUILD task runs. Any compile errors are caught here and handled as normal. The last thing to execute is Publish.bat. This actually performs a MSBUILD "rebuild" again from command line, and parameters from CruiseControl are automatically passed in and built. Next, MSBUILD is called for the "publish" target. The exact same parameters are given to the publish as the rebuild was issued. This keeps the build numbers in sync. Also, our executables are named differently (i.e. - ProjectDev and ProjectTest). We end up with different version numbers and names, and this allows ClickOnce to do its thing. The last part of Publish.bat copies the actual files to their new homes. We don't use the publish.htm as all our users are on the network, we just give them a shortcut to the manifest file on their desktop and they can click and always be running the correct executable with a version number that ties out in CruiseControl. Here is CopySettings.bat:

XCOPY "F:\Source\Project\Environment\CruiseControl\Project\app.config" "F:\Source\Project\Environment\Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.Designer.vb" "F:\Source\Project\Environment\Project\My Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.settings" "F:\Source\Project\Environment\Project\My Project" /Y /I /R

And lastly, here is Publish.bat:

C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:rebuild "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationRevision=%CCNetLabel% /property:AssemblyName="ProjectEnvironment" /property:PublishUrl="\\Server\bin\Project\Environment\\"
C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:publish "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationVersion="1.0.0.%CCNetLabel%" /property:AssemblyVersion="1.0.0.%CCNetLabel%" /property:AssemblyName="ProjectEnvironment"
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish" "F:\Binary\Project\Environment" /Y /I
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish\Application Files" "F:\Binary\Project\Environment\Application Files" /Y /I /S

Like I said, it's probably not done the way that CruiseControl and MSBUILD developers had intended things to work, but it does work. If you need to get this working yesterday, it might be the solution you're looking for. Good luck! | How can you publish a ClickOnce application through CruiseControl.NET? I have CruiseControl.NET Version 1.4 set up on my development server. Whenever a developer checks in code, it makes a compile. Now we're at a place where we can start giving our application to the testers. We'd like to use ClickOnce to distribute the application, with the idea being that when a tester goes to test the application, they have the latest build. I can't find a way to make that happen with CruiseControl.NET. We're using MSBUILD to perform the builds. | TITLE:
How can you publish a ClickOnce application through CruiseControl.NET?
QUESTION:
I have CruiseControl.NET Version 1.4 set up on my development server. Whenever a developer checks in code, it makes a compile. Now we're at a place where we can start giving our application to the testers. We'd like to use ClickOnce to distribute the application, with the idea being that when a tester goes to test the application, they have the latest build. I can't find a way to make that happen with CruiseControl.NET. We're using MSBUILD to perform the builds.
ANSWER:
Thanks for all the help. The final solution we implemented took a bit from every answer. We found it easier to handle working with multiple environments using simple batch files. I'm not suggesting this is the best way to do this, but for our given scenario and requirements, this worked well. Replace "Project" with your project name and "Environment" with your environment name (dev, test, stage, production, whatever). Here is the tasks area of our "ccnet.config" file:

<tasks>
  <exec>
    <executable>F:\Source\Project\Environment\CruiseControl\CopySettings.bat</executable>
  </exec>
  <msbuild>
    <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
    <workingDirectory>F:\Source\Project\Environment\</workingDirectory>
    <projectFile>Project.sln</projectFile>
    <buildArgs>/noconsolelogger /p:Configuration=Debug /v:diag</buildArgs>
    <targets>Rebuild</targets>
    <timeout>0</timeout>
    <logger>ThoughtWorks.CruiseControl.MsBuild.XmlLogger,ThoughtWorks.CruiseControl.MsBuild.dll</logger>
  </msbuild>
  <exec>
    <executable>F:\Source\Project\Environment\CruiseControl\Publish.bat</executable>
  </exec>
</tasks>

The first thing you will notice is that CopySettings.bat runs. This copies specific settings for the environment, such as database connections. Next, the standard MSBUILD task runs. Any compile errors are caught here and handled as normal. The last thing to execute is Publish.bat. This actually performs a MSBUILD "rebuild" again from command line, and parameters from CruiseControl are automatically passed in and built. Next, MSBUILD is called for the "publish" target. The exact same parameters are given to the publish as the rebuild was issued. This keeps the build numbers in sync. Also, our executables are named differently (i.e. - ProjectDev and ProjectTest). We end up with different version numbers and names, and this allows ClickOnce to do its thing. The last part of Publish.bat copies the actual files to their new homes. We don't use the publish.htm as all our users are on the network, we just give them a shortcut to the manifest file on their desktop and they can click and always be running the correct executable with a version number that ties out in CruiseControl. Here is CopySettings.bat:

XCOPY "F:\Source\Project\Environment\CruiseControl\Project\app.config" "F:\Source\Project\Environment\Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.Designer.vb" "F:\Source\Project\Environment\Project\My Project" /Y /I /R
XCOPY "F:\Source\Project\Environment\CruiseControl\Project\My Project\Settings.settings" "F:\Source\Project\Environment\Project\My Project" /Y /I /R

And lastly, here is Publish.bat:

C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:rebuild "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationRevision=%CCNetLabel% /property:AssemblyName="ProjectEnvironment" /property:PublishUrl="\\Server\bin\Project\Environment\\"
C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe /target:publish "F:\Source\Project\Environment\Project\Project.vbproj" /property:ApplicationVersion="1.0.0.%CCNetLabel%" /property:AssemblyVersion="1.0.0.%CCNetLabel%" /property:AssemblyName="ProjectEnvironment"
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish" "F:\Binary\Project\Environment" /Y /I
XCOPY "F:\Source\Project\Environment\Project\bin\Debug\app.publish\Application Files" "F:\Binary\Project\Environment\Application Files" /Y /I /S

Like I said, it's probably not done the way that CruiseControl and MSBUILD developers had intended things to work, but it does work. If you need to get this working yesterday, it might be the solution you're looking for. Good luck! | [
"msbuild",
"clickonce",
"cruisecontrol.net",
"publish"
] | 24 | 12 | 13,086 | 5 | 0 | 2008-08-15T17:08:19.583000 | 2009-01-20T18:37:54.870000 |
12,492 | 12,534 | Pretty printing XML files on Emacs | I use emacs to edit my xml files (nxml-mode) and the files were generated by machine don't have any pretty formatting of the tags. I have searched for pretty printing the entire file with indentation and saving it, but wasn't able to find an automatic way. Is there a way? Or atleast some editor on linux which can do it. | I use nXML mode for editing and Tidy when I want to format and indent XML or HTML. There is also an Emacs interface to Tidy. | Pretty printing XML files on Emacs I use emacs to edit my xml files (nxml-mode) and the files were generated by machine don't have any pretty formatting of the tags. I have searched for pretty printing the entire file with indentation and saving it, but wasn't able to find an automatic way. Is there a way? Or atleast some editor on linux which can do it. | TITLE:
Pretty printing XML files on Emacs
QUESTION:
I use Emacs to edit my XML files (nxml-mode), and the files, which were generated by machine, don't have any pretty formatting of the tags. I have searched for a way to pretty print the entire file with indentation and save it, but wasn't able to find an automatic way. Is there a way? Or at least some editor on Linux which can do it?
ANSWER:
I use nXML mode for editing and Tidy when I want to format and indent XML or HTML. There is also an Emacs interface to Tidy. | [
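For the command-line route, a typical Tidy invocation for XML looks like this (the filename is a placeholder; flags as in the standard HTML Tidy build):

tidy -xml -i -q -m machine-generated.xml

Here -xml tells Tidy the input is well-formed XML rather than HTML, -i indents element content, -q suppresses nonessential output, and -m modifies the file in place.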
"xml",
"emacs",
"editor"
] | 90 | 25 | 45,470 | 15 | 0 | 2008-08-15T17:17:14.823000 | 2008-08-15T17:47:29.893000 |
12,509 | 12,586 | Why Are People Still Creating RSS Feeds? | ...instead of using the Atom syndication format? Atom is a well-defined, general-purpose XML syndication format. RSS is fractured into four different versions. All the major feed readers have supported Atom for as long as I can remember, so why isn't its use more prevalent? Worst of all are sites that provide feeds in both formats - what's the point?! UPDATE (18 August): Interestingly, this site itself is using Atom for its feeds rather than RSS. | The fundamental thing that the Atom creators didn't understand (and that the Atom supporters still don't understand), is that Atom isn't somehow separate from RSS. There's this idea that RSS fractured, and that somehow Atom fixes that problem. But it doesn't. Atom is just another RSS splinter. A new name doesn't change the fact that it's just one more standard competing to do the same job, a job for which any of the competing standards are sufficient. No one outside a fairly small group of people care at all which standard is used. They just want it to work. Atom, RSS 2.0, RSS 1.0, RSS 401(k), whatever. As long as it works, the users are happy. The RSS "brand" very much defines the entire feed category, though, so on the rare occasion that someone does know enough to choose, they will tend to choose RSS, because it's got "the name." They will also tend to choose RSS 2.0, because it's got the bigger number. RSS, and especially RSS 2.0, are very much entrenched in the feed "industry." Atom hasn't taken off because it doesn't bring much except a new name. Why switch away from RSS when it works just fine? And why even bother using Atom on new projects if RSS is sufficient? Switching to a new feed format mostly means extra time spent learning the new format. If nothing else Apple's exclusive use of RSS 2.0 for podcasts means that RSS 2.0 is here for the foreseeable future. | Why Are People Still Creating RSS Feeds? ...instead of using the Atom syndication format? Atom is a well-defined, general-purpose XML syndication format. RSS is fractured into four different versions. All the major feed readers have supported Atom for as long as I can remember, so why isn't its use more prevalent? Worst of all are sites that provide feeds in both formats - what's the point?! UPDATE (18 August): Interestingly, this site itself is using Atom for its feeds rather than RSS. | TITLE:
Why Are People Still Creating RSS Feeds?
QUESTION:
...instead of using the Atom syndication format? Atom is a well-defined, general-purpose XML syndication format. RSS is fractured into four different versions. All the major feed readers have supported Atom for as long as I can remember, so why isn't its use more prevalent? Worst of all are sites that provide feeds in both formats - what's the point?! UPDATE (18 August): Interestingly, this site itself is using Atom for its feeds rather than RSS.
ANSWER:
The fundamental thing that the Atom creators didn't understand (and that the Atom supporters still don't understand) is that Atom isn't somehow separate from RSS. There's this idea that RSS fractured, and that somehow Atom fixes that problem. But it doesn't. Atom is just another RSS splinter. A new name doesn't change the fact that it's just one more standard competing to do the same job, a job for which any of the competing standards is sufficient. No one outside a fairly small group of people cares at all which standard is used. They just want it to work. Atom, RSS 2.0, RSS 1.0, RSS 401(k), whatever. As long as it works, the users are happy. The RSS "brand" very much defines the entire feed category, though, so on the rare occasion that someone does know enough to choose, they will tend to choose RSS, because it's got "the name." They will also tend to choose RSS 2.0, because it's got the bigger number. RSS, and especially RSS 2.0, are very much entrenched in the feed "industry." Atom hasn't taken off because it doesn't bring much except a new name. Why switch away from RSS when it works just fine? And why even bother using Atom on new projects if RSS is sufficient? Switching to a new feed format mostly means extra time spent learning the new format. If nothing else, Apple's exclusive use of RSS 2.0 for podcasts means that RSS 2.0 is here for the foreseeable future.
"xml",
"rss",
"atom-feed"
] | 40 | 111 | 6,083 | 12 | 0 | 2008-08-15T17:27:52.213000 | 2008-08-15T18:36:55.007000 |
12,516 | 12,861 | Expression Evaluation and Tree Walking using polymorphism? (ala Steve Yegge) | This morning, I was reading Steve Yegge's: When Polymorphism Fails, when I came across a question that a co-worker of his used to ask potential employees when they came for their interview at Amazon. As an example of polymorphism in action, let's look at the classic "eval" interview question, which (as far as I know) was brought to Amazon by Ron Braunstein. The question is quite a rich one, as it manages to probe a wide variety of important skills: OOP design, recursion, binary trees, polymorphism and runtime typing, general coding skills, and (if you want to make it extra hard) parsing theory. At some point, the candidate hopefully realizes that you can represent an arithmetic expression as a binary tree, assuming you're only using binary operators such as "+", "-", "*", "/". The leaf nodes are all numbers, and the internal nodes are all operators. Evaluating the expression means walking the tree. If the candidate doesn't realize this, you can gently lead them to it, or if necessary, just tell them. Even if you tell them, it's still an interesting problem. The first half of the question, which some people (whose names I will protect to my dying breath, but their initials are Willie Lewis) feel is a Job Requirement If You Want To Call Yourself A Developer And Work At Amazon, is actually kinda hard. The question is: how do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree. We may have an ADJ challenge on this question at some point. The second half is: let's say this is a 2-person project, and your partner, who we'll call "Willie", is responsible for transforming the string expression into a tree. You get the easy part: you need to decide what classes Willie is to construct the tree with. You can do it in any language, but make sure you pick one, or Willie will hand you assembly language. If he's feeling ornery, it will be for a processor that is no longer manufactured in production. You'd be amazed at how many candidates boff this one. I won't give away the answer, but a Standard Bad Solution involves the use of a switch or case statment (or just good old-fashioned cascaded-ifs). A Slightly Better Solution involves using a table of function pointers, and the Probably Best Solution involves using polymorphism. I encourage you to work through it sometime. Fun stuff! So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism? Feel free to tackle one, two, or all three. [update: title modified to better match what most of the answers have been.] | Polymorphic Tree Walking, Python version #!/usr/bin/python
class Node: """base class, you should not process one of these""" def process(self): raise('you should not be processing a node')
class BinaryNode(Node): """base class for binary nodes""" def __init__(self, _left, _right): self.left = _left self.right = _right def process(self): raise('you should not be processing a binarynode')
class Plus(BinaryNode): def process(self): return self.left.process() + self.right.process()
class Minus(BinaryNode): def process(self): return self.left.process() - self.right.process()
class Mul(BinaryNode): def process(self): return self.left.process() * self.right.process()
class Div(BinaryNode): def process(self): return self.left.process() / self.right.process()
class Num(Node): def __init__(self, _value): self.value = _value def process(self): return self.value
def demo(n): print n.process()
demo(Num(2)) # 2 demo(Plus(Num(2),Num(5))) # 2 + 3 demo(Plus(Mul(Num(2),Num(3)),Div(Num(10),Num(5)))) # (2 * 3) + (10 / 2) The tests are just building up the binary trees by using constructors. program structure: abstract base class: Node all Nodes inherit from this class abstract base class: BinaryNode all binary operators inherit from this class process method does the work of evaluting the expression and returning the result binary operator classes: Plus,Minus,Mul,Div two child nodes, one each for left side and right side subexpressions number class: Num holds a leaf-node numeric value, e.g. 17 or 42 | Expression Evaluation and Tree Walking using polymorphism? (ala Steve Yegge) This morning, I was reading Steve Yegge's: When Polymorphism Fails, when I came across a question that a co-worker of his used to ask potential employees when they came for their interview at Amazon. As an example of polymorphism in action, let's look at the classic "eval" interview question, which (as far as I know) was brought to Amazon by Ron Braunstein. The question is quite a rich one, as it manages to probe a wide variety of important skills: OOP design, recursion, binary trees, polymorphism and runtime typing, general coding skills, and (if you want to make it extra hard) parsing theory. At some point, the candidate hopefully realizes that you can represent an arithmetic expression as a binary tree, assuming you're only using binary operators such as "+", "-", "*", "/". The leaf nodes are all numbers, and the internal nodes are all operators. Evaluating the expression means walking the tree. If the candidate doesn't realize this, you can gently lead them to it, or if necessary, just tell them. Even if you tell them, it's still an interesting problem. The first half of the question, which some people (whose names I will protect to my dying breath, but their initials are Willie Lewis) feel is a Job Requirement If You Want To Call Yourself A Developer And Work At Amazon, is actually kinda hard. The question is: how do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree. We may have an ADJ challenge on this question at some point. The second half is: let's say this is a 2-person project, and your partner, who we'll call "Willie", is responsible for transforming the string expression into a tree. You get the easy part: you need to decide what classes Willie is to construct the tree with. You can do it in any language, but make sure you pick one, or Willie will hand you assembly language. If he's feeling ornery, it will be for a processor that is no longer manufactured in production. You'd be amazed at how many candidates boff this one. I won't give away the answer, but a Standard Bad Solution involves the use of a switch or case statment (or just good old-fashioned cascaded-ifs). A Slightly Better Solution involves using a table of function pointers, and the Probably Best Solution involves using polymorphism. I encourage you to work through it sometime. Fun stuff! So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism? Feel free to tackle one, two, or all three. [update: title modified to better match what most of the answers have been.] | TITLE:
Expression Evaluation and Tree Walking using polymorphism? (ala Steve Yegge)
QUESTION:
This morning, I was reading Steve Yegge's: When Polymorphism Fails, when I came across a question that a co-worker of his used to ask potential employees when they came for their interview at Amazon. As an example of polymorphism in action, let's look at the classic "eval" interview question, which (as far as I know) was brought to Amazon by Ron Braunstein. The question is quite a rich one, as it manages to probe a wide variety of important skills: OOP design, recursion, binary trees, polymorphism and runtime typing, general coding skills, and (if you want to make it extra hard) parsing theory. At some point, the candidate hopefully realizes that you can represent an arithmetic expression as a binary tree, assuming you're only using binary operators such as "+", "-", "*", "/". The leaf nodes are all numbers, and the internal nodes are all operators. Evaluating the expression means walking the tree. If the candidate doesn't realize this, you can gently lead them to it, or if necessary, just tell them. Even if you tell them, it's still an interesting problem. The first half of the question, which some people (whose names I will protect to my dying breath, but their initials are Willie Lewis) feel is a Job Requirement If You Want To Call Yourself A Developer And Work At Amazon, is actually kinda hard. The question is: how do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree. We may have an ADJ challenge on this question at some point. The second half is: let's say this is a 2-person project, and your partner, who we'll call "Willie", is responsible for transforming the string expression into a tree. You get the easy part: you need to decide what classes Willie is to construct the tree with. You can do it in any language, but make sure you pick one, or Willie will hand you assembly language. If he's feeling ornery, it will be for a processor that is no longer manufactured in production. You'd be amazed at how many candidates boff this one. I won't give away the answer, but a Standard Bad Solution involves the use of a switch or case statment (or just good old-fashioned cascaded-ifs). A Slightly Better Solution involves using a table of function pointers, and the Probably Best Solution involves using polymorphism. I encourage you to work through it sometime. Fun stuff! So, let's try to tackle the problem all three ways. How do you go from an arithmetic expression (e.g. in a string) such as "2 + (2)" to an expression tree using cascaded-if's, a table of function pointers, and/or polymorphism? Feel free to tackle one, two, or all three. [update: title modified to better match what most of the answers have been.]
ANSWER:
Polymorphic Tree Walking, Python version

#!/usr/bin/python

class Node:
    """base class, you should not process one of these"""
    def process(self):
        raise NotImplementedError('you should not be processing a Node')

class BinaryNode(Node):
    """base class for binary nodes"""
    def __init__(self, _left, _right):
        self.left = _left
        self.right = _right
    def process(self):
        raise NotImplementedError('you should not be processing a BinaryNode')

class Plus(BinaryNode):
    def process(self):
        return self.left.process() + self.right.process()

class Minus(BinaryNode):
    def process(self):
        return self.left.process() - self.right.process()

class Mul(BinaryNode):
    def process(self):
        return self.left.process() * self.right.process()

class Div(BinaryNode):
    def process(self):
        return self.left.process() / self.right.process()

class Num(Node):
    def __init__(self, _value):
        self.value = _value
    def process(self):
        return self.value

def demo(n):
    print(n.process())

demo(Num(2))                                            # 2
demo(Plus(Num(2), Num(5)))                              # 2 + 5
demo(Plus(Mul(Num(2), Num(3)), Div(Num(10), Num(5))))   # (2 * 3) + (10 / 5)

The tests are just building up the binary trees by using constructors.

Program structure:
- abstract base class Node: all nodes inherit from this class
- abstract base class BinaryNode: all binary operators inherit from this class; the process method does the work of evaluating the expression and returning the result
- binary operator classes Plus, Minus, Mul, Div: two child nodes, one each for the left-side and right-side subexpressions
- number class Num: holds a leaf-node numeric value, e.g. 17 or 42
"oop",
"recursion",
"polymorphism",
"binary-tree"
] | 28 | 13 | 7,479 | 16 | 0 | 2008-08-15T17:31:39.427000 | 2008-08-15T22:56:41.813000 |
12,523 | 12,540 | Tools for automating mouse and keyboard events sent to a windows application | What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time. | Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success. | Tools for automating mouse and keyboard events sent to a windows application What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time. | TITLE:
Tools for automating mouse and keyboard events sent to a windows application
QUESTION:
What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time.
ANSWER:
Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success. | [
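For a feel of what White-driven automation looks like, here is a rough sketch; the executable path, window title, and button identifier are placeholders, and the namespaces assume the TestStack.White packaging:

using TestStack.White;
using TestStack.White.UIItems;

class SmokeTest
{
    static void Main()
    {
        // Launch the app under test and grab its main window by title.
        var application = Application.Launch(@"C:\MyApp\MyApp.exe");
        var window = application.GetWindow("Main Window");

        // Find a button by its automation id and click it.
        var okButton = window.Get<Button>("okButton");
        okButton.Click();

        application.Kill();
    }
}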
"testing",
"automation"
] | 10 | 7 | 14,538 | 5 | 0 | 2008-08-15T17:37:06.280000 | 2008-08-15T17:50:31.730000 |
12,556 | 16,222 | What is your experience using the TIBCO General Interface? | It looks interesting and I've played around with it some --- but the development IDE in a web browser seems to be nightmare eventually. Does anyone have experience using it and what are your thoughts? | We evaluated GI a few months ago for a project but didn't end up selecting it. The IDE-in-a-browser (which is itself build with GI) actually works surprisingly well, though there are some features you normally expect from an editor that it lacks, most notably (and irritatingly) an Undo command. It's also impossible to do things like subdocument includes (practically a necessity for team development) from the IDE, though you can do them manually in the underlying XML and the IDE will respect them. In the end the main reason we didn't go with it was that it was difficult to make the resulting web application look as good as the designers really wanted. It was relatively easy to build functionality, but the components were very restrictive in look and feel. The way GI renders its own document model to HTML involves a lot of style attributes which makes skinning in CSS all but impossible. It seems to prefer making web applications that look like applications, instead of web applications that look like websites. So it would probably be great for building intranet type applications where look and feel isn't a huge issue, but I probably wouldn't use it to make a public facing site. By the way for those that don't know, TIBCO GI is a completely separate product from the rest of TIBCO's SOA business integration stuff - General Interface was a separate company that was acquired by TIBCO a couple of years ago. | What is your experience using the TIBCO General Interface? It looks interesting and I've played around with it some --- but the development IDE in a web browser seems to be nightmare eventually. Does anyone have experience using it and what are your thoughts? | TITLE:
What is your experience using the TIBCO General Interface?
QUESTION:
It looks interesting and I've played around with it some --- but a development IDE in a web browser seems like it would eventually be a nightmare. Does anyone have experience using it, and what are your thoughts?
ANSWER:
We evaluated GI a few months ago for a project but didn't end up selecting it. The IDE-in-a-browser (which is itself built with GI) actually works surprisingly well, though there are some features you normally expect from an editor that it lacks, most notably (and irritatingly) an Undo command. It's also impossible to do things like subdocument includes (practically a necessity for team development) from the IDE, though you can do them manually in the underlying XML and the IDE will respect them. In the end the main reason we didn't go with it was that it was difficult to make the resulting web application look as good as the designers really wanted. It was relatively easy to build functionality, but the components were very restrictive in look and feel. The way GI renders its own document model to HTML involves a lot of style attributes, which makes skinning in CSS all but impossible. It seems to prefer making web applications that look like applications, instead of web applications that look like websites. So it would probably be great for building intranet-type applications where look and feel isn't a huge issue, but I probably wouldn't use it to make a public-facing site. By the way, for those that don't know, TIBCO GI is a completely separate product from the rest of TIBCO's SOA business integration stuff - General Interface was a separate company that was acquired by TIBCO a couple of years ago.
"cross-browser",
"tibco"
] | 8 | 11 | 3,066 | 5 | 0 | 2008-08-15T18:00:04.020000 | 2008-08-19T14:06:46.810000 |
12,565 | 13,935 | What do the different brackets in Ruby mean? | In Ruby, what's the difference between {} and []? {} seems to be used for both code blocks and hashes. Are [] only for arrays? The documention isn't very clear. | It depends on the context: When on their own, or assigning to a variable, [] creates arrays, and {} creates hashes. e.g. a = [1,2,3] # an array b = {1 => 2} # a hash [] can be overridden as a custom method, and is generally used to fetch things from hashes (the standard library sets up [] as a method on hashes which is the same as fetch ) There is also a convention that it is used as a class method in the same way you might use a static Create method in C# or Java. e.g. a = {1 => 2} # create a hash for example puts a[1] # same as a.fetch(1), will print 2
Hash[1,2,3,4] # this is a custom class method which creates a new hash See the Ruby Hash docs for that last example. This is probably the most tricky one - {} is also syntax for blocks, but only when passed to a method OUTSIDE the arguments parens. When you invoke methods without parens, Ruby looks at where you put the commas to figure out where the arguments end (where the parens would have been, had you typed them) 1.upto(2) { puts 'hello' } # it's a block 1.upto 2 { puts 'hello' } # syntax error, ruby can't figure out where the function args end 1.upto 2, { puts 'hello' } # the comma means "argument", so ruby sees it as a hash - this won't work because puts 'hello' isn't a valid hash | What do the different brackets in Ruby mean? In Ruby, what's the difference between {} and []? {} seems to be used for both code blocks and hashes. Are [] only for arrays? The documention isn't very clear. | TITLE:
What do the different brackets in Ruby mean?
QUESTION:
In Ruby, what's the difference between {} and []? {} seems to be used for both code blocks and hashes. Are [] only for arrays? The documentation isn't very clear.
ANSWER:
It depends on the context:

When on their own, or assigning to a variable, [] creates arrays, and {} creates hashes. e.g.

a = [1,2,3] # an array
b = {1 => 2} # a hash

[] can be overridden as a custom method, and is generally used to fetch things from hashes (the standard library sets up [] as a method on hashes which is the same as fetch). There is also a convention that it is used as a class method in the same way you might use a static Create method in C# or Java. e.g.

a = {1 => 2} # create a hash for example
puts a[1]    # same as a.fetch(1), will print 2

Hash[1,2,3,4] # this is a custom class method which creates a new hash

See the Ruby Hash docs for that last example.

This is probably the most tricky one - {} is also syntax for blocks, but only when passed to a method OUTSIDE the argument parens. When you invoke methods without parens, Ruby looks at where you put the commas to figure out where the arguments end (where the parens would have been, had you typed them).

1.upto(2) { puts 'hello' }  # it's a block
1.upto 2 { puts 'hello' }   # syntax error, ruby can't figure out where the function args end
1.upto 2, { puts 'hello' }  # the comma means "argument", so ruby sees it as a hash - this won't work because puts 'hello' isn't a valid hash
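To make the override point above concrete, here is a small sketch of defining [] on your own class (the class and data are invented):

class Inventory
  def initialize
    @items = { 'widget' => 3 }
  end

  # Defining [] lets callers index the object directly: inv['widget']
  def [](name)
    @items.fetch(name, 0)
  end
end

inv = Inventory.new
puts inv['widget'] # => 3
puts inv['gadget'] # => 0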
"ruby",
"syntax"
] | 88 | 76 | 54,725 | 6 | 0 | 2008-08-15T18:09:52.267000 | 2008-08-17T21:17:44.680000 |
12,569 | 12,571 | Rigor in capturing test cases for unit testing | Let's say we have a simple function defined in a pseudo language. List SortNumbers(List unsorted, bool ascending); We pass in an unsorted list of numbers and a boolean specifying ascending or descending sort order. In return, we get a sorted list of numbers. In my experience, some people are better at capturing boundary conditions than others. The question is, "How do you know when you are 'done' capturing test cases"? We can start listing cases now and some clever person will undoubtedly think of 'one more' case that isn't covered by any of the previous. | Don't waste too much time trying to think of every boundry condition. Your tests won't be able to catch every bug first time around. The idea is to have tests that are pretty good, and then each time a bug does surface, write a new test specifically for that bug so that you never hear from it again. Another note I want to make about code coverage tools. In a language like C# or Java where your have many get/set and similar methods, you should not be shooting for 100% coverage. That means you are wasting too much time writing tests for trivial code. You only want 100% coverage on your complex business logic. If your full codebase is closer to 70-80% coverage, you are doing a good job. If your code coverage tool allows multiple coverage metrics, the best one is 'block coverage' which measures coverage of 'basic blocks'. Other types are class and method coverage (which don't give you as much information) and line coverage (which is too fine grain). | Rigor in capturing test cases for unit testing Let's say we have a simple function defined in a pseudo language. List SortNumbers(List unsorted, bool ascending); We pass in an unsorted list of numbers and a boolean specifying ascending or descending sort order. In return, we get a sorted list of numbers. In my experience, some people are better at capturing boundary conditions than others. The question is, "How do you know when you are 'done' capturing test cases"? We can start listing cases now and some clever person will undoubtedly think of 'one more' case that isn't covered by any of the previous. | TITLE:
Rigor in capturing test cases for unit testing
QUESTION:
Let's say we have a simple function defined in a pseudo language. List SortNumbers(List unsorted, bool ascending); We pass in an unsorted list of numbers and a boolean specifying ascending or descending sort order. In return, we get a sorted list of numbers. In my experience, some people are better at capturing boundary conditions than others. The question is, "How do you know when you are 'done' capturing test cases"? We can start listing cases now and some clever person will undoubtedly think of 'one more' case that isn't covered by any of the previous.
ANSWER:
Don't waste too much time trying to think of every boundary condition. Your tests won't be able to catch every bug the first time around. The idea is to have tests that are pretty good, and then each time a bug does surface, write a new test specifically for that bug so that you never hear from it again. Another note I want to make is about code coverage tools. In a language like C# or Java where you have many get/set and similar methods, you should not be shooting for 100% coverage. That means you are wasting too much time writing tests for trivial code. You only want 100% coverage on your complex business logic. If your full codebase is closer to 70-80% coverage, you are doing a good job. If your code coverage tool allows multiple coverage metrics, the best one is 'block coverage', which measures coverage of 'basic blocks'. Other types are class and method coverage (which don't give you as much information) and line coverage (which is too fine-grained).
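To illustrate the write-a-test-per-bug style, here is a small Python sketch; sort_numbers is a stand-in implementation so the example runs, and the bug it guards against is hypothetical:

import unittest

def sort_numbers(unsorted, ascending=True):
    # Stand-in for the SortNumbers function under test.
    return sorted(unsorted, reverse=not ascending)

class SortNumbersRegressionTests(unittest.TestCase):
    # Added after an imagined bug report: descending sort dropped duplicates.
    def test_descending_keeps_duplicates(self):
        self.assertEqual(sort_numbers([3, 1, 3], ascending=False), [3, 3, 1])

    def test_empty_list_stays_empty(self):
        self.assertEqual(sort_numbers([], ascending=True), [])

if __name__ == '__main__':
    unittest.main()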
"unit-testing",
"testing",
"sorting"
] | 6 | 10 | 1,217 | 5 | 0 | 2008-08-15T18:14:28.327000 | 2008-08-15T18:17:17.387000 |
12,578 | 47,130 | No trace info during processing of a cube in SSAS | When I process a cube in Visual Studio 2005 I get following message: Process succeeded. Trace information is still being transferred. If you do not want to wait for all of the information to arrive press Stop. and no trace info is displayed. Cube is processed OK by it is a little bit annoying. Any ideas? I access cubes via web server. | I get the same message when I process a cube, but if I wait for a few seconds the trace information arrives. Are you dealing with a very large quantity of data or a very complex cube? Maybe this is a silly question, but have you tried waiting a few minutes? | No trace info during processing of a cube in SSAS When I process a cube in Visual Studio 2005 I get following message: Process succeeded. Trace information is still being transferred. If you do not want to wait for all of the information to arrive press Stop. and no trace info is displayed. Cube is processed OK by it is a little bit annoying. Any ideas? I access cubes via web server. | TITLE:
No trace info during processing of a cube in SSAS
QUESTION:
When I process a cube in Visual Studio 2005 I get the following message: "Process succeeded. Trace information is still being transferred. If you do not want to wait for all of the information to arrive press Stop." and no trace info is displayed. The cube is processed OK, but it is a little bit annoying. Any ideas? I access cubes via a web server.
ANSWER:
I get the same message when I process a cube, but if I wait for a few seconds the trace information arrives. Are you dealing with a very large quantity of data or a very complex cube? Maybe this is a silly question, but have you tried waiting a few minutes? | [
"sql-server",
"visual-studio-2005",
"ssas",
"trace",
"olap"
] | 5 | 1 | 3,132 | 3 | 0 | 2008-08-15T18:28:45.350000 | 2008-09-06T00:23:15.010000 |
12,592 | 12,609 | Can you check that an exception is thrown with doctest in Python? | Is it possible to write a doctest unit test that will check that an exception is raised? For example, if I have a function foo(x) that is supposed to raise an exception if x < 0, how would I write the doctest for that? | Yes. You can do it. The doctest module documentation and Wikipedia has an example of it. >>> x Traceback (most recent call last):... NameError: name 'x' is not defined | Can you check that an exception is thrown with doctest in Python? Is it possible to write a doctest unit test that will check that an exception is raised? For example, if I have a function foo(x) that is supposed to raise an exception if x < 0, how would I write the doctest for that? | TITLE:
Can you check that an exception is thrown with doctest in Python?
QUESTION:
Is it possible to write a doctest unit test that will check that an exception is raised? For example, if I have a function foo(x) that is supposed to raise an exception if x < 0, how would I write the doctest for that?
ANSWER:
Yes. You can do it. The doctest module documentation and Wikipedia have an example of it.

>>> x
Traceback (most recent call last):
    ...
NameError: name 'x' is not defined
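Applied to the foo(x) case from the question, a complete doctest might read like this (the function body is invented for illustration):

def foo(x):
    """Double a non-negative number.

    >>> foo(2)
    4
    >>> foo(-1)
    Traceback (most recent call last):
        ...
    ValueError: x must be non-negative
    """
    if x < 0:
        raise ValueError('x must be non-negative')
    return 2 * x

if __name__ == '__main__':
    import doctest
    doctest.testmod()

doctest treats tracebacks specially: the stack lines can be replaced with "...", and only the header line and the final exception line have to match.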
"python",
"doctest"
] | 88 | 117 | 21,181 | 3 | 0 | 2008-08-15T18:43:17.177000 | 2008-08-15T19:04:13.093000 |
12,593 | 12,596 | Does System.Xml use MSXML? | I'm developing a C# application that uses a handful of XML files and some classes in System.Xml. A coworker insists on adding the MSXML6 redistributable to our install, along with the.NET framework but I don't think the.NET framework uses or needs MSXML in anyway. I am well aware that using MSXML from.NET is not supported but I suppose its theoretically possible for System.Xml itself to wrap MSXML at a low level. I haven't found anything definitive that.NET has its own implementation but neither can I find anything to suggest it needs MSXML. Help me settle the debate. Does System.Xml use MSXML? | System.Xml doesn't use MSXML6. They are seperate xml processing engines. See post here: MSXML 6.0 vs. System.Xml: Schema handling differences | Does System.Xml use MSXML? I'm developing a C# application that uses a handful of XML files and some classes in System.Xml. A coworker insists on adding the MSXML6 redistributable to our install, along with the.NET framework but I don't think the.NET framework uses or needs MSXML in anyway. I am well aware that using MSXML from.NET is not supported but I suppose its theoretically possible for System.Xml itself to wrap MSXML at a low level. I haven't found anything definitive that.NET has its own implementation but neither can I find anything to suggest it needs MSXML. Help me settle the debate. Does System.Xml use MSXML? | TITLE:
Does System.Xml use MSXML?
QUESTION:
I'm developing a C# application that uses a handful of XML files and some classes in System.Xml. A coworker insists on adding the MSXML6 redistributable to our install, along with the .NET Framework, but I don't think the .NET Framework uses or needs MSXML in any way. I am well aware that using MSXML from .NET is not supported, but I suppose it's theoretically possible for System.Xml itself to wrap MSXML at a low level. I haven't found anything definitive that .NET has its own implementation, but neither can I find anything to suggest it needs MSXML. Help me settle the debate. Does System.Xml use MSXML?
ANSWER:
System.Xml doesn't use MSXML6. They are separate XML processing engines. See this post: MSXML 6.0 vs. System.Xml: Schema handling differences
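A quick way to see this for yourself: the snippet below touches only managed System.Xml types, with no COM interop involved (the XML payload is just an example):

using System;
using System.Xml;

class Demo
{
    static void Main()
    {
        // Parse and query entirely inside the managed System.Xml engine.
        var doc = new XmlDocument();
        doc.LoadXml("<config><item>42</item></config>");
        Console.WriteLine(doc.SelectSingleNode("//item").InnerText);
    }
}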
".net",
"xml",
"msxml"
] | 11 | 14 | 3,627 | 5 | 0 | 2008-08-15T18:45:07.003000 | 2008-08-15T18:47:01.617000 |
12,594 | 12,599 | Windows/C++: How do I determine the share name associated with a shared drive? | Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$ ) is. To find out if it's shared, I can use NetShareCheck. How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input. | If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each. | Windows/C++: How do I determine the share name associated with a shared drive? Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$ ) is. To find out if it's shared, I can use NetShareCheck. How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input. | TITLE:
Windows/C++: How do I determine the share name associated with a shared drive?
QUESTION:
Let's say I have a drive such as C:\, and I want to find out if it's shared and what its share name (e.g. C$) is. To find out if it's shared, I can use NetShareCheck. How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input.
ANSWER:
If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each. | [
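A rough sketch of that enumerate-and-match approach, with minimal error handling (enumerating at level 2 generally requires administrative rights, and "C:\" is just the example drive):

#include <windows.h>
#include <lm.h>
#include <cstdio>
#pragma comment(lib, "netapi32.lib")

int main()
{
    PSHARE_INFO_2 buf = NULL;
    DWORD entriesRead = 0, totalEntries = 0, resume = 0;

    // NULL server name means the local machine.
    if (NetShareEnum(NULL, 2, (LPBYTE*)&buf, MAX_PREFERRED_LENGTH,
                     &entriesRead, &totalEntries, &resume) == NERR_Success)
    {
        for (DWORD i = 0; i < entriesRead; ++i)
        {
            // shi2_path is the local path backing the share, e.g. "C:\".
            if (_wcsicmp(buf[i].shi2_path, L"C:\\") == 0)
                wprintf(L"C:\\ is shared as %s\n", buf[i].shi2_netname);
        }
        NetApiBufferFree(buf);
    }
    return 0;
}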
"c++",
"windows",
"networking",
"share"
] | 1 | 3 | 1,712 | 3 | 0 | 2008-08-15T18:45:36.190000 | 2008-08-15T18:55:15.297000 |
12,603 | 12,606 | Why was the Profile provider not built into Web Apps? | If you create an ASP.NET web file project you have direct access to the Profile information in the web.config file. If you convert that to a Web App and have been using ProfileCommon etc. then you have to jump through a whole bunch of hoops to get your web app to work. Why wasn't the Profile provider built into the ASP.NET web app projects like it was with the web file projects? | The profile provider uses the ASP.NET Build Provider system, which doesn't work with Web Application Projects. Adding a customized BuildProvider class to the Web.config file works in an ASP.NET Web site but does not work in an ASP.NET Web application project. In a Web application project, the code that is generated by the BuildProvider class cannot be included in the application. source: MSDN Build Provider documentation | Why was the Profile provider not built into Web Apps? If you create an ASP.NET web file project you have direct access to the Profile information in the web.config file. If you convert that to a Web App and have been using ProfileCommon etc. then you have to jump through a whole bunch of hoops to get your web app to work. Why wasn't the Profile provider built into the ASP.NET web app projects like it was with the web file projects? | TITLE:
Why was the Profile provider not built into Web Apps?
QUESTION:
If you create an ASP.NET web file project you have direct access to the Profile information in the web.config file. If you convert that to a Web App and have been using ProfileCommon etc. then you have to jump through a whole bunch of hoops to get your web app to work. Why wasn't the Profile provider built into the ASP.NET web app projects like it was with the web file projects?
ANSWER:
The profile provider uses the ASP.NET Build Provider system, which doesn't work with Web Application Projects. Adding a customized BuildProvider class to the Web.config file works in an ASP.NET Web site but does not work in an ASP.NET Web application project. In a Web application project, the code that is generated by the BuildProvider class cannot be included in the application. source: MSDN Build Provider documentation | [
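For reference, a Web Application Project can still read profile properties by going through ProfileBase instead of the generated ProfileCommon class; the property name and type below are examples:

<!-- Web.config, inside <system.web> -->
<profile>
  <properties>
    <add name="Theme" type="System.String" />
  </properties>
</profile>

// In code, instead of the strongly typed Profile.Theme:
string theme = (string)HttpContext.Current.Profile.GetPropertyValue("Theme");
HttpContext.Current.Profile.SetPropertyValue("Theme", "dark");
HttpContext.Current.Profile.Save();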
"asp.net"
] | 2 | 3 | 1,016 | 2 | 0 | 2008-08-15T18:58:54.430000 | 2008-08-15T19:02:40.430000 |
12,612 | 12,707 | Access Control Lists & Access Control Objects, good tutorial? | we're developing a web app to cover all aspects of a printing company from finances, to payroll, to job costing. Its important to be able to control who can access what parts of these applications. Don't want a line employee giving himself a raise, etc... I've heard of the concept of ACL & ACO, but haven't found a good example that we could adapt to our project. Anyone know where I can find good information to work from? | A brief rundown on ACLs, where they should be used and how they should be structured and implemented for various applications and user levels can be found here: LINK | Access Control Lists & Access Control Objects, good tutorial? we're developing a web app to cover all aspects of a printing company from finances, to payroll, to job costing. Its important to be able to control who can access what parts of these applications. Don't want a line employee giving himself a raise, etc... I've heard of the concept of ACL & ACO, but haven't found a good example that we could adapt to our project. Anyone know where I can find good information to work from? | TITLE:
Access Control Lists & Access Control Objects, good tutorial?
QUESTION:
We're developing a web app to cover all aspects of a printing company, from finances to payroll to job costing. It's important to be able to control who can access what parts of these applications. We don't want a line employee giving himself a raise, etc... I've heard of the concepts of ACL & ACO, but haven't found a good example that we could adapt to our project. Anyone know where I can find good information to work from?
ANSWER:
A brief rundown on ACLs, where they should be used and how they should be structured and implemented for various applications and user levels can be found here: LINK | [
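As a starting point for the structure, many web apps reduce ACL checks to a role-to-permission map; this PHP sketch is purely illustrative (real systems usually keep the map in database tables):

<?php
$acl = array(
    'payroll_admin' => array('payroll.view', 'payroll.edit'),
    'line_employee' => array('jobs.view'),
);

function canAccess(array $acl, $role, $permission) {
    return isset($acl[$role]) && in_array($permission, $acl[$role], true);
}

var_dump(canAccess($acl, 'line_employee', 'payroll.edit')); // bool(false)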
"permissions",
"acl"
] | 2 | 2 | 850 | 3 | 0 | 2008-08-15T19:11:31.800000 | 2008-08-15T20:31:23.557000 |
12,633 | 12,677 | What is the easiest way to parse an INI File in C++? | I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually? | You can use the Windows API functions, such as GetPrivateProfileString() and GetPrivateProfileInt(). | What is the easiest way to parse an INI File in C++? I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually? | TITLE:
What is the easiest way to parse an INI File in C++?
QUESTION:
I'm trying to parse an INI file using C++. Any tips on what is the best way to achieve this? Should I use the Windows API tools for INI file processing (with which I am totally unfamiliar), an open-source solution or attempt to parse it manually?
ANSWER:
You can use the Windows API functions, such as GetPrivateProfileString() and GetPrivateProfileInt(). | [
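If it helps, here is a minimal sketch of those calls (the section, key names, and INI path are invented; note that these functions resolve relative paths against the Windows directory, so pass a full path):

#include <windows.h>
#include <iostream>

int main()
{
    wchar_t server[256] = L"";

    // Copies the value of [database] server= into the buffer,
    // falling back to "localhost" if the key is missing.
    GetPrivateProfileStringW(L"database", L"server", L"localhost",
                             server, 256, L"C:\\app\\settings.ini");

    // Reads [database] port= as an integer, defaulting to 3306.
    int port = GetPrivateProfileIntW(L"database", L"port", 3306,
                                     L"C:\\app\\settings.ini");

    std::wcout << server << L":" << port << std::endl;
    return 0;
}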
"c++",
"winapi",
"ini",
"fileparse"
] | 93 | 120 | 158,036 | 13 | 0 | 2008-08-15T19:31:01.747000 | 2008-08-15T20:06:53.140000 |
12,642 | 12,691 | Upload binary data with Silverlight 2b2 | I am trying to upload a file or stream of data to our web server and I cant find a decent way of doing this. I have tried both WebClient and WebRequest both have their problems. WebClient Nice and easy but you do not get any notification that the asynchronous upload has completed, and the UploadProgressChanged event doesnt get called back with anything useful. The alternative is to convert your binary data to a string and use UploadStringASync because then at least you get a UploadStringCompleted, problem is you need a lot of ram for big files as its encoding all the data and uploading it in one go. HttpWebRequest Bit more complicated but still does what is needed, problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed which doesnt seem quite right. Normal.net does have some appropriate WebClient methods for OnUploadDataCompleted and progress but these arent available in Silverlight.net... big omission I think! Does anyone have any solutions, I need to upload multiple binary files preferrably with a progress but I need to perform some actions when the files have completed their upload. Look forward to some help with this. | The way i get around it is through INotifyPropertyChanged and event notification. The essentials: public void DoIt(){ this.IsUploading = True;
    WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));
    postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);
}

private void RequestOpened(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    req.BeginGetResponse(new AsyncCallback(GetResponse), req);
}

private void GetResponse(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    string serverresult = string.Empty;
    WebResponse postResponse = req.EndGetResponse(result);
    StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());
    this.IsUploading = false;
}

private bool _IsUploading;
public bool IsUploading
{
    get { return _IsUploading; }
    private set
    {
        _IsUploading = value;
        OnPropertyChanged("IsUploading");
    }
}

Right now Silverlight is a PITA because of the double and triple async calls.
Upload binary data with Silverlight 2b2
QUESTION:
I am trying to upload a file or stream of data to our web server and I can't find a decent way of doing this. I have tried both WebClient and WebRequest; both have their problems.

WebClient: Nice and easy, but you do not get any notification that the asynchronous upload has completed, and the UploadProgressChanged event doesn't get called back with anything useful. The alternative is to convert your binary data to a string and use UploadStringAsync, because then at least you get a UploadStringCompleted; the problem is you need a lot of RAM for big files, as it's encoding all the data and uploading it in one go.

HttpWebRequest: A bit more complicated but still does what is needed. The problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed, which doesn't seem quite right. Normal .NET does have some appropriate WebClient methods for OnUploadDataCompleted and progress, but these aren't available in Silverlight .NET... a big omission, I think!

Does anyone have any solutions? I need to upload multiple binary files, preferably with progress, but I need to perform some actions when the files have completed their upload. Look forward to some help with this.
ANSWER:
The way I get around it is through INotifyPropertyChanged and event notification. The essentials:

public void DoIt()
{
    this.IsUploading = true;
    WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));
    postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);
}

private void RequestOpened(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    req.BeginGetResponse(new AsyncCallback(GetResponse), req);
}

private void GetResponse(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    string serverresult = string.Empty;
    WebResponse postResponse = req.EndGetResponse(result);
    StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());
    this.IsUploading = false;
}

private bool _IsUploading;
public bool IsUploading
{
    get { return _IsUploading; }
    private set
    {
        _IsUploading = value;
        OnPropertyChanged("IsUploading");
    }
}

Right now Silverlight is a PITA because of the double and triple async calls.
".net",
"silverlight"
] | 4 | 1 | 1,748 | 4 | 0 | 2008-08-15T19:37:57.863000 | 2008-08-15T20:17:44.257000 |
12,647 | 28,589 | How do I tell if a variable has a numeric value in Perl? | Is there a simple way in Perl that will allow me to determine if a given variable is numeric? Something along the lines of: if (is_number($x)) {... } would be ideal. A technique that won't throw warnings when the -w switch is being used is certainly preferred. | Use Scalar::Util::looks_like_number() which uses the internal Perl C API's looks_like_number() function, which is probably the most efficient way to do this. Note that the strings "inf" and "infinity" are treated as numbers. Example: #!/usr/bin/perl
use warnings; use strict;
use Scalar::Util qw(looks_like_number);
my @exprs = qw(1 5.25 0.001 1.3e8 foo bar 1dd inf infinity);
foreach my $expr (@exprs) { print "$expr is", looks_like_number($expr)? '': ' not', " a number\n"; } Gives this output: 1 is a number 5.25 is a number 0.001 is a number 1.3e8 is a number foo is not a number bar is not a number 1dd is not a number inf is a number infinity is a number See also: perldoc Scalar::Util perldoc perlapi for looks_like_number | How do I tell if a variable has a numeric value in Perl? Is there a simple way in Perl that will allow me to determine if a given variable is numeric? Something along the lines of: if (is_number($x)) {... } would be ideal. A technique that won't throw warnings when the -w switch is being used is certainly preferred. | TITLE:
How do I tell if a variable has a numeric value in Perl?
QUESTION:
Is there a simple way in Perl that will allow me to determine if a given variable is numeric? Something along the lines of: if (is_number($x)) {... } would be ideal. A technique that won't throw warnings when the -w switch is being used is certainly preferred.
ANSWER:
Use Scalar::Util::looks_like_number() which uses the internal Perl C API's looks_like_number() function, which is probably the most efficient way to do this. Note that the strings "inf" and "infinity" are treated as numbers. Example: #!/usr/bin/perl
use warnings; use strict;
use Scalar::Util qw(looks_like_number);
my @exprs = qw(1 5.25 0.001 1.3e8 foo bar 1dd inf infinity);
foreach my $expr (@exprs) { print "$expr is", looks_like_number($expr)? '': ' not', " a number\n"; } Gives this output: 1 is a number 5.25 is a number 0.001 is a number 1.3e8 is a number foo is not a number bar is not a number 1dd is not a number inf is a number infinity is a number See also: perldoc Scalar::Util perldoc perlapi for looks_like_number | [
"perl",
"numbers"
] | 102 | 151 | 163,968 | 15 | 0 | 2008-08-15T19:43:15.293000 | 2008-08-26T16:53:34.377000 |
12,656 | 13,092 | Variable parameter/result binding with prepared statements | In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can. Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind. Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure). | In PHP you can pass a variable number of arguments to a function or method by using call_user_func_array. An example for a method would be: call_user_func_array(array(&$stmt, 'bindparams'), $array_of_params); The function will be called with each member in the array passed as its own argument. | Variable parameter/result binding with prepared statements In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can. Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind. Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure). | TITLE:
Variable parameter/result binding with prepared statements
QUESTION:
In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can. Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_params() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind. Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure).
ANSWER:
In PHP you can pass a variable number of arguments to a function or method by using call_user_func_array. An example for a method would be:

call_user_func_array(array(&$stmt, 'bind_param'), $array_of_params);

The function will be called with each member in the array passed as its own argument.
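One wrinkle worth knowing if the target is mysqli_stmt::bind_param: it takes its variables by reference, so the array handed to call_user_func_array must contain references. A sketch with invented credentials, query, and values:

<?php
$mysqli = new mysqli('localhost', 'user', 'pass', 'shop'); // placeholders
$stmt = $mysqli->prepare('SELECT id FROM products WHERE name = ? AND price < ?');

$types  = 'sd'; // string, double
$params = array('widget', 9.99);

$args = array($types);
foreach ($params as $i => $value) {
    $args[] = &$params[$i]; // bind_param requires references
}
call_user_func_array(array($stmt, 'bind_param'), $args);
$stmt->execute();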
"php",
"mysql",
"mysqli"
] | 9 | 12 | 5,989 | 5 | 0 | 2008-08-15T19:50:52.517000 | 2008-08-16T10:58:05.587000 |
12,657 | 12,735 | Can I create a ListView with dynamic GroupItemCount? | I'm using the new ASP.Net ListView control to list database items that will be grouped together in sections based on one of their columns like so: region1 store1 store2 store3 region2 store4 region3 store5 store6 Is this possible to do with the ListView's GroupItemTemplate? Every example I have seen uses a static number of items per group, which won't work for me. Am I misunderstanding the purpose of the GroupItem? | I haven't used GroupItemCount, but I have taken this example written up by Matt Berseth titled Building a Grouping Grid with the ASP.NET 3.5 LinqDataSource and ListView Controls and have grouped items by a key just like you want. It involves using an outer and inner ListView control. Works great, give it a try. | Can I create a ListView with dynamic GroupItemCount? I'm using the new ASP.Net ListView control to list database items that will be grouped together in sections based on one of their columns like so: region1 store1 store2 store3 region2 store4 region3 store5 store6 Is this possible to do with the ListView's GroupItemTemplate? Every example I have seen uses a static number of items per group, which won't work for me. Am I misunderstanding the purpose of the GroupItem? | TITLE:
Can I create a ListView with dynamic GroupItemCount?
QUESTION:
I'm using the new ASP.Net ListView control to list database items that will be grouped together in sections based on one of their columns like so: region1 (store1, store2, store3), region2 (store4), region3 (store5, store6). Is this possible to do with the ListView's GroupItemTemplate? Every example I have seen uses a static number of items per group, which won't work for me. Am I misunderstanding the purpose of the GroupItem?
ANSWER:
I haven't used GroupItemCount, but I have taken this example written up by Matt Berseth titled Building a Grouping Grid with the ASP.NET 3.5 LinqDataSource and ListView Controls and have grouped items by a key just like you want. It involves using an outer and inner ListView control. Works great, give it a try. | [
"asp.net",
".net-3.5",
"listview"
] | 2 | 2 | 4,233 | 3 | 0 | 2008-08-15T19:51:05.417000 | 2008-08-15T20:56:08.033000 |
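A rough sketch of that outer/inner ListView approach (the control IDs and the Stores property are made up -- it assumes each bound region row exposes its stores as a child collection, for example from a LINQ group):

<asp:ListView ID="RegionList" runat="server">
    <LayoutTemplate>
        <ul><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
    </LayoutTemplate>
    <ItemTemplate>
        <li>
            <%# Eval("Region") %>
            <asp:ListView ID="StoreList" runat="server" DataSource='<%# Eval("Stores") %>'>
                <LayoutTemplate>
                    <ul><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
                </LayoutTemplate>
                <ItemTemplate><li><%# Eval("Name") %></li></ItemTemplate>
            </asp:ListView>
        </li>
    </ItemTemplate>
</asp:ListView>

Because the inner ListView renders whatever its group contains, the number of items per group comes from the data rather than from a fixed GroupItemCount.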
12,661 | 12,822 | Efficient JPEG Image Resizing in PHP | What's the most efficient way to resize large images in PHP? I'm currently using the GD function imagecopyresampled to take high resolution images, and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall). This works great on small (under 2 MB) photos and the entire resize operation takes less than a second on the server. However, the site will eventually service photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels in size). Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as ImageMagick? Right now, the resize code looks something like this function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) { // Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it // and places it at endfile (path/to/thumb.jpg).
// Load image and get image size. $img = imagecreatefromjpeg($sourcefile); $width = imagesx( $img ); $height = imagesy( $img );
if ($width > $height) { $newwidth = $thumbwidth; $divisor = $width / $thumbwidth; $newheight = floor( $height / $divisor); } else { $newheight = $thumbheight; $divisor = $height / $thumbheight; $newwidth = floor( $width / $divisor ); }
// Create a new temporary image. $tmpimg = imagecreatetruecolor( $newwidth, $newheight );
// Copy and resize old image into new image. imagecopyresampled( $tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height );
// Save thumbnail into a file. imagejpeg( $tmpimg, $endfile, $quality);
// release the memory imagedestroy($tmpimg); imagedestroy($img); | People say that ImageMagick is much faster. Your best bet is to compare both libraries yourself and measure it. Prepare 1000 typical images. Write two scripts -- one for GD, one for ImageMagick. Run both of them a few times. Compare the results (total execution time, CPU and I/O usage, resulting image quality). What is best for everyone else might not be the best for you. Also, in my opinion, ImageMagick has a much better API. | Efficient JPEG Image Resizing in PHP What's the most efficient way to resize large images in PHP? I'm currently using the GD function imagecopyresampled to take high resolution images, and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall). This works great on small (under 2 MB) photos and the entire resize operation takes less than a second on the server. However, the site will eventually service photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels in size). Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as ImageMagick? Right now, the resize code looks something like this function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) { // Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it // and places it at endfile (path/to/thumb.jpg).
// Load image and get image size. $img = imagecreatefromjpeg($sourcefile); $width = imagesx( $img ); $height = imagesy( $img );
if ($width > $height) { $newwidth = $thumbwidth; $divisor = $width / $thumbwidth; $newheight = floor( $height / $divisor); } else { $newheight = $thumbheight; $divisor = $height / $thumbheight; $newwidth = floor( $width / $divisor ); }
// Create a new temporary image. $tmpimg = imagecreatetruecolor( $newwidth, $newheight );
// Copy and resize old image into new image. imagecopyresampled( $tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height );
// Save thumbnail into a file. imagejpeg( $tmpimg, $endfile, $quality);
// release the memory imagedestroy($tmpimg); imagedestroy($img); | TITLE:
Efficient JPEG Image Resizing in PHP
QUESTION:
What's the most efficient way to resize large images in PHP? I'm currently using the GD function imagecopyresampled to take high resolution images, and cleanly resize them down to a size for web viewing (roughly 700 pixels wide by 700 pixels tall). This works great on small (under 2 MB) photos and the entire resize operation takes less than a second on the server. However, the site will eventually service photographers who may be uploading images up to 10 MB in size (or images up to 5000x4000 pixels in size). Doing this kind of resize operation with large images tends to increase the memory usage by a very large margin (larger images can spike the memory usage for the script past 80 MB). Is there any way to make this resize operation more efficient? Should I be using an alternate image library such as ImageMagick? Right now, the resize code looks something like this function makeThumbnail($sourcefile, $endfile, $thumbwidth, $thumbheight, $quality) { // Takes the sourcefile (path/to/image.jpg) and makes a thumbnail from it // and places it at endfile (path/to/thumb.jpg).
// Load image and get image size. $img = imagecreatefromjpeg($sourcefile); $width = imagesx( $img ); $height = imagesy( $img );
if ($width > $height) { $newwidth = $thumbwidth; $divisor = $width / $thumbwidth; $newheight = floor( $height / $divisor); } else { $newheight = $thumbheight; $divisor = $height / $thumbheight; $newwidth = floor( $width / $divisor ); }
// Create a new temporary image. $tmpimg = imagecreatetruecolor( $newwidth, $newheight );
// Copy and resize old image into new image. imagecopyresampled( $tmpimg, $img, 0, 0, 0, 0, $newwidth, $newheight, $width, $height );
// Save thumbnail into a file. imagejpeg( $tmpimg, $endfile, $quality);
// release the memory imagedestroy($tmpimg); imagedestroy($img);
ANSWER:
People say that ImageMagick is much faster. Your best bet is to compare both libraries yourself and measure it. Prepare 1000 typical images. Write two scripts -- one for GD, one for ImageMagick. Run both of them a few times. Compare the results (total execution time, CPU and I/O usage, resulting image quality). What is best for everyone else might not be the best for you. Also, in my opinion, ImageMagick has a much better API. | [
"php",
"image",
"gd",
"jpeg"
] | 84 | 48 | 90,080 | 9 | 0 | 2008-08-15T19:55:17 | 2008-08-15T22:08:38.993000 |
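If ImageMagick is available on the server, a hedged sketch of pushing the resize out of the PHP process entirely (it assumes the convert binary is on the PATH; -resize fits the image inside the bounding box while preserving aspect ratio):

$src = escapeshellarg($sourcefile);
$dst = escapeshellarg($endfile);
// the resize runs in a separate process, so the PHP script's memory usage stays flat
exec("convert $src -resize 700x700 -quality 80 $dst");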
12,669 | 12,693 | Resources for getting started with web development? | Let's say I woke up today and wanted to create a clone of StackOverflow.com, and reap the financial windfall of millions of $0.02 ad clicks. Where do I start? My understanding of web technologies is: HTML is what is ultimately displayed CSS is a mechanism for making HTML look pleasing ASP.NET lets you add functionality using .NET(?) JavaScript does stuff AJAX does asynchronous stuff... and the list goes on! To write a good website, do I just need to buy seven books and read them all? Are Web 2.0 sites really the synergy of all these technologies? Where does someone go to get started down the path to creating professional-looking web sites, and what steps are there along the way? | I think that this series of Opera Articles will give you a good idea of web standards and basic concepts of web development. 2014 update: the Opera docs were relocated in 2012 to this section of webplatform.org: http://docs.webplatform.org/wiki/Main_Page | Resources for getting started with web development? Let's say I woke up today and wanted to create a clone of StackOverflow.com, and reap the financial windfall of millions of $0.02 ad clicks. Where do I start? My understanding of web technologies is: HTML is what is ultimately displayed CSS is a mechanism for making HTML look pleasing ASP.NET lets you add functionality using .NET(?) JavaScript does stuff AJAX does asynchronous stuff... and the list goes on! To write a good website, do I just need to buy seven books and read them all? Are Web 2.0 sites really the synergy of all these technologies? Where does someone go to get started down the path to creating professional-looking web sites, and what steps are there along the way? | TITLE:
Resources for getting started with web development?
QUESTION:
Let's say I woke up today and wanted to create a clone of StackOverflow.com, and reap the financial windfall of millions of $0.02 ad clicks. Where do I start? My understanding of web technologies is: HTML is what is ultimately displayed CSS is a mechanism for making HTML look pleasing ASP.NET lets you add functionality using .NET(?) JavaScript does stuff AJAX does asynchronous stuff... and the list goes on! To write a good website, do I just need to buy seven books and read them all? Are Web 2.0 sites really the synergy of all these technologies? Where does someone go to get started down the path to creating professional-looking web sites, and what steps are there along the way?
ANSWER:
I think that this series of Opera Articles will give you a good idea of web standards and basic concepts of web development. 2014 update: the Opera docs were relocated in 2012 to this section of webplatform.org: http://docs.webplatform.org/wiki/Main_Page | [
"language-agnostic"
] | 14 | 9 | 8,107 | 10 | 0 | 2008-08-15T20:00:28.660000 | 2008-08-15T20:18:46.860000 |
12,671 | 12,700 | How can I pass data from an aspx page to an ascx modal popup? | I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar. I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page. When the button is clicked, the modal popup opens but the Button_Click event is never fired, so the modal doesn't get its session data. Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup. Any thoughts? | All a usercontrol (.ascx) file is is a set of controls that you have grouped together to provide some reusable functionality. The controls defined in it are still added to the page's control collection (.aspx) during the page lifecycle. The ModalPopupExtender uses javascript and dhtml to show and hide the controls in the usercontrol client-side. What you are seeing is that the click event is being handled client-side by the ModalPopupExtender and it is canceling the post-back to the server. This is the default behavior by design. You certainly can access the page's control collection from the code-behind of your usercontrol though because it is all part of the same control tree. Just use the FindControl(xxx) method of any control to search for the child of it you need. | How can I pass data from an aspx page to an ascx modal popup? I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar. I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page. When the button is clicked, the modal popup opens but the Button_Click event is never fired, so the modal doesn't get its session data. Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup. Any thoughts? | TITLE:
How can I pass data from an aspx page to an ascx modal popup?
QUESTION:
I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar. I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page. When the button is clicked, the modal popup opens but the Button_Click event is never fired, so the modal doesn't get its session data. Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup. Any thoughts?
ANSWER:
All a usercontrol (.ascx) file is is a set of controls that you have grouped together to provide some reusable functionality. The controls defined in it are still added to the page's control collection (.aspx) during the page lifecycle. The ModalPopupExtender uses javascript and dhtml to show and hide the controls in the usercontrol client-side. What you are seeing is that the click event is being handled client-side by the ModalPopupExtender and it is canceling the post-back to the server. This is the default behavior by design. You certainly can access the page's control collection from the code-behind of your usercontrol though because it is all part of the same control tree. Just use the FindControl(xxx) method of any control to search for the child of it you need. | [
"c#",
"asp.net",
"asp.net-ajax"
] | 6 | 3 | 6,897 | 3 | 0 | 2008-08-15T20:01:23.010000 | 2008-08-15T20:25:28.017000 |
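A hedged sketch of that FindControl approach from the .ascx code-behind (the grid and checkbox IDs are invented; if the grid sits inside another naming container, such as a master page ContentPlaceHolder, a single FindControl call won't reach it and you will need a recursive search):

// In the usercontrol's code-behind: the page and the usercontrol share one control tree
GridView grid = (GridView)Page.FindControl("ItemsGrid");
if (grid != null)
{
    List<int> checkedIds = new List<int>();
    foreach (GridViewRow row in grid.Rows)
    {
        CheckBox check = (CheckBox)row.FindControl("SelectCheckBox");
        if (check != null && check.Checked)
        {
            checkedIds.Add((int)grid.DataKeys[row.RowIndex].Value);
        }
    }
    Session["CheckedItems"] = checkedIds;
}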
12,692 | 12,713 | IronPython and ASP.NET | Has anyone built a website with IronPython and ASP.NET. What were your experiences and is the combination ready for prime-time? | The current version of ASP.NET integration for IronPython is not very up-to-date and is more of a "proof-of-concept." I don't think I'd build a production website based on it. Edit: I have a very high level of expectation for how things like this should work, and might be setting the bar a little high. Maybe you should take what's in "ASP.NET Futures", write a test application for it and see how it works for you. If you're successful, I'd like to hear about it. Otherwise, I think there should be a newer CTP of this in the next six months. (I'm a developer on IronPython and IronRuby.) Edit 2: Since I originally posted this, a newer version has been released. | IronPython and ASP.NET Has anyone built a website with IronPython and ASP.NET. What were your experiences and is the combination ready for prime-time? | TITLE:
IronPython and ASP.NET
QUESTION:
Has anyone built a website with IronPython and ASP.NET. What were your experiences and is the combination ready for prime-time?
ANSWER:
The current version of ASP.NET integration for IronPython is not very up-to-date and is more of a "proof-of-concept." I don't think I'd build a production website based on it. Edit: I have a very high level of expectation for how things like this should work, and might be setting the bar a little high. Maybe you should take what's in "ASP.NET Futures", write a test application for it and see how it works for you. If you're successful, I'd like to hear about it. Otherwise, I think there should be a newer CTP of this in the next six months. (I'm a developer on IronPython and IronRuby.) Edit 2: Since I originally posted this, a newer version has been released. | [
"asp.net",
"ironpython"
] | 7 | 7 | 755 | 3 | 0 | 2008-08-15T20:17:49.450000 | 2008-08-15T20:37:24.920000 |
12,702 | 42,332 | Returning DataTables in WCF/.NET | I have a WCF service from which I want to return a DataTable. I know that this is often a highly-debated topic, as far as whether or not returning DataTables is a good practice. Let's put that aside for a moment. When I create a DataTable from scratch, as below, there are no problems whatsoever. The table is created, populated, and returned to the client, and all is well: [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); for(int i=0;i<100;i++) { tbl.Columns.Add(i.ToString()); tbl.Rows.Add(new string[]{"testValue"}); } return tbl; } However, as soon as I go out and hit the database to create the table, as below, I get a CommunicationException "The underlying connection was closed: The connection was closed unexpectedly." [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); //Populate table with SQL query
return tbl; } The table is being populated correctly on the server side. It is significantly smaller than the test table that I looped through and returned, and the query is small and fast - there is no issue here with timeouts or large data transfer. The same exact functions and DataContracts/ServiceContracts/BehaviorContracts are being used. Why would the way that the table is being populated have any bearing on the table returning successfully? | For anyone having similar problems, I have solved my issue. It was several-fold. As Darren suggested and Paul backed up, the Max..Size properties in the configuration needed to be enlarged. The SvcTraceViewer utility helped in determining this, but it still does not always give the most helpful error messages. It also appears that when the Service Reference is updated on the client side, the configuration will sometimes not update properly (e.g. Changing config values on the server will not always properly update on the client. I had to go in and change the Max..Size properties multiple times on both the client and server sides in the course of my debugging) For a DataTable to be serializable, it needs to be given a name. The default constructor does not give the table a name, so: return new DataTable(); will not be serializable, while: return new DataTable("someName"); will name the table whatever is passed as the parameter. Note that a table can be given a name at any time by assigning a string to the TableName property of the DataTable. var table = new DataTable(); table.TableName = "someName"; Hopefully that will help someone. | Returning DataTables in WCF/.NET I have a WCF service from which I want to return a DataTable. I know that this is often a highly-debated topic, as far as whether or not returning DataTables is a good practice. Let's put that aside for a moment. When I create a DataTable from scratch, as below, there are no problems whatsoever. The table is created, populated, and returned to the client, and all is well: [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); for(int i=0;i<100;i++) { tbl.Columns.Add(i.ToString()); tbl.Rows.Add(new string[]{"testValue"}); } return tbl; } However, as soon as I go out and hit the database to create the table, as below, I get a CommunicationException "The underlying connection was closed: The connection was closed unexpectedly." [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); //Populate table with SQL query
return tbl; } The table is being populated correctly on the server side. It is significantly smaller than the test table that I looped through and returned, and the query is small and fast - there is no issue here with timeouts or large data transfer. The same exact functions and DataContracts/ServiceContracts/BehaviorContracts are being used. Why would the way that the table is being populated have any bearing on the table returning successfully? | TITLE:
Returning DataTables in WCF/.NET
QUESTION:
I have a WCF service from which I want to return a DataTable. I know that this is often a highly-debated topic, as far as whether or not returning DataTables is a good practice. Let's put that aside for a moment. When I create a DataTable from scratch, as below, there are no problems whatsoever. The table is created, populated, and returned to the client, and all is well: [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); for(int i=0;i<100;i++) { tbl.Columns.Add(i.ToString()); tbl.Rows.Add(new string[]{"testValue"}); } return tbl; } However, as soon as I go out and hit the database to create the table, as below, I get a CommunicationException "The underlying connection was closed: The connection was closed unexpectedly." [OperationContract] public DataTable GetTbl() { DataTable tbl = new DataTable("testTbl"); //Populate table with SQL query
return tbl; } The table is being populated correctly on the server side. It is significantly smaller than the test table that I looped through and returned, and the query is small and fast - there is no issue here with timeouts or large data transfer. The same exact functions and DataContracts/ServiceContracts/BehaviorContracts are being used. Why would the way that the table is being populated have any bearing on the table returning successfully?
ANSWER:
For anyone having similar problems, I have solved my issue. It was several-fold. As Darren suggested and Paul backed up, the Max..Size properties in the configuration needed to be enlarged. The SvcTraceViewer utility helped in determining this, but it still does not always give the most helpful error messages. It also appears that when the Service Reference is updated on the client side, the configuration will sometimes not update properly (e.g. Changing config values on the server will not always properly update on the client. I had to go in and change the Max..Size properties multiple times on both the client and server sides in the course of my debugging) For a DataTable to be serializable, it needs to be given a name. The default constructor does not give the table a name, so: return new DataTable(); will not be serializable, while: return new DataTable("someName"); will name the table whatever is passed as the parameter. Note that a table can be given a name at any time by assigning a string to the TableName property of the DataTable. var table = new DataTable(); table.TableName = "someName"; Hopefully that will help someone. | [
"c#",
".net",
"wcf",
"web-services",
"datatable"
] | 51 | 85 | 57,442 | 8 | 0 | 2008-08-15T20:26:01.447000 | 2008-09-03T19:14:32.540000 |
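For reference, the Max..Size adjustments from that answer land in config roughly like this (a sketch only -- the binding name and limits are illustrative, and the same values must be mirrored in the client's config):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeMessageBinding"
               maxReceivedMessageSize="10485760"
               maxBufferSize="10485760">
        <readerQuotas maxArrayLength="10485760" maxStringContentLength="10485760" />
      </binding>
    </basicHttpBinding>
  </bindings>
</system.serviceModel>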
12,706 | 12,719 | Best way to custom edit records in ASP.NET? | I'm coming from a Rails background and doing some work on an ASP.NET project (not ASP MVC). Newbie question: what's the easiest way to make a custom editor for a table of records? For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in. In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo". I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated. | You can REALLY cheat nowadays and take a peek at the new Dynamic Data that comes with .NET 3.5 SP1. Scott Guthrie has a blog entry demoing how quick and easy it'll flow for you here: http://weblogs.asp.net/scottgu/archive/2007/12/14/new-asp-net-dynamic-data-support.aspx Without getting THAT cutting edge, I'd use the XSD generator to generate a strongly typed DataSet that coincides with the table in question. This will also generate the TableAdapter you can use to do all your CRUD statements. From there, bind it to a DataGrid and leverage all the standard templates/events involved with that, such as EditIndex, SelectedIndex, RowEditing, RowUpdated, etc. I've been doing this since the early 1.0 days of .NET and this kind of functionality has only gotten more and more streamlined with every update of the Framework. EDIT: I want to give a quick nod to the Matt Berseth blog as well. I've been following a lot of his stuff for a while now and it is great! | Best way to custom edit records in ASP.NET? I'm coming from a Rails background and doing some work on an ASP.NET project (not ASP MVC). Newbie question: what's the easiest way to make a custom editor for a table of records? For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in. In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo". I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated. | TITLE:
Best way to custom edit records in ASP.NET?
QUESTION:
I'm coming from a Rails background and doing some work on an ASP.NET project (not ASP MVC). Newbie question: what's the easiest way to make a custom editor for a table of records? For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in. In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo". I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated.
ANSWER:
You can REALLY cheat nowadays and take a peek at the new Dynamic Data that comes with .NET 3.5 SP1. Scott Guthrie has a blog entry demoing how quick and easy it'll flow for you here: http://weblogs.asp.net/scottgu/archive/2007/12/14/new-asp-net-dynamic-data-support.aspx Without getting THAT cutting edge, I'd use the XSD generator to generate a strongly typed DataSet that coincides with the table in question. This will also generate the TableAdapter you can use to do all your CRUD statements. From there, bind it to a DataGrid and leverage all the standard templates/events involved with that, such as EditIndex, SelectedIndex, RowEditing, RowUpdated, etc. I've been doing this since the early 1.0 days of .NET and this kind of functionality has only gotten more and more streamlined with every update of the Framework. EDIT: I want to give a quick nod to the Matt Berseth blog as well. I've been following a lot of his stuff for a while now and it is great! | [
"asp.net"
] | 2 | 2 | 1,805 | 3 | 0 | 2008-08-15T20:30:37.663000 | 2008-08-15T20:42:30.230000 |
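As a hedged illustration of wiring up those standard edit events on a grid (the IDs, key name, and handler names are hypothetical):

<asp:GridView ID="ItemsGrid" runat="server"
    AutoGenerateEditButton="true" DataKeyNames="ItemID"
    OnRowEditing="ItemsGrid_RowEditing"
    OnRowUpdating="ItemsGrid_RowUpdating"
    OnRowCancelingEdit="ItemsGrid_RowCancelingEdit" />

In the code-behind, RowEditing sets EditIndex and rebinds, RowUpdating writes the new category back through the generated TableAdapter, and RowCancelingEdit resets EditIndex to -1.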
12,709 | 16,269 | Load an XmlNodeList into an XmlDocument without looping? | I originally asked this question on RefactorMyCode, but got no responses there... Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping. Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument '' build xpath string with list of months to return Dim xp As New StringBuilder("//") xp.Append(nodeName) xp.Append("[") For i As Integer = 0 To (months - 1) '' get year and month portion of date for datestring xp.Append("starts-with(@Id, '") xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM")) If i < (months - 1) Then xp.Append("') or ") Else xp.Append("')]") End If Next
'' *** This is the block that needs to be refactored *** '' import nodelist into an xmldocument Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString()) Dim returnXDoc As New XmlDocument(xDoc.NameTable) returnXDoc = xDoc.Clone() Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path) For Each nodeParent As XmlNode In nodeParents For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName) nodeParent.RemoveChild(nodeToDelete) Next Next
For Each node As XmlNode In xnl Dim newNode As XmlNode = returnXDoc.ImportNode(node, True) returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode) Next
'' *** end *** Return returnXDoc End Function | Dim returnXDoc As New XmlDocument(xDoc.NameTable) returnXDoc = xDoc.Clone() The first line here is redundant - you are creating an instance of an XmlDocument, then reassigning the variable: Dim returnXDoc As XmlDocument = xDoc.Clone() This does the same. Seeing as you appear to be inserting each XmlNode from your node list into a different place in the new XmlDocument then I can't see how you could possibly do this any other way. There may be faster XPath expressions you could write, for example pre-pending an XPath expression with "//" is almost always the slowest way to do something, especially if your XML is well structured. You haven't shown your XML so I couldn't really comment on this further however. | Load an XmlNodeList into an XmlDocument without looping? I originally asked this question on RefactorMyCode, but got no responses there... Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping. Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument '' build xpath string with list of months to return Dim xp As New StringBuilder("//") xp.Append(nodeName) xp.Append("[") For i As Integer = 0 To (months - 1) '' get year and month portion of date for datestring xp.Append("starts-with(@Id, '") xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM")) If i < (months - 1) Then xp.Append("') or ") Else xp.Append("')]") End If Next
'' *** This is the block that needs to be refactored *** '' import nodelist into an xmldocument Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString()) Dim returnXDoc As New XmlDocument(xDoc.NameTable) returnXDoc = xDoc.Clone() Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path) For Each nodeParent As XmlNode In nodeParents For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName) nodeParent.RemoveChild(nodeToDelete) Next Next
For Each node As XmlNode In xnl Dim newNode As XmlNode = returnXDoc.ImportNode(node, True) returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode) Next
'' *** end *** Return returnXDoc End Function | TITLE:
Load an XmlNodeList into an XmlDocument without looping?
QUESTION:
I originally asked this question on RefactorMyCode, but got no responses there... Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping. Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument '' build xpath string with list of months to return Dim xp As New StringBuilder("//") xp.Append(nodeName) xp.Append("[") For i As Integer = 0 To (months - 1) '' get year and month portion of date for datestring xp.Append("starts-with(@Id, '") xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM")) If i < (months - 1) Then xp.Append("') or ") Else xp.Append("')]") End If Next
'' *** This is the block that needs to be refactored *** '' import nodelist into an xmldocument Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString()) Dim returnXDoc As New XmlDocument(xDoc.NameTable) returnXDoc = xDoc.Clone() Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path) For Each nodeParent As XmlNode In nodeParents For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName) nodeParent.RemoveChild(nodeToDelete) Next Next
For Each node As XmlNode In xnl Dim newNode As XmlNode = returnXDoc.ImportNode(node, True) returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode) Next
'' *** end *** Return returnXDoc End Function
ANSWER:
Dim returnXDoc As New XmlDocument(xDoc.NameTable) returnXDoc = xDoc.Clone() The first line here is redundant - you are creating an instance of an XmlDocument, then reassigning the variable: Dim returnXDoc As XmlDocument = xDoc.Clone() This does the same. Seeing as you appear to be inserting each XmlNode from your node list into a different place in the new XmlDocument then I can't see how you could possibly do this any other way. There may be faster XPath expressions you could write, for example pre-pending an XPath expression with "//" is almost always the slowest way to do something, especially if your XML is well structured. You haven't shown your XML so I couldn't really comment on this further however. | [
"xml",
"vb.net",
"xmldocument",
"xmlnode",
"xmlnodelist"
] | 4 | 2 | 11,543 | 1 | 0 | 2008-08-15T20:33:33.833000 | 2008-08-19T14:28:49.483000 |
12,716 | 12,751 | Problems with #import of .NET out-of-proc server | In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like: z:\server.tlh(111): error C2146: syntax error: missing ';' before identifier 'GetType' z:\server.tlh(111): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): error C2143: syntax error: missing ';' before 'tag::id' z:\server.tli(74): error C2433: '_TypePtr': 'inline' not permitted on data declarations z:\server.tli(74): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): fatal error C1004: unexpected end of file found The TLH looks like: _bstr_t GetToString(); VARIANT_BOOL Equals (const _variant_t & obj); long GetHashCode(); _TypePtr GetType(); long Open(); I am not really interested in having the base .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems. Some google research indicates I could #import mscorlib.tlb (or put it in path), but I can't get that to compile either. Any tips? | Added no_namespace and raw_interfaces_only to my #import: #import "server.tlb" no_namespace raw_interfaces_only Also using TLBEXP.EXE instead of REGASM.EXE seems to help this issue. | Problems with #import of .NET out-of-proc server In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like: z:\server.tlh(111): error C2146: syntax error: missing ';' before identifier 'GetType' z:\server.tlh(111): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): error C2143: syntax error: missing ';' before 'tag::id' z:\server.tli(74): error C2433: '_TypePtr': 'inline' not permitted on data declarations z:\server.tli(74): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): fatal error C1004: unexpected end of file found The TLH looks like: _bstr_t GetToString(); VARIANT_BOOL Equals (const _variant_t & obj); long GetHashCode(); _TypePtr GetType(); long Open(); I am not really interested in having the base .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems. Some google research indicates I could #import mscorlib.tlb (or put it in path), but I can't get that to compile either. Any tips? | TITLE:
Problems with #import of .NET out-of-proc server
QUESTION:
In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like: z:\server.tlh(111): error C2146: syntax error: missing ';' before identifier 'GetType' z:\server.tlh(111): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): error C2143: syntax error: missing ';' before 'tag::id' z:\server.tli(74): error C2433: '_TypePtr': 'inline' not permitted on data declarations z:\server.tli(74): error C2501: '_TypePtr': missing storage-class or type specifiers z:\server.tli(74): fatal error C1004: unexpected end of file found The TLH looks like: _bstr_t GetToString(); VARIANT_BOOL Equals (const _variant_t & obj); long GetHashCode(); _TypePtr GetType(); long Open(); I am not really interested in having the base .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems. Some google research indicates I could #import mscorlib.tlb (or put it in path), but I can't get that to compile either. Any tips?
ANSWER:
Added no_namespace and raw_interfaces_only to my #import: #import "server.tlb" no_namespace raw_interfaces_only Also using TLBEXP.EXE instead of REGASM.EXE seems to help this issue. | [
"c#",
"c++",
"com",
"interop"
] | 4 | 1 | 4,685 | 5 | 0 | 2008-08-15T20:40:11.980000 | 2008-08-15T21:06:51.660000 |
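If the _TypePtr errors persist, the workaround the question alludes to -- importing mscorlib first so the .NET base types resolve before the server's TLB is processed -- looks roughly like this (a sketch; the exact attribute set you need may differ):

#import <mscorlib.tlb> raw_interfaces_only
#import "server.tlb" no_namespace raw_interfaces_only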
12,718 | 12,744 | Setting up Subversion on Windows as a service | When installing subversion as a service, I used this command: c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository" And then I got this error: Could not create service in service control manager. After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect. Anybody know what I did wrong, or how to overcome this? Note #1: I am running as an administrator on this box. Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository". I used the name of an actual project which is currently under source control in (gasp) VSS. | VisualSVN Server installs as a Windows service. It is free, includes Apache, OpenSSL, and a repository / permission management tool. It can also integrate with Active Directory for user authentication. I highly recommend it for hosting SVN on Windows. | Setting up Subversion on Windows as a service When installing subversion as a service, I used this command: c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository" And then I got this error: Could not create service in service control manager. After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect. Anybody know what I did wrong, or how to overcome this? Note #1: I am running as an administrator on this box. Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository". I used the name of an actual project which is currently under source control in (gasp) VSS. | TITLE:
Setting up Subversion on Windows as a service
QUESTION:
When installing subversion as a service, I used this command: c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository" And then I got this error: Could not create service in service control manager. After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect. Anybody know what I did wrong, or how to overcome this? Note #1: I am running as an administrator on this box. Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository". I used the name of an actual project which is currently under source control in (gasp) VSS.
ANSWER:
VisualSVN Server installs as a Windows service. It is free, includes Apache, OpenSSL, and a repository / permission management tool. It can also integrate with Active Directory for user authentication. I highly recommend it for hosting SVN on Windows. | [
"svn",
"version-control"
] | 1 | 6 | 3,774 | 7 | 0 | 2008-08-15T20:41:54.940000 | 2008-08-15T21:04:08.827000 |
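If you would rather stay with plain svnserve, Subversion 1.4 and later can run it as a native Windows service via the built-in sc tool -- a hedged sketch with illustrative paths (the space after each = is required by sc):

sc create svnserve binpath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r C:\repositories" displayname= "Subversion Server" depend= Tcpip start= auto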
12,720 | 12,728 | Add .NET 2.0 SP1 as a prerequisite for deployment project | I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project? | You'll want to set up a launch condition in your deployment project to make sure version 2.0 SP1 is installed. You'll want to set a requirement based off the MsiNetAssemblySupport variable, tied to the version number of .NET 2.0 SP1 (2.0.50727.1433 and above according to this page.) Bootstrapping the project to actually download the framework if it isn't installed is a different matter, and there are plenty of articles out there on how to do that. | Add .NET 2.0 SP1 as a prerequisite for deployment project I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project? | TITLE:
Add .NET 2.0 SP1 as a prerequisite for deployment project
QUESTION:
I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project?
ANSWER:
You'll want to set up a launch condition in your deployment project to make sure version 2.0 SP1 is installed. You'll want to set a requirement based off the MsiNetAssemblySupport variable, tied to the version number of .NET 2.0 SP1 (2.0.50727.1433 and above according to this page.) Bootstrapping the project to actually download the framework if it isn't installed is a different matter, and there are plenty of articles out there on how to do that. | [
".net-2.0",
"installation",
"dependencies"
] | 2 | 3 | 938 | 1 | 0 | 2008-08-15T20:42:51.523000 | 2008-08-15T20:51:14.580000 |
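Concretely, the launch condition boils down to a one-line expression on the condition's properties, something like this (the message wording is up to you):

Condition: MsiNetAssemblySupport >= "2.0.50727.1433"
Message: This application requires .NET Framework 2.0 SP1. Please install it and run setup again.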
12,765 | 12,769 | Strange characters in PHP | This is driving me crazy. I have this one php file on a test server at work which does not work. I kept deleting stuff from it till all it does is output Hello. If I create a new file and copy/paste the same script into it, it works! Why does this one file give me the strange characters all the time? | That's the BOM (Byte Order Mark) you are seeing. In your editor, there should be a way to force saving without BOM which will remove the problem. | Strange characters in PHP This is driving me crazy. I have this one php file on a test server at work which does not work. I kept deleting stuff from it till all it does is output Hello. If I create a new file and copy/paste the same script into it, it works! Why does this one file give me the strange characters all the time? | TITLE:
Strange characters in PHP
QUESTION:
This is driving me crazy. I have this one php file on a test server at work which does not work. I kept deleting stuff from it till all it does is output Hello. If I create a new file and copy/paste the same script into it, it works! Why does this one file give me the strange characters all the time?
ANSWER:
That's the BOM (Byte Order Mark) you are seeing. In your editor, there should be a way to force saving without BOM which will remove the problem. | [
"php",
"encoding"
] | 10 | 16 | 1,839 | 3 | 0 | 2008-08-15T21:18:30.843000 | 2008-08-15T21:20:49.870000 |
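If the editor cannot save without a BOM, a quick sketch of detecting and stripping the 3-byte UTF-8 BOM (EF BB BF) in PHP itself:

$contents = file_get_contents('page.php');
if (substr($contents, 0, 3) === "\xEF\xBB\xBF") {
    // drop the BOM and rewrite the file
    file_put_contents('page.php', substr($contents, 3));
}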
12,768 | 12,789 | How is the HTML on this site so clean? | I work with C# at work but dislike how webforms spews out a lot of JavaScript, not including the many lines of viewstate that it creates. That's why I like coding with PHP as I have full control. But I was just wondering how this site's HTML is so clean and elegant? Does using MVC have something to do with it? I see that JQuery is used but surely you still use asp:required validators? If you do, where is all the hideous code that it normally produces? And if they aren't using required field validators, why not? Surely it's quicker to develop in than using JQuery? One of the main reasons I code my personal sites in PHP was due to the more elegant HTML that it produces but if I can produce code like this site then I will go full-time .NET! | One of the goals of ASP.NET MVC is to give you control of your markup. However, there have always been choices with ASP.NET which would allow you to generate relatively clean HTML. For instance, ASP.NET has always offered a choice with validator controls. Do you value development speed over markup? Use validators. Value markup over development speed? Pick another validation mechanism. Your comments on validators are kind of contradictory there - it's possible to use ASP.NET and still make choices for markup purity over development speed. Also, with webforms, we've had the CSS Friendly Control Adapters for a few years which will modify the controls to render more semantic markup. ASP.NET 3.5 included the ListView, which makes it really easy to write repeater type controls which emit semantic HTML. We used ASP.NET webforms on the Microsoft PDC site and have kept the HTML pretty clean: http://microsoftpdc.com/Agenda/Speakers.aspx - the Viewstate could probably be disabled on most pages, although in reality it's only a few dozen bytes. | How is the HTML on this site so clean? I work with C# at work but dislike how webforms spews out a lot of JavaScript, not including the many lines of viewstate that it creates. That's why I like coding with PHP as I have full control. But I was just wondering how this site's HTML is so clean and elegant? Does using MVC have something to do with it? I see that JQuery is used but surely you still use asp:required validators? If you do, where is all the hideous code that it normally produces? And if they aren't using required field validators, why not? Surely it's quicker to develop in than using JQuery? One of the main reasons I code my personal sites in PHP was due to the more elegant HTML that it produces but if I can produce code like this site then I will go full-time .NET! | TITLE:
How is the HTML on this site so clean?
QUESTION:
I work with C# at work but dislike how webforms spews out a lot of JavaScript, not including the many lines of viewstate that it creates. That's why I like coding with PHP as I have full control. But I was just wondering how this site's HTML is so clean and elegant? Does using MVC have something to do with it? I see that JQuery is used but surely you still use asp:required validators? If you do, where is all the hideous code that it normally produces? And if they aren't using required field validators, why not? Surely it's quicker to develop in than using JQuery? One of the main reasons I code my personal sites in PHP was due to the more elegant HTML that it produces but if I can produce code like this site then I will go full-time .NET!
ANSWER:
One of the goals of ASP.NET MVC is to give you control of your markup. However, there have always been choices with ASP.NET which would allow you to generate relatively clean HTML. For instance, ASP.NET has always offered a choice with validator controls. Do you value development speed over markup? Use validators. Value markup over development speed? Pick another validation mechanism. Your comments on validators are kind of contradictory there - it's possible to use ASP.NET and still make choices for markup purity over development speed. Also, with webforms, we've had the CSS Friendly Control Adapters for a few years which will modify the controls to render more semantic markup. ASP.NET 3.5 included the ListView, which makes it really easy to write repeater type controls which emit semantic HTML. We used ASP.NET webforms on the Microsoft PDC site and have kept the HTML pretty clean: http://microsoftpdc.com/Agenda/Speakers.aspx - the Viewstate could probably be disabled on most pages, although in reality it's only a few dozen bytes. | [
"html",
"semantic-markup"
] | 10 | 9 | 1,011 | 4 | 0 | 2008-08-15T21:19:48.437000 | 2008-08-15T21:30:36.043000 |
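For reference, the viewstate mentioned at the end of that answer is a one-attribute switch, either per page or per control:

<%@ Page Language="C#" EnableViewState="false" %>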
12,774 | 12,778 | Attaching VisualSVN Server to an existing repository | All the recent VisualSVN Server posts made me want to check it out. I have SVN running right now through Apache, but I'd like to try out VisualSVN Server, mostly for the Active Directory integration. Their docs don't describe whether you can easily migrate an existing repository. Anyone done this before? | VisualSVN Server will use your existing SVN repositories with no problems. I have successfully migrated repositories from SVN + Apache to VisualSVN Server on multiple occasions. | Attaching VisualSVN Server to an existing repository All the recent VisualSVN Server posts made me want to check it out. I have SVN running right now through Apache, but I'd like to try out VisualSVN Server, mostly for the Active Directory integration. Their docs don't describe whether you can easily migrate an existing repository. Anyone done this before? | TITLE:
Attaching VisualSVN Server to an existing repository
QUESTION:
All the recent VisualSVN Server posts made me want to check it out. I have SVN running right now through Apache, but I'd like to try out VisualSVN Server, mostly for the Active Directory integration. Their docs don't describe whether you can easily migrate an existing repository. Anyone done this before?
ANSWER:
VisualSVN Server will use your existing SVN repositories with no problems. I have successfully migrated repositories from SVN + Apache to VisualSVN Server on multiple occasions. | [
"svn",
"version-control",
"visualsvn-server",
"svn-repository"
] | 3 | 4 | 3,825 | 4 | 0 | 2008-08-15T21:22:36.340000 | 2008-08-15T21:23:40.957000 |
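If you do need to physically relocate a repository rather than point VisualSVN Server at it in place, the standard dump/load cycle works (paths are illustrative; create the target repository first, e.g. with svnadmin create or from the VisualSVN Server console):

svnadmin dump C:\old\repos\project > project.dump
svnadmin load C:\Repositories\project < project.dump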
12,794 | 12,983 | How do I add a pre tag inside a code tag with jQuery? | I'm trying to use jQuery to format code blocks, specifically to add a <pre> tag inside the <code> tag: $(document).ready(function() { $("code").wrapInner("<pre />"); }); Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert alert($("code").html()); I see that IE has inserted some additional text into the pre tag. If I reload the page, the number following jQuery changes. If I use wrap() instead of wrapInner(), to wrap the <pre> outside the <code> tag, both IE and Firefox handle it correctly. But shouldn't <pre> work inside <code> as well? I'd prefer to use wrapInner() because I can then add a CSS class to the <pre> tag to handle all formatting, but if I use wrap(), I have to put page formatting CSS in the <pre> tag and text/font formatting in the <code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible. Has anyone else encountered this? Am I missing something? | That's the difference between block and inline elements. pre is a block-level element. It's not legal to put it inside a code tag, which can only contain inline content. Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen. Could you instead replace the code element with the pre? (Because of the block/inline issue, technically that should only work if the elements are inside an element with "flow" content, but the browsers might do what you want anyway.) Why is it a code element in the first place, if you want pre's behavior? You could also give the code element pre's whitespace-preserving power with the CSS white-space: pre, but apparently IE 6 only honors that in Strict Mode. | How do I add a pre tag inside a code tag with jQuery? I'm trying to use jQuery to format code blocks, specifically to add a <pre> tag inside the <code> tag: $(document).ready(function() { $("code").wrapInner("<pre />"); }); Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert alert($("code").html()); I see that IE has inserted some additional text into the pre tag. If I reload the page, the number following jQuery changes. If I use wrap() instead of wrapInner(), to wrap the <pre> outside the <code> tag, both IE and Firefox handle it correctly. But shouldn't <pre> work inside <code> as well? I'd prefer to use wrapInner() because I can then add a CSS class to the <pre> tag to handle all formatting, but if I use wrap(), I have to put page formatting CSS in the <pre> tag and text/font formatting in the <code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible. Has anyone else encountered this? Am I missing something? | TITLE:
How do I add a pre tag inside a code tag with jQuery?
QUESTION:
I'm trying to use jQuery to format code blocks, specifically to add a <pre> tag inside the <code> tag: $(document).ready(function() { $("code").wrapInner("<pre />"); }); Firefox applies the formatting correctly, but IE puts the entire code block on one line. If I add an alert alert($("code").html()); I see that IE has inserted some additional text into the pre tag. If I reload the page, the number following jQuery changes. If I use wrap() instead of wrapInner(), to wrap the <pre> outside the <code> tag, both IE and Firefox handle it correctly. But shouldn't <pre> work inside <code> as well? I'd prefer to use wrapInner() because I can then add a CSS class to the <pre> tag to handle all formatting, but if I use wrap(), I have to put page formatting CSS in the <pre> tag and text/font formatting in the <code> tag, or Firefox and IE both choke. Not a huge deal, but I'd like to keep it as simple as possible. Has anyone else encountered this? Am I missing something?
ANSWER:
That's the difference between block and inline elements. pre is a block-level element. It's not legal to put it inside a code tag, which can only contain inline content. Because browsers have to support whatever godawful tag soup they might find on the real web, Firefox tries to do what you mean. IE happens to handle it differently, which is fine by the spec; behavior in that case is unspecified, because it should never happen. Could you instead replace the code element with the pre? (Because of the block/inline issue, technically that should only work if the elements are inside an element with "flow" content, but the browsers might do what you want anyway.) Why is it a code element in the first place, if you want pre's behavior? You could also give the code element pre's whitespace-preserving power with the CSS white-space: pre, but apparently IE 6 only honors that in Strict Mode. | [
"javascript",
"jquery",
"html",
"internet-explorer"
] | 13 | 11 | 7,946 | 5 | 0 | 2008-08-15T21:35:51.260000 | 2008-08-16T04:04:09.713000 |
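The two fallbacks from that answer, sketched out (the class name is made up; the jQuery variant keeps the text content but drops any markup inside the code element):

/* CSS: preserve whitespace without a pre element (IE6 honors this only in Strict Mode) */
code.block { white-space: pre; display: block; }

// jQuery: swap each code element for a pre instead of nesting one inside it
$("code").each(function() {
    $(this).replaceWith($("<pre></pre>").text($(this).text()));
});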
12,807 | 14,604 | How to register COM from VS Setup project? | I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be? The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague). Using Visual Studio 2005 on XP. | Well, I have found a solution: Run RegAsm.exe with the /regfile option to generate the registry entries. Manually import the .reg file into the VS Setup project by viewing the registry, right clicking, and choosing "Import..." | How to register COM from VS Setup project? I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be? The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague). Using Visual Studio 2005 on XP. | TITLE:
How to register COM from VS Setup project?
QUESTION:
I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be? The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague). Using Visual Studio 2005 on XP.
ANSWER:
Well, I have found a solution: Run RegAsm.exe with the /regfile option to generate the registry entries. Manually import the .reg file into the VS Setup project by viewing the registry, right clicking, and choosing "Import..." | [
"com",
"registry"
] | 3 | 5 | 4,131 | 1 | 0 | 2008-08-15T21:51:29.480000 | 2008-08-18T13:37:12.923000 |
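For concreteness, the two steps in the answer above might look like this from a command prompt; the assembly name, .reg file name, and framework path are hypothetical and depend on your project and installed .NET version:

```bat
rem Step 1: have RegAsm emit a .reg file instead of writing the registry directly
"%WINDIR%\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe" MyComLibrary.dll /regfile:MyComLibrary.reg

rem Step 2: in the Setup project, open View > Registry, right-click,
rem choose "Import..." and select MyComLibrary.reg
```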
12,823 | 12,875 | Filter by zip code, or other location based data retrieval strategies | My little site should be pooling list of items from a table using the active user's location as a filter. Think Craigslist, where you search for "dvd' but the results are not from all the DB, they are filtered by a location you select. My question has 2 levels: should I go a-la-craigslist, and ask users to use a city level location? My problem with this is that you need to generate what seems to me a hard coded, hand made list of locations. should I go a-la-zipCode. The idea of just asking the user to type his zipcode, and then pool all items that are in the same or in a certain distance from his zip code. I seem to prefer the zip code way as it seems more elegant solution, but how on earth do one goes about creating a DB of all zip codes and implement the function that given zip code 12345, gets all zipcodes in 1 mile distance? this should be fairly common "task" as many sites have a need similar to mine, so I am hoping not to re-invent the wheel here. | Getting a Zip Code database is no problem. You can try this free one: http://zips.sourceforge.net/ Although I don't know how current it is, or you can use one of many providers. We have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data complete with Lat/Longs of the centroid of the zip code. As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source libraries and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS that adds this capability to that database. It is here: http://postgis.refractions.net/ Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius. If you want to go a different route you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others ( Google can help out ) that can use the free Tiger maps for the united states as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search. Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number ob web services that you can subscribe to that can provide all zip codes in a given radius from a provided zip code. A quick Google search turned up this: http://www.zip-codes.com/free-zip-code-tools.asp#radius But there are MANY resources to be had for the searching on this subject. | Filter by zip code, or other location based data retrieval strategies My little site should be pooling list of items from a table using the active user's location as a filter. Think Craigslist, where you search for "dvd' but the results are not from all the DB, they are filtered by a location you select. My question has 2 levels: should I go a-la-craigslist, and ask users to use a city level location? My problem with this is that you need to generate what seems to me a hard coded, hand made list of locations. should I go a-la-zipCode. 
The idea of just asking the user to type his zipcode, and then pool all items that are in the same or in a certain distance from his zip code. I seem to prefer the zip code way as it seems more elegant solution, but how on earth do one goes about creating a DB of all zip codes and implement the function that given zip code 12345, gets all zipcodes in 1 mile distance? this should be fairly common "task" as many sites have a need similar to mine, so I am hoping not to re-invent the wheel here. | TITLE:
Filter by zip code, or other location based data retrieval strategies
QUESTION:
My little site should be pooling a list of items from a table using the active user's location as a filter. Think Craigslist, where you search for "dvd" but the results are not from all the DB, they are filtered by a location you select. My question has 2 levels: should I go a-la-craigslist, and ask users to use a city-level location? My problem with this is that you need to generate what seems to me a hard-coded, hand-made list of locations. Should I go a-la-zipCode? The idea is just asking the user to type his zipcode, and then pool all items that are in the same or within a certain distance from his zip code. I seem to prefer the zip code way as it seems a more elegant solution, but how on earth does one go about creating a DB of all zip codes and implementing the function that, given zip code 12345, gets all zipcodes within a 1 mile distance? This should be a fairly common "task" as many sites have a need similar to mine, so I am hoping not to re-invent the wheel here.
ANSWER:
Getting a Zip Code database is no problem. You can try this free one: http://zips.sourceforge.net/ Although I don't know how current it is, or you can use one of many providers. We have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data complete with Lat/Longs of the centroid of the zip code. As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source libraries and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS, that adds this capability to that database. It is here: http://postgis.refractions.net/ Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB-based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius. If you want to go a different route you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others (Google can help out) that can use the free Tiger maps for the United States as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search. Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number of web services that you can subscribe to that can provide all zip codes in a given radius from a provided zip code. A quick Google search turned up this: http://www.zip-codes.com/free-zip-code-tools.asp#radius But there are MANY resources to be had by searching on this subject. | [
"location",
"zipcode"
] | 4 | 5 | 4,665 | 5 | 0 | 2008-08-15T22:09:05.343000 | 2008-08-15T23:12:26.323000 |
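As a rough illustration of the radius query the answer above describes, here is a MySQL-flavored haversine sketch; the zips table and its lat/lng columns are assumptions for illustration, not part of any particular zip code product:

```sql
-- Assumes a table zips(zip CHAR(5), lat DECIMAL(9,6), lng DECIMAL(9,6))
-- holding the centroid of each zip code. 3959 is the earth's radius in miles.
SET @lat = 40.712800, @lng = -74.006000, @radius = 1;

SELECT zip,
       3959 * ACOS(LEAST(1,
           COS(RADIANS(@lat)) * COS(RADIANS(lat)) *
           COS(RADIANS(lng) - RADIANS(@lng)) +
           SIN(RADIANS(@lat)) * SIN(RADIANS(lat)))) AS distance_miles
FROM zips
HAVING distance_miles <= @radius
ORDER BY distance_miles;
```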
12,836 | 12,935 | C# Database Access: DBNull vs null | We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader. In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null for example) so its definitely something we need to make a conscious decision about. | I find it better to use null, instead of DB null. The reason is because, as you said, you're separating yourself from the DB world. It is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull. In the long run, architecturally I find it to be the better solution. | C# Database Access: DBNull vs null We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader. In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null for example) so its definitely something we need to make a conscious decision about. | TITLE:
C# Database Access: DBNull vs null
QUESTION:
We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader. In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null, for example) so it's definitely something we need to make a conscious decision about.
ANSWER:
I find it better to use null, instead of DBNull. The reason is that, as you said, you're separating yourself from the DB world. It is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull. In the long run, architecturally I find it to be the better solution. | [
"c#",
"orm",
"null",
"dbnull"
] | 13 | 15 | 15,243 | 4 | 0 | 2008-08-15T22:23:21.490000 | 2008-08-16T00:55:43.510000 |
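A sketch of the boundary conversion the answer argues for, so DBNull never escapes the data-access layer; the class and method names here are made up for illustration:

```csharp
using System;

static class DbValue
{
    // Map DBNull.Value to a plain CLR null when reading.
    public static object FromDb(object value)
    {
        return value == DBNull.Value ? null : value;
    }

    // Map null back to DBNull.Value when writing a parameter.
    public static object ToDb(object value)
    {
        return value ?? DBNull.Value;
    }
}

// Usage inside a reader loop (reader is an IDataReader):
//   string name = (string)DbValue.FromDb(reader["Name"]);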
12,843 | 20,591 | How to combine two projects in Mercurial? | I have two separate mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously. I'd really like the two projects to each be a subdirectory in the new repository. How do I merge the two projects? Is this a good idea, or should I keep them separate? It seems I ought to be able to push from one repository to the other... Maybe this is really straight forward? | I was able to combine my two repositories in this way: Use hg clone first_repository to clone one of the repositories. Use hg pull -f other_repository to pull the code in from the other repository. The -f (force) flag on the pull is the key -- it says to ignore the fact that the two repositories are not from the same source. Here are the docs for this feature. | How to combine two projects in Mercurial? I have two separate mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously. I'd really like the two projects to each be a subdirectory in the new repository. How do I merge the two projects? Is this a good idea, or should I keep them separate? It seems I ought to be able to push from one repository to the other... Maybe this is really straight forward? | TITLE:
How to combine two projects in Mercurial?
QUESTION:
I have two separate Mercurial repositories. At this point it makes sense that they "become one" because I want to work on the two projects simultaneously. I'd really like the two projects to each be a subdirectory in the new repository. How do I merge the two projects? Is this a good idea, or should I keep them separate? It seems I ought to be able to push from one repository to the other... Maybe this is really straightforward?
ANSWER:
I was able to combine my two repositories in this way: Use hg clone first_repository to clone one of the repositories. Use hg pull -f other_repository to pull the code in from the other repository. The -f (force) flag on the pull is the key -- it says to ignore the fact that the two repositories are not from the same source. Here are the docs for this feature. | [
"mercurial"
] | 84 | 69 | 14,905 | 3 | 0 | 2008-08-15T22:29:44.770000 | 2008-08-21T17:23:08.013000 |
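Spelled out as a shell session, the answer's recipe might look like this; the repository paths are hypothetical, and note that after the forced pull you still have two heads to merge and commit (moving each project into its own subdirectory would be a separate hg rename step):

```
$ hg clone /path/to/project-a combined
$ cd combined
$ hg pull -f /path/to/project-b
$ hg merge
$ hg commit -m "Merge project-b into project-a"
```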
12,865 | 12,879 | Mercurial stuck "waiting for lock" | Got a bluescreen in windows while cloning a mercurial repository. After reboot, I now get this message for almost all hg commands: c:\src\>hg commit waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' interrupted! Google is no help. Any tips? | When "waiting for lock on repository", delete the repository file:.hg/wlock (or it may be in.hg/store/lock ) When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true). | Mercurial stuck "waiting for lock" Got a bluescreen in windows while cloning a mercurial repository. After reboot, I now get this message for almost all hg commands: c:\src\>hg commit waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' interrupted! Google is no help. Any tips? | TITLE:
Mercurial stuck "waiting for lock"
QUESTION:
Got a bluescreen in windows while cloning a mercurial repository. After reboot, I now get this message for almost all hg commands: c:\src\>hg commit waiting for lock on repository c:\src\McVrsServer held by '\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' interrupted! Google is no help. Any tips?
ANSWER:
When "waiting for lock on repository", delete the repository file:.hg/wlock (or it may be in.hg/store/lock ) When deleting the lock file, you must make sure nothing else is accessing the repository. (If the lock is a string of zeros or blank, this is almost certainly true). | [
"mercurial"
] | 361 | 509 | 118,965 | 11 | 0 | 2008-08-15T23:01:16.533000 | 2008-08-15T23:20:18.820000 |
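Using the path from the question above, clearing the stale lock on Windows amounts to the following; make sure no hg process is still running first:

```
cd C:\src\McVrsServer
del .hg\wlock
rem or, depending on where the lock file lives:
del .hg\store\lock
```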
12,870 | 12,878 | Arrays of Arrays in Java | This is a nasty one for me... I'm a PHP guy working in Java on a JSP project. I know how to do what I'm attempting through too much code and a complete lack of finesse. I'd prefer to do it right. Here is the situation: I'm writing a small display to show customers what days they can water their lawns based on their watering group (ABCDE) and what time of year it is. Our seasons look like this: Summer (5-1 to 8-31) Spring (3-1 to 4-30) Fall (9-1 to 10-31) Winter (11-1 to 2-28) An example might be: If I'm in group A, here would be my allowed times: Winter: Mondays only Spring: Tues, Thurs, Sat Summer: Any Day Fall: Tues, Thurs, Sat If I was writing this in PHP I would use arrays like this: //M=Monday,t=Tuesday,T=Thursday.... etc $schedule["A"]["Winter"]='M'; $schedule["A"]["Spring"]='tTS'; $schedule["A"]["Summer"]='Any'; $schedule["A"]["Fall"]='tTS'; $schedule["B"]["Winter"]='t'; I COULD make the days arrays (array("Tuesday","Thursday","Saturday")) etc, but it is not necessary for what I'm really trying to accomplish. I will also need to setup arrays to determine what season I'm in: $seasons["Summer"]["start"]=0501; $seasons["Summer"]["end"]=0801; Can anyone suggest a really cool way to do this? I will have today's date and the group letter. I will need to get out of my function a day (M) or a series of days (tTS), (Any). | You could do essentially the same code with Hashtables (or some other Map): Hashtable > schedule = new Hashtable >(); schedule.put("A", new Hashtable ()); schedule.put("B", new Hashtable ()); schedule.put("C", new Hashtable ()); schedule.put("D", new Hashtable ()); schedule.put("E", new Hashtable ());
schedule.get("A").put("Winter", "M"); schedule.get("A").put("Spring", "tTS"); // Etc... Not as elegant, but then again, Java isn't a dynamic language, and it doesn't have hashes on the language level. Note: You might be able to do a better solution, this just popped in my head as I read your question. | Arrays of Arrays in Java This is a nasty one for me... I'm a PHP guy working in Java on a JSP project. I know how to do what I'm attempting through too much code and a complete lack of finesse. I'd prefer to do it right. Here is the situation: I'm writing a small display to show customers what days they can water their lawns based on their watering group (ABCDE) and what time of year it is. Our seasons look like this: Summer (5-1 to 8-31) Spring (3-1 to 4-30) Fall (9-1 to 10-31) Winter (11-1 to 2-28) An example might be: If I'm in group A, here would be my allowed times: Winter: Mondays only Spring: Tues, Thurs, Sat Summer: Any Day Fall: Tues, Thurs, Sat If I was writing this in PHP I would use arrays like this: //M=Monday,t=Tuesday,T=Thursday.... etc $schedule["A"]["Winter"]='M'; $schedule["A"]["Spring"]='tTS'; $schedule["A"]["Summer"]='Any'; $schedule["A"]["Fall"]='tTS'; $schedule["B"]["Winter"]='t'; I COULD make the days arrays (array("Tuesday","Thursday","Saturday")) etc, but it is not necessary for what I'm really trying to accomplish. I will also need to setup arrays to determine what season I'm in: $seasons["Summer"]["start"]=0501; $seasons["Summer"]["end"]=0801; Can anyone suggest a really cool way to do this? I will have today's date and the group letter. I will need to get out of my function a day (M) or a series of days (tTS), (Any). | TITLE:
Arrays of Arrays in Java
QUESTION:
This is a nasty one for me... I'm a PHP guy working in Java on a JSP project. I know how to do what I'm attempting through too much code and a complete lack of finesse. I'd prefer to do it right. Here is the situation: I'm writing a small display to show customers what days they can water their lawns based on their watering group (ABCDE) and what time of year it is. Our seasons look like this: Summer (5-1 to 8-31) Spring (3-1 to 4-30) Fall (9-1 to 10-31) Winter (11-1 to 2-28) An example might be: If I'm in group A, here would be my allowed times: Winter: Mondays only Spring: Tues, Thurs, Sat Summer: Any Day Fall: Tues, Thurs, Sat If I was writing this in PHP I would use arrays like this: //M=Monday,t=Tuesday,T=Thursday.... etc $schedule["A"]["Winter"]='M'; $schedule["A"]["Spring"]='tTS'; $schedule["A"]["Summer"]='Any'; $schedule["A"]["Fall"]='tTS'; $schedule["B"]["Winter"]='t'; I COULD make the days arrays (array("Tuesday","Thursday","Saturday")) etc, but it is not necessary for what I'm really trying to accomplish. I will also need to setup arrays to determine what season I'm in: $seasons["Summer"]["start"]=0501; $seasons["Summer"]["end"]=0801; Can anyone suggest a really cool way to do this? I will have today's date and the group letter. I will need to get out of my function a day (M) or a series of days (tTS), (Any).
ANSWER:
You could do essentially the same code with Hashtables (or some other Map): Hashtable<String, Hashtable<String, String>> schedule = new Hashtable<String, Hashtable<String, String>>(); schedule.put("A", new Hashtable<String, String>()); schedule.put("B", new Hashtable<String, String>()); schedule.put("C", new Hashtable<String, String>()); schedule.put("D", new Hashtable<String, String>()); schedule.put("E", new Hashtable<String, String>());
schedule.get("A").put("Winter", "M"); schedule.get("A").put("Spring", "tTS"); // Etc... Not as elegant, but then again, Java isn't a dynamic language, and it doesn't have hashes on the language level. Note: You might be able to do a better solution, this just popped in my head as I read your question. | [
"java",
"php",
"jsp",
"tomcat"
] | 17 | 11 | 1,449 | 12 | 0 | 2008-08-15T23:07:03.400000 | 2008-08-15T23:16:52.383000 |
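A slightly tidier, runnable variant of the answer's nested-map idea, with a small helper to cut the repetition; the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class WateringSchedule {
    // group letter -> (season -> allowed days, e.g. "tTS" or "Any")
    private static final Map<String, Map<String, String>> SCHEDULE =
            new HashMap<String, Map<String, String>>();

    private static void put(String group, String season, String days) {
        Map<String, String> bySeason = SCHEDULE.get(group);
        if (bySeason == null) {
            bySeason = new HashMap<String, String>();
            SCHEDULE.put(group, bySeason);
        }
        bySeason.put(season, days);
    }

    static {
        put("A", "Winter", "M");
        put("A", "Spring", "tTS");
        put("A", "Summer", "Any");
        put("A", "Fall",   "tTS");
        put("B", "Winter", "t");
        // ...remaining groups elided
    }

    public static String allowedDays(String group, String season) {
        Map<String, String> bySeason = SCHEDULE.get(group);
        return bySeason == null ? null : bySeason.get(season);
    }

    public static void main(String[] args) {
        System.out.println(allowedDays("A", "Spring")); // prints tTS
    }
}
```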
12,877 | 12,959 | Oracle SQL Developer not responsive when trying to view tables (or suggest an Oracle Mac client) | I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network. I'm using Java 1.6 on Mac OS X 10.5.4. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker. When I sample the process I see this: mach_msg_trap 16620 read 831 semaphore_wait_trap 831 An acceptable answer that doesn't fix this would include a url for a decent free Oracle client for the Mac. Edit: @Mark Harrison sadly this happens every time I start it up, it's not an old connection. I'll like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me. @Matthew Schinckel Navicat seems to only have a non-commercial Oracle product...I need a commercial friendly one (even if it costs money). | I get the same problem after there's been an active connection sitting idle for a while. I solve it by restarting sql developer every once in a while. I also have Toad for Oracle running on a vmware XP session, and it works great. If you don't mind the money, try that. | Oracle SQL Developer not responsive when trying to view tables (or suggest an Oracle Mac client) I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network. I'm using Java 1.6 on Mac OS X 10.5.4. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker. When I sample the process I see this: mach_msg_trap 16620 read 831 semaphore_wait_trap 831 An acceptable answer that doesn't fix this would include a url for a decent free Oracle client for the Mac. Edit: @Mark Harrison sadly this happens every time I start it up, it's not an old connection. I'll like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me. @Matthew Schinckel Navicat seems to only have a non-commercial Oracle product...I need a commercial friendly one (even if it costs money). | TITLE:
Oracle SQL Developer not responsive when trying to view tables (or suggest an Oracle Mac client)
QUESTION:
I just get the beach ball all day long (it's been doing nothing for hours). It's not taking CPU, not reading from disk, not using the network. I'm using Java 1.6 on Mac OS X 10.5.4. It worked once, now even restarts of the computer won't help. Activity Monitor says it's "(Not Responding)". Only thing that I can do is kill -9 that sucker. When I sample the process I see this: mach_msg_trap 16620 read 831 semaphore_wait_trap 831 An acceptable answer that doesn't fix this would include a url for a decent free Oracle client for the Mac. Edit: @Mark Harrison sadly this happens every time I start it up, it's not an old connection. I'd like to avoid running Windows on my laptop. I'm giving some plugins for my IDE a whirl, but still no solution for me. @Matthew Schinckel Navicat seems to only have a non-commercial Oracle product...I need a commercial-friendly one (even if it costs money).
ANSWER:
I get the same problem after there's been an active connection sitting idle for a while. I solve it by restarting sql developer every once in a while. I also have Toad for Oracle running on a vmware XP session, and it works great. If you don't mind the money, try that. | [
"java",
"oracle",
"macos"
] | 4 | 2 | 10,616 | 7 | 0 | 2008-08-15T23:15:27.470000 | 2008-08-16T02:10:30.613000 |
12,890 | 12,950 | What is a good way to denormalize a mysql database? | I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines. There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format. Any ideas on an approach? Should I start with a couple of my worst queries and go from there? | I know more about mssql that mysql, but I don't think the number of joins or number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any? http://dev.mysql.com/doc/refman/5.0/en/explain.html That being said, once you are satisifed with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data cubes. Here's a site I found that touches on the subject: http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application: select a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id where a.id=1 You could create a denormalized table and populate with almost the same query: create table tbl_ab (a_id, a_name, b_address); -- (types elided) Notice the underscores match the table aliases you use insert tbl_ab select a.id, a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id -- no where clause because you want everything Then to fix your app to use the new denormalized table, switch the dots for underscores. select a_name as name, b_address as address from tbl_ab where a_id = 1; For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have. Remember, I'm only advocating this as the last resort. I bet there's a few indexes that would help you. And when you de-normalize, don't forget to account for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never exactly be up to date. [Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects. | What is a good way to denormalize a mysql database? I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines. There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format. Any ideas on an approach? 
Should I start with a couple of my worst queries and go from there? | TITLE:
What is a good way to denormalize a mysql database?
QUESTION:
I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines. There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format. Any ideas on an approach? Should I start with a couple of my worst queries and go from there?
ANSWER:
I know more about mssql than mysql, but I don't think the number of joins or number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any? http://dev.mysql.com/doc/refman/5.0/en/explain.html That being said, once you are satisfied with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data cubes. Here's a site I found that touches on the subject: http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application: select a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id where a.id=1 You could create a denormalized table and populate it with almost the same query: create table tbl_ab (a_id, a_name, b_address); -- (types elided) Notice the underscores match the table aliases you use: insert tbl_ab select a.id, a.name, b.address from tbla a join tblb b on b.fk_a_id = a.id -- no where clause because you want everything Then to fix your app to use the new denormalized table, switch the dots for underscores. select a_name as name, b_address as address from tbl_ab where a_id = 1; For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have. Remember, I'm only advocating this as the last resort. I bet there's a few indexes that would help you. And when you de-normalize, don't forget to account for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never exactly be up to date. [Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects. | [
"mysql",
"database",
"denormalization"
] | 22 | 12 | 12,231 | 8 | 0 | 2008-08-15T23:36:30.207000 | 2008-08-16T01:36:31.040000 |
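Before building a table like tbl_ab, the answer suggests checking the query plan; a minimal sketch using the answer's own example tables (the index name is made up):

```sql
-- The join column should be indexed before concluding you need to denormalize.
CREATE INDEX idx_tblb_fk_a_id ON tblb (fk_a_id);

-- EXPLAIN shows whether MySQL can actually use that index for the join.
EXPLAIN
SELECT a.name, b.address
FROM tbla a
JOIN tblb b ON b.fk_a_id = a.id
WHERE a.id = 1;
```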
12,896 | 12,965 | parsing raw email in php | I'm looking for good/working/simple to use PHP code for parsing raw email into parts. I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart. And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the.so's to build right. If I do get the.so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI). Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a.qmail email redirect. I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved. | What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with RFC2822 to understand the format of the mail, but here's the simplest rules for well formed email: HEADERS\n \n BODY That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this: HSTRING:HTEXT HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace. The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that). So, in really simple, circa-1982 RFC822 terms, an email looks like this: HEADER: HEADER TEXT HEADER: MORE HEADER TEXT INCLUDING A LINE CONTINUATION HEADER: LAST HEADER
THIS IS ANY ARBITRARY DATA (FOR THE MOST PART) Most modern email is more complex than that though. Headers can be encoded for charsets or RFC2047 mime words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days to if you want them to be meaningful. Almost all email that's generated by an MUA will be MIME encoded. That might be uuencoded text, it might be html, it might be a uuencoded excel spreadsheet. I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction. | parsing raw email in php I'm looking for good/working/simple to use PHP code for parsing raw email into parts. I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart. And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the.so's to build right. If I do get the.so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI). Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a.qmail email redirect. I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved. | TITLE:
parsing raw email in php
QUESTION:
I'm looking for good/working/simple to use PHP code for parsing raw email into parts. I've written a couple of brute force solutions, but every time, one small change/header/space/something comes along and my whole parser fails and the project falls apart. And before I get pointed at PEAR/PECL, I need actual code. My host has some screwy config or something, I can never seem to get the .so's to build right. If I do get the .so made, some difference in path/environment/php.ini doesn't always make it available (apache vs cron vs CLI). Oh, and one last thing, I'm parsing the raw email text, NOT POP3, and NOT IMAP. It's being piped into the PHP script via a .qmail email redirect. I'm not expecting SOF to write it for me, I'm looking for some tips/starting points on doing it "right". This is one of those "wheel" problems that I know has already been solved.
ANSWER:
What are you hoping to end up with at the end? The body, the subject, the sender, an attachment? You should spend some time with RFC2822 to understand the format of the mail, but here's the simplest rules for well formed email: HEADERS\n \n BODY That is, the first blank line (double newline) is the separator between the HEADERS and the BODY. A HEADER looks like this: HSTRING:HTEXT HSTRING always starts at the beginning of a line and doesn't contain any white space or colons. HTEXT can contain a wide variety of text, including newlines as long as the newline char is followed by whitespace. The "BODY" is really just any data that follows the first double newline. (There are different rules if you are transmitting mail via SMTP, but processing it over a pipe you don't have to worry about that). So, in really simple, circa-1982 RFC822 terms, an email looks like this: HEADER: HEADER TEXT HEADER: MORE HEADER TEXT INCLUDING A LINE CONTINUATION HEADER: LAST HEADER
THIS IS ANY ARBITRARY DATA (FOR THE MOST PART) Most modern email is more complex than that though. Headers can be encoded for charsets or RFC2047 mime words, or a ton of other stuff I'm not thinking of right now. The bodies are really hard to roll your own code for these days, too, if you want them to be meaningful. Almost all email that's generated by an MUA will be MIME encoded. That might be uuencoded text, it might be html, it might be a uuencoded excel spreadsheet. I hope this helps provide a framework for understanding some of the very elemental buckets of email. If you provide more background on what you are trying to do with the data I (or someone else) might be able to provide better direction. | [
"php",
"email"
] | 35 | 24 | 61,692 | 15 | 0 | 2008-08-15T23:50:17.527000 | 2008-08-16T02:18:11.893000 |
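A bare-bones sketch of the header/body split described in the answer above: it unfolds continuation lines and stops at the first blank line, but ignores MIME, RFC 2047 decoding, and duplicate headers entirely. The function name is made up:

```php
<?php
// Split a raw RFC 2822 message into headers and body.
function parse_raw_email($raw)
{
    // Normalize line endings, then split on the first blank line.
    $raw = str_replace("\r\n", "\n", $raw);
    list($head, $body) = explode("\n\n", $raw, 2) + array('', '');

    // Unfold continuation lines (a newline followed by whitespace).
    $head = preg_replace("/\n[ \t]+/", ' ', $head);

    $headers = array();
    foreach (explode("\n", $head) as $line) {
        if (strpos($line, ':') !== false) {
            list($name, $value) = explode(':', $line, 2);
            $headers[strtolower(trim($name))] = trim($value);
        }
    }
    return array('headers' => $headers, 'body' => $body);
}

// Usage from a .qmail pipe: the message arrives on STDIN.
$msg = parse_raw_email(file_get_contents('php://stdin'));
echo $msg['headers']['subject'];
```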
12,905 | 31,828 | Creating Infopath 2007 addins that manipulate the design-time form | I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive. | There is no Object Model for the InfoPath designer. I believe the closest that you can get is the exposed API for the Visual Studio hosting that InfoPath supports; but I don't believe that this will give you the programatic control of the designer that you'd like. http://msdn.microsoft.com/en-us/library/aa813327.aspx#office2007infopathVSTO_InfoPathDesignerAPIIntegratingInfoPath2007VisualStudio Sorry Kevin. | Creating Infopath 2007 addins that manipulate the design-time form I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive. | TITLE:
Creating Infopath 2007 addins that manipulate the design-time form
QUESTION:
I'm experimenting with creating an add-in for Infopath 2007. The documentation is very skimpy. What I'm trying to determine is what kind of actions an add-in can take while designing a form. Most of the discussion and samples are for when the user is filling out the form. Can I, for example, add a new field to the form in the designer? Add a new item to the schema? Move a form field on the design surface? It doesn't appear so, but I can't find anything definitive.
ANSWER:
There is no Object Model for the InfoPath designer. I believe the closest that you can get is the exposed API for the Visual Studio hosting that InfoPath supports; but I don't believe that this will give you the programmatic control of the designer that you'd like. http://msdn.microsoft.com/en-us/library/aa813327.aspx#office2007infopathVSTO_InfoPathDesignerAPIIntegratingInfoPath2007VisualStudio Sorry Kevin. | [
"ms-office",
"infopath"
] | 2 | 0 | 609 | 2 | 0 | 2008-08-16T00:08:23.637000 | 2008-08-28T08:03:27.693000 |
12,906 | 12,915 | Find out complete SQL Server database size | I need to know how much space occupies all the databases inside an SQL Server 2000. I did some research but could not found any script to help me out. | Source: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1313431,00.html Works with SQL2000,2005,2008 USE master; GO
IF OBJECT_ID('dbo.sp_SDS', 'P') IS NOT NULL DROP PROCEDURE dbo.sp_SDS; GO
CREATE PROCEDURE dbo.sp_SDS @TargetDatabase sysname = NULL, -- NULL: all dbs @Level varchar(10) = 'Database', -- or "File" @UpdateUsage bit = 0, -- default no update @Unit char(2) = 'MB' -- Megabytes, Kilobytes or Gigabytes AS
/************************************************************************************************** ** ** author: Richard Ding ** date: 4/8/2008 ** usage: list db size AND path w/o SUMmary ** test code: sp_SDS -- default behavior ** sp_SDS 'maAster' ** sp_SDS NULL, NULL, 0 ** sp_SDS NULL, 'file', 1, 'GB' ** sp_SDS 'Test_snapshot', 'Database', 1 ** sp_SDS 'Test', 'File', 0, 'kb' ** sp_SDS 'pfaids', 'Database', 0, 'gb' ** sp_SDS 'tempdb', NULL, 1, 'kb' ** **************************************************************************************************/
SET NOCOUNT ON;
IF @TargetDatabase IS NOT NULL AND DB_ID(@TargetDatabase) IS NULL BEGIN RAISERROR(15010, -1, -1, @TargetDatabase); RETURN (-1) END
IF OBJECT_ID('tempdb.dbo.##Tbl_CombinedInfo', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_CombinedInfo;
IF OBJECT_ID('tempdb.dbo.##Tbl_DbFileStats', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_DbFileStats;
IF OBJECT_ID('tempdb.dbo.##Tbl_ValidDbs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_ValidDbs;
IF OBJECT_ID('tempdb.dbo.##Tbl_Logs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_Logs;
CREATE TABLE dbo.##Tbl_CombinedInfo ( DatabaseName sysname NULL, [type] VARCHAR(10) NULL, LogicalName sysname NULL, T dec(10, 2) NULL, U dec(10, 2) NULL, [U(%)] dec(5, 2) NULL, F dec(10, 2) NULL, [F(%)] dec(5, 2) NULL, PhysicalName sysname NULL );
CREATE TABLE dbo.##Tbl_DbFileStats ( Id int identity, DatabaseName sysname NULL, FileId int NULL, FileGroup int NULL, TotalExtents bigint NULL, UsedExtents bigint NULL, Name sysname NULL, FileName varchar(255) NULL );
CREATE TABLE dbo.##Tbl_ValidDbs ( Id int identity, Dbname sysname NULL );
CREATE TABLE dbo.##Tbl_Logs ( DatabaseName sysname NULL, LogSize dec (10, 2) NULL, LogSpaceUsedPercent dec (5, 2) NULL, Status int NULL );
DECLARE @Ver varchar(10), @DatabaseName sysname, @Ident_last int, @String varchar(2000), @BaseString varchar(2000);
SELECT @DatabaseName = '', @Ident_last = 0, @String = '', @Ver = CASE WHEN @@VERSION LIKE '%9.0%' THEN 'SQL 2005' WHEN @@VERSION LIKE '%8.0%' THEN 'SQL 2000' WHEN @@VERSION LIKE '%10.0%' THEN 'SQL 2008' END;
SELECT @BaseString = ' SELECT DB_NAME(), ' + CASE WHEN @Ver = 'SQL 2000' THEN 'CASE WHEN status & 0x40 = 0x40 THEN ''Log'' ELSE ''Data'' END' ELSE ' CASE type WHEN 0 THEN ''Data'' WHEN 1 THEN ''Log'' WHEN 4 THEN ''Full-text'' ELSE ''reserved'' END' END + ', name, ' + CASE WHEN @Ver = 'SQL 2000' THEN 'filename' ELSE 'physical_name' END + ', size*8.0/1024.0 FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'sysfiles' ELSE 'sys.database_files' END + ' WHERE ' + CASE WHEN @Ver = 'SQL 2000' THEN ' HAS_DBACCESS(DB_NAME()) = 1' ELSE 'state_desc = ''ONLINE''' END + '';
SELECT @String = 'INSERT INTO dbo.##Tbl_ValidDbs SELECT name FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'master.dbo.sysdatabases' WHEN @Ver IN ('SQL 2005', 'SQL 2008') THEN 'master.sys.databases' END + ' WHERE HAS_DBACCESS(name) = 1 ORDER BY name ASC'; EXEC (@String);
INSERT INTO dbo.##Tbl_Logs EXEC ('DBCC SQLPERF (LOGSPACE) WITH NO_INFOMSGS');
-- For data part IF @TargetDatabase IS NOT NULL BEGIN SELECT @DatabaseName = @TargetDatabase; IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName,'Status') = 'ONLINE' AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY' BEGIN SELECT @String = 'USE [' + @DatabaseName + '] DBCC UPDATEUSAGE (0)'; PRINT '*** ' + @String + ' *** '; EXEC (@String); PRINT ''; END
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName) EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS'); EXEC ('USE [' + @DatabaseName + '] ' + @String);
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName; END ELSE BEGIN WHILE 1 = 1 BEGIN SELECT TOP 1 @DatabaseName = Dbname FROM dbo.##Tbl_ValidDbs WHERE Dbname > @DatabaseName ORDER BY Dbname ASC; IF @@ROWCOUNT = 0 BREAK; IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName, 'Status') = 'ONLINE' AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY' BEGIN SELECT @String = 'DBCC UPDATEUSAGE (''' + @DatabaseName + ''') '; PRINT '*** ' + @String + '*** '; EXEC (@String); PRINT ''; END
SELECT @Ident_last = ISNULL(MAX(Id), 0) FROM dbo.##Tbl_DbFileStats;
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
EXEC ('USE [' + @DatabaseName + '] ' + @String);
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName) EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName WHERE Id BETWEEN @Ident_last + 1 AND @@IDENTITY; END END
-- set used size for data files, do not change total obtained from sys.database_files as it has for log files UPDATE dbo.##Tbl_CombinedInfo SET U = s.UsedExtents*8*8/1024.0 FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_DbFileStats s ON t.LogicalName = s.Name AND s.DatabaseName = t.DatabaseName;
-- set used size and % values for log files: UPDATE dbo.##Tbl_CombinedInfo SET [U(%)] = LogSpaceUsedPercent, U = T * LogSpaceUsedPercent/100.0 FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_Logs l ON l.DatabaseName = t.DatabaseName WHERE t.type = 'Log';
UPDATE dbo.##Tbl_CombinedInfo SET F = T - U, [U(%)] = U*100.0/T;
UPDATE dbo.##Tbl_CombinedInfo SET [F(%)] = F*100.0/T;
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'FILE' BEGIN IF @Unit = 'KB' UPDATE dbo.##Tbl_CombinedInfo SET T = T * 1024, U = U * 1024, F = F * 1024;
IF @Unit = 'GB' UPDATE dbo.##Tbl_CombinedInfo SET T = T / 1024, U = U / 1024, F = F / 1024;
SELECT DatabaseName AS 'Database', type AS 'Type', LogicalName, T AS 'Total', U AS 'Used', [U(%)] AS 'Used (%)', F AS 'Free', [F(%)] AS 'Free (%)', PhysicalName FROM dbo.##Tbl_CombinedInfo WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%') ORDER BY DatabaseName ASC, type ASC;
SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM', SUM (T) AS 'TOTAL', SUM (U) AS 'USED', SUM (F) AS 'FREE' FROM dbo.##Tbl_CombinedInfo; END
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'DATABASE' BEGIN DECLARE @Tbl_Final TABLE ( DatabaseName sysname NULL, TOTAL dec (10, 2), [=] char(1), used dec (10, 2), [used (%)] dec (5, 2), [+] char(1), free dec (10, 2), [free (%)] dec (5, 2), [==] char(2), Data dec (10, 2), Data_Used dec (10, 2), [Data_Used (%)] dec (5, 2), Data_Free dec (10, 2), [Data_Free (%)] dec (5, 2), [++] char(2), Log dec (10, 2), Log_Used dec (10, 2), [Log_Used (%)] dec (5, 2), Log_Free dec (10, 2), [Log_Free (%)] dec (5, 2) );
INSERT INTO @Tbl_Final SELECT x.DatabaseName, x.Data + y.Log AS 'TOTAL', '=' AS '=', x.Data_Used + y.Log_Used AS 'U', (x.Data_Used + y.Log_Used)*100.0 / (x.Data + y.Log) AS 'U(%)', '+' AS '+', x.Data_Free + y.Log_Free AS 'F', (x.Data_Free + y.Log_Free)*100.0 / (x.Data + y.Log) AS 'F(%)', '==' AS '==', x.Data, x.Data_Used, x.Data_Used*100/x.Data AS 'D_U(%)', x.Data_Free, x.Data_Free*100/x.Data AS 'D_F(%)', '++' AS '++', y.Log, y.Log_Used, y.Log_Used*100/y.Log AS 'L_U(%)', y.Log_Free, y.Log_Free*100/y.Log AS 'L_F(%)' FROM ( SELECT d.DatabaseName, SUM(d.T) AS 'Data', SUM(d.U) AS 'Data_Used', SUM(d.F) AS 'Data_Free' FROM dbo.##Tbl_CombinedInfo d WHERE d.type = 'Data' GROUP BY d.DatabaseName ) AS x JOIN ( SELECT l.DatabaseName, SUM(l.T) AS 'Log', SUM(l.U) AS 'Log_Used', SUM(l.F) AS 'Log_Free' FROM dbo.##Tbl_CombinedInfo l WHERE l.type = 'Log' GROUP BY l.DatabaseName ) AS y ON x.DatabaseName = y.DatabaseName;
IF @Unit = 'KB' UPDATE @Tbl_Final SET TOTAL = TOTAL * 1024, used = used * 1024, free = free * 1024, Data = Data * 1024, Data_Used = Data_Used * 1024, Data_Free = Data_Free * 1024, Log = Log * 1024, Log_Used = Log_Used * 1024, Log_Free = Log_Free * 1024;
IF @Unit = 'GB' UPDATE @Tbl_Final SET TOTAL = TOTAL / 1024, used = used / 1024, free = free / 1024, Data = Data / 1024, Data_Used = Data_Used / 1024, Data_Free = Data_Free / 1024, Log = Log / 1024, Log_Used = Log_Used / 1024, Log_Free = Log_Free / 1024;
DECLARE @GrantTotal dec(11, 2); SELECT @GrantTotal = SUM(TOTAL) FROM @Tbl_Final;
SELECT CONVERT(dec(10, 2), TOTAL*100.0/@GrantTotal) AS 'WEIGHT (%)', DatabaseName AS 'DATABASE', CONVERT(VARCHAR(12), used) + ' (' + CONVERT(VARCHAR(12), [used (%)]) + ' %)' AS 'USED (%)', [+], CONVERT(VARCHAR(12), free) + ' (' + CONVERT(VARCHAR(12), [free (%)]) + ' %)' AS 'FREE (%)', [=], TOTAL, [=], CONVERT(VARCHAR(12), Data) + ' (' + CONVERT(VARCHAR(12), Data_Used) + ', ' + CONVERT(VARCHAR(12), [Data_Used (%)]) + '%)' AS 'DATA (used, %)', [+], CONVERT(VARCHAR(12), Log) + ' (' + CONVERT(VARCHAR(12), Log_Used) + ', ' + CONVERT(VARCHAR(12), [Log_Used (%)]) + '%)' AS 'LOG (used, %)' FROM @Tbl_Final WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%') ORDER BY DatabaseName ASC;
IF @TargetDatabase IS NULL SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM', SUM (used) AS 'USED', SUM (free) AS 'FREE', SUM (TOTAL) AS 'TOTAL', SUM (Data) AS 'DATA', SUM (Log) AS 'LOG' FROM @Tbl_Final; END
RETURN (0)
GO | Find out complete SQL Server database size I need to know how much space occupies all the databases inside an SQL Server 2000. I did some research but could not found any script to help me out. | TITLE:
Find out complete SQL Server database size
QUESTION:
I need to know how much space all the databases inside an SQL Server 2000 instance occupy. I did some research but could not find any script to help me out.
ANSWER:
Source: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1313431,00.html Works with SQL2000,2005,2008 USE master; GO
IF OBJECT_ID('dbo.sp_SDS', 'P') IS NOT NULL DROP PROCEDURE dbo.sp_SDS; GO
CREATE PROCEDURE dbo.sp_SDS @TargetDatabase sysname = NULL, -- NULL: all dbs @Level varchar(10) = 'Database', -- or "File" @UpdateUsage bit = 0, -- default no update @Unit char(2) = 'MB' -- Megabytes, Kilobytes or Gigabytes AS
/************************************************************************************************** ** ** author: Richard Ding ** date: 4/8/2008 ** usage: list db size AND path w/o SUMmary ** test code: sp_SDS -- default behavior ** sp_SDS 'maAster' ** sp_SDS NULL, NULL, 0 ** sp_SDS NULL, 'file', 1, 'GB' ** sp_SDS 'Test_snapshot', 'Database', 1 ** sp_SDS 'Test', 'File', 0, 'kb' ** sp_SDS 'pfaids', 'Database', 0, 'gb' ** sp_SDS 'tempdb', NULL, 1, 'kb' ** **************************************************************************************************/
SET NOCOUNT ON;
IF @TargetDatabase IS NOT NULL AND DB_ID(@TargetDatabase) IS NULL BEGIN RAISERROR(15010, -1, -1, @TargetDatabase); RETURN (-1) END
IF OBJECT_ID('tempdb.dbo.##Tbl_CombinedInfo', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_CombinedInfo;
IF OBJECT_ID('tempdb.dbo.##Tbl_DbFileStats', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_DbFileStats;
IF OBJECT_ID('tempdb.dbo.##Tbl_ValidDbs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_ValidDbs;
IF OBJECT_ID('tempdb.dbo.##Tbl_Logs', 'U') IS NOT NULL DROP TABLE dbo.##Tbl_Logs;
CREATE TABLE dbo.##Tbl_CombinedInfo ( DatabaseName sysname NULL, [type] VARCHAR(10) NULL, LogicalName sysname NULL, T dec(10, 2) NULL, U dec(10, 2) NULL, [U(%)] dec(5, 2) NULL, F dec(10, 2) NULL, [F(%)] dec(5, 2) NULL, PhysicalName sysname NULL );
CREATE TABLE dbo.##Tbl_DbFileStats ( Id int identity, DatabaseName sysname NULL, FileId int NULL, FileGroup int NULL, TotalExtents bigint NULL, UsedExtents bigint NULL, Name sysname NULL, FileName varchar(255) NULL );
CREATE TABLE dbo.##Tbl_ValidDbs ( Id int identity, Dbname sysname NULL );
CREATE TABLE dbo.##Tbl_Logs ( DatabaseName sysname NULL, LogSize dec (10, 2) NULL, LogSpaceUsedPercent dec (5, 2) NULL, Status int NULL );
DECLARE @Ver varchar(10), @DatabaseName sysname, @Ident_last int, @String varchar(2000), @BaseString varchar(2000);
SELECT @DatabaseName = '', @Ident_last = 0, @String = '', @Ver = CASE WHEN @@VERSION LIKE '%9.0%' THEN 'SQL 2005' WHEN @@VERSION LIKE '%8.0%' THEN 'SQL 2000' WHEN @@VERSION LIKE '%10.0%' THEN 'SQL 2008' END;
SELECT @BaseString = ' SELECT DB_NAME(), ' + CASE WHEN @Ver = 'SQL 2000' THEN 'CASE WHEN status & 0x40 = 0x40 THEN ''Log'' ELSE ''Data'' END' ELSE ' CASE type WHEN 0 THEN ''Data'' WHEN 1 THEN ''Log'' WHEN 4 THEN ''Full-text'' ELSE ''reserved'' END' END + ', name, ' + CASE WHEN @Ver = 'SQL 2000' THEN 'filename' ELSE 'physical_name' END + ', size*8.0/1024.0 FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'sysfiles' ELSE 'sys.database_files' END + ' WHERE ' + CASE WHEN @Ver = 'SQL 2000' THEN ' HAS_DBACCESS(DB_NAME()) = 1' ELSE 'state_desc = ''ONLINE''' END + '';
SELECT @String = 'INSERT INTO dbo.##Tbl_ValidDbs SELECT name FROM ' + CASE WHEN @Ver = 'SQL 2000' THEN 'master.dbo.sysdatabases' WHEN @Ver IN ('SQL 2005', 'SQL 2008') THEN 'master.sys.databases' END + ' WHERE HAS_DBACCESS(name) = 1 ORDER BY name ASC'; EXEC (@String);
INSERT INTO dbo.##Tbl_Logs EXEC ('DBCC SQLPERF (LOGSPACE) WITH NO_INFOMSGS');
-- For data part IF @TargetDatabase IS NOT NULL BEGIN SELECT @DatabaseName = @TargetDatabase; IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName,'Status') = 'ONLINE' AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY' BEGIN SELECT @String = 'USE [' + @DatabaseName + '] DBCC UPDATEUSAGE (0)'; PRINT '*** ' + @String + ' *** '; EXEC (@String); PRINT ''; END
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName) EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS'); EXEC ('USE [' + @DatabaseName + '] ' + @String);
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName; END ELSE BEGIN WHILE 1 = 1 BEGIN SELECT TOP 1 @DatabaseName = Dbname FROM dbo.##Tbl_ValidDbs WHERE Dbname > @DatabaseName ORDER BY Dbname ASC; IF @@ROWCOUNT = 0 BREAK; IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName, 'Status') = 'ONLINE' AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY' BEGIN SELECT @String = 'DBCC UPDATEUSAGE (''' + @DatabaseName + ''') '; PRINT '*** ' + @String + '*** '; EXEC (@String); PRINT ''; END
SELECT @Ident_last = ISNULL(MAX(Id), 0) FROM dbo.##Tbl_DbFileStats;
SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
EXEC ('USE [' + @DatabaseName + '] ' + @String);
INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName) EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');
UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName WHERE Id BETWEEN @Ident_last + 1 AND @@IDENTITY; END END
-- set used size for data files, do not change total obtained from sys.database_files as it has for log files UPDATE dbo.##Tbl_CombinedInfo SET U = s.UsedExtents*8*8/1024.0 FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_DbFileStats s ON t.LogicalName = s.Name AND s.DatabaseName = t.DatabaseName;
-- set used size and % values for log files: UPDATE dbo.##Tbl_CombinedInfo SET [U(%)] = LogSpaceUsedPercent, U = T * LogSpaceUsedPercent/100.0 FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_Logs l ON l.DatabaseName = t.DatabaseName WHERE t.type = 'Log';
UPDATE dbo.##Tbl_CombinedInfo SET F = T - U, [U(%)] = U*100.0/T;
UPDATE dbo.##Tbl_CombinedInfo SET [F(%)] = F*100.0/T;
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'FILE' BEGIN IF @Unit = 'KB' UPDATE dbo.##Tbl_CombinedInfo SET T = T * 1024, U = U * 1024, F = F * 1024;
IF @Unit = 'GB' UPDATE dbo.##Tbl_CombinedInfo SET T = T / 1024, U = U / 1024, F = F / 1024;
SELECT DatabaseName AS 'Database', type AS 'Type', LogicalName, T AS 'Total', U AS 'Used', [U(%)] AS 'Used (%)', F AS 'Free', [F(%)] AS 'Free (%)', PhysicalName FROM dbo.##Tbl_CombinedInfo WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%') ORDER BY DatabaseName ASC, type ASC;
SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM', SUM (T) AS 'TOTAL', SUM (U) AS 'USED', SUM (F) AS 'FREE' FROM dbo.##Tbl_CombinedInfo; END
IF UPPER(ISNULL(@Level, 'DATABASE')) = 'DATABASE' BEGIN DECLARE @Tbl_Final TABLE ( DatabaseName sysname NULL, TOTAL dec (10, 2), [=] char(1), used dec (10, 2), [used (%)] dec (5, 2), [+] char(1), free dec (10, 2), [free (%)] dec (5, 2), [==] char(2), Data dec (10, 2), Data_Used dec (10, 2), [Data_Used (%)] dec (5, 2), Data_Free dec (10, 2), [Data_Free (%)] dec (5, 2), [++] char(2), Log dec (10, 2), Log_Used dec (10, 2), [Log_Used (%)] dec (5, 2), Log_Free dec (10, 2), [Log_Free (%)] dec (5, 2) );
INSERT INTO @Tbl_Final SELECT x.DatabaseName, x.Data + y.Log AS 'TOTAL', '=' AS '=', x.Data_Used + y.Log_Used AS 'U', (x.Data_Used + y.Log_Used)*100.0 / (x.Data + y.Log) AS 'U(%)', '+' AS '+', x.Data_Free + y.Log_Free AS 'F', (x.Data_Free + y.Log_Free)*100.0 / (x.Data + y.Log) AS 'F(%)', '==' AS '==', x.Data, x.Data_Used, x.Data_Used*100/x.Data AS 'D_U(%)', x.Data_Free, x.Data_Free*100/x.Data AS 'D_F(%)', '++' AS '++', y.Log, y.Log_Used, y.Log_Used*100/y.Log AS 'L_U(%)', y.Log_Free, y.Log_Free*100/y.Log AS 'L_F(%)' FROM ( SELECT d.DatabaseName, SUM(d.T) AS 'Data', SUM(d.U) AS 'Data_Used', SUM(d.F) AS 'Data_Free' FROM dbo.##Tbl_CombinedInfo d WHERE d.type = 'Data' GROUP BY d.DatabaseName ) AS x JOIN ( SELECT l.DatabaseName, SUM(l.T) AS 'Log', SUM(l.U) AS 'Log_Used', SUM(l.F) AS 'Log_Free' FROM dbo.##Tbl_CombinedInfo l WHERE l.type = 'Log' GROUP BY l.DatabaseName ) AS y ON x.DatabaseName = y.DatabaseName;
IF @Unit = 'KB' UPDATE @Tbl_Final SET TOTAL = TOTAL * 1024, used = used * 1024, free = free * 1024, Data = Data * 1024, Data_Used = Data_Used * 1024, Data_Free = Data_Free * 1024, Log = Log * 1024, Log_Used = Log_Used * 1024, Log_Free = Log_Free * 1024;
IF @Unit = 'GB' UPDATE @Tbl_Final SET TOTAL = TOTAL / 1024, used = used / 1024, free = free / 1024, Data = Data / 1024, Data_Used = Data_Used / 1024, Data_Free = Data_Free / 1024, Log = Log / 1024, Log_Used = Log_Used / 1024, Log_Free = Log_Free / 1024;
DECLARE @GrantTotal dec(11, 2); SELECT @GrantTotal = SUM(TOTAL) FROM @Tbl_Final;
SELECT CONVERT(dec(10, 2), TOTAL*100.0/@GrantTotal) AS 'WEIGHT (%)', DatabaseName AS 'DATABASE', CONVERT(VARCHAR(12), used) + ' (' + CONVERT(VARCHAR(12), [used (%)]) + ' %)' AS 'USED (%)', [+], CONVERT(VARCHAR(12), free) + ' (' + CONVERT(VARCHAR(12), [free (%)]) + ' %)' AS 'FREE (%)', [=], TOTAL, [=], CONVERT(VARCHAR(12), Data) + ' (' + CONVERT(VARCHAR(12), Data_Used) + ', ' + CONVERT(VARCHAR(12), [Data_Used (%)]) + '%)' AS 'DATA (used, %)', [+], CONVERT(VARCHAR(12), Log) + ' (' + CONVERT(VARCHAR(12), Log_Used) + ', ' + CONVERT(VARCHAR(12), [Log_Used (%)]) + '%)' AS 'LOG (used, %)' FROM @Tbl_Final WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%') ORDER BY DatabaseName ASC;
IF @TargetDatabase IS NULL SELECT CASE WHEN @Unit = 'GB' THEN 'GB' WHEN @Unit = 'KB' THEN 'KB' ELSE 'MB' END AS 'SUM', SUM (used) AS 'USED', SUM (free) AS 'FREE', SUM (TOTAL) AS 'TOTAL', SUM (Data) AS 'DATA', SUM (Log) AS 'LOG' FROM @Tbl_Final; END
RETURN (0)
GO | [
"sql-server"
] | 8 | 6 | 10,959 | 3 | 0 | 2008-08-16T00:10:24.443000 | 2008-08-16T00:19:14.863000 |
12,927 | 12,961 | If you have a Java application that is consuming CPU when it isn't doing anything, how do you determine what it is doing? | I am calling a vendor's Java API, and on some servers it appears that the JVM goes into a low priority polling loop after logging into the API (CPU at 100% usage). The same app on other servers does not exhibit this behavior. This happens on WebSphere and Tomcat. The environment is tricky to set up so it is difficult to try to do something like profiling within Eclipse. Is there a way to profile (or some other method of inspecting) an existing Java app running in Tomcat to find out what methods are being executed while it's in this spinwait kind of state? The app is only executing one method when it gets in this state (vendor's method). Vendor can't replicate the behavior (of course). Update: Using JConsole I was able to determine who was running and what they were doing. It took me a few hours to then figure out why it was doing it. The problem ended up being that the vendor's API jar that was being used did not match exactly to the database configuration that it was using. 
It was defaulting to having tracing and performance monitoring enabled on the servers that had the slight mis-match in configuration. I used a different jar and all is well. So thanks, Joshua, for your answer. JConsole was extremely easy to setup and use to monitor an existing application. @Cringe - I did some experimenting with some of the options you suggested. I had some problems with getting JProfiler set up, it looks good (but pricey). Going forward I went ahead and added the Eclipse Profiler plugin and I'll be looking over the different open source profilers to compare functionality. | TITLE:
If you have a Java application that is consuming CPU when it isn't doing anything, how do you determine what it is doing?
QUESTION:
I am calling a vendor's Java API, and on some servers it appears that the JVM goes into a low priority polling loop after logging into the API (CPU at 100% usage). The same app on other servers does not exhibit this behavior. This happens on WebSphere and Tomcat. The environment is tricky to set up so it is difficult to try to do something like profiling within Eclipse. Is there a way to profile (or some other method of inspecting) an existing Java app running in Tomcat to find out what methods are being executed while it's in this spinwait kind of state? The app is only executing one method when it gets in this state (vendor's method). Vendor can't replicate the behavior (of course). Update: Using JConsole I was able to determine who was running and what they were doing. It took me a few hours to then figure out why it was doing it. The problem ended up being that the vendor's API jar that was being used did not match exactly to the database configuration that it was using. It was defaulting to having tracing and performance monitoring enabled on the servers that had the slight mis-match in configuration. I used a different jar and all is well. So thanks, Joshua, for your answer. JConsole was extremely easy to set up and use to monitor an existing application. @Cringe - I did some experimenting with some of the options you suggested. I had some problems with getting JProfiler set up, it looks good (but pricey). Going forward I went ahead and added the Eclipse Profiler plugin and I'll be looking over the different open source profilers to compare functionality.
ANSWER:
If you are using Java 5 or later, you can connect to your application using jconsole to view all running threads. jstack will also do a stack dump. I think this should still work even inside a container like Tomcat. Both of these tools are included with JDK5 and later (I assume the process needs to be at least Java 5, though I could be wrong). Update: It's also worth noting that starting with JDK 1.6 update 7 there is now a bundled profiler called VisualVM which can be launched with 'jvisualvm'. It looks like it is a java.net project, so additional info may be available at that page. I haven't used this yet but it looks useful for more serious analysis. Hope that helps. | [
"java",
"profiling"
] | 16 | 18 | 5,664 | 8 | 0 | 2008-08-16T00:45:38.210000 | 2008-08-16T02:12:04.090000 |
12,929 | 12,945 | Is a Homogeneous development platform good for the industry? | Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be). | Defacto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete. The web is a perfect example. When IE won the browser war, it stagnated for years, and is only just now starting to improve because it's hemorrhaging marketshare. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a Black Art of hacks and work-arounds to get websites to render consistently. My job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors. | Is a Homogeneous development platform good for the industry? Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be). | TITLE:
Is a Homogeneous development platform good for the industry?
QUESTION:
Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be).
ANSWER:
De facto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete. The web is a perfect example. When IE won the browser war, it stagnated for years, and is only just now starting to improve because it's hemorrhaging market share. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a black art of hacks and work-arounds to get websites to render consistently. My job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors. | [
"cross-platform"
] | 5 | 11 | 511 | 4 | 0 | 2008-08-16T00:46:54.167000 | 2008-08-16T01:16:51.750000 |
12,936 | 16,749 | Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster? | Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these? | Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode MMCache will cache the scripts in memory and reuse the precompiled code. I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results: Zend Optimizer alone didn't help at all. Actually my scripts were slower than without optimizer. When it comes to caches: * fastest: eAccelerator * XCache * APC And: You DO want to install a opcode cache! For example: alt text http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png This is the duration it took to call the wordpress homepage 10.000 times. Edit: BTW, eAccelerator contains an optimizer itself. | Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster? Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these? | TITLE:
Is using PHP accelerators such as MMCache or Zend Accelerator making PHP faster?
QUESTION:
Does anybody have experience working with PHP accelerators such as MMCache or Zend Accelerator? I'd like to know if using either of these makes PHP comparable to faster web-technologies. Also, are there trade offs for using these?
ANSWER:
Note that Zend Optimizer and MMCache (or similar applications) are totally different things. While Zend Optimizer tries to optimize the program opcode, MMCache will cache the scripts in memory and reuse the precompiled code. I did some benchmarks some time ago and you can find the results in my blog (in German though). The basic results: Zend Optimizer alone didn't help at all. Actually my scripts were slower than without the optimizer. When it comes to caches:
* fastest: eAccelerator
* XCache
* APC
And: You DO want to install an opcode cache! For example (benchmark chart: http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png), this is the duration it took to call the WordPress homepage 10,000 times. Edit: BTW, eAccelerator contains an optimizer itself. | [
"php",
"caching",
"zend-optimizer"
] | 17 | 13 | 3,003 | 10 | 0 | 2008-08-16T00:55:53.010000 | 2008-08-19T19:03:26.540000 |
12,946 | 12,979 | Database replication. 2 servers, Master database and the 2nd is read-only | Say you have 2 database servers, one database is the 'master' database where all write operations are performed, it is treated as the 'real/original' database. The other server's database is to be a mirror copy of the master database (slave?), which will be used for read only operations for a certain part of the application. How do you go about setting up a slave database that mirrors the data on the master database? From what I understand, the slave/readonly database is to use the master db's transaction log file to mirror the data correct? What options do I have in terms of how often the slave db mirrors the data? (real time/every x minutes?). | What you want is called Transactional Replication in SQL Server 2005. It will replicate changes in near real time as the publisher (i.e. "master") database is updated. Here is a pretty good walk through of how to set it up. | Database replication. 2 servers, Master database and the 2nd is read-only Say you have 2 database servers, one database is the 'master' database where all write operations are performed, it is treated as the 'real/original' database. The other server's database is to be a mirror copy of the master database (slave?), which will be used for read only operations for a certain part of the application. How do you go about setting up a slave database that mirrors the data on the master database? From what I understand, the slave/readonly database is to use the master db's transaction log file to mirror the data correct? What options do I have in terms of how often the slave db mirrors the data? (real time/every x minutes?). | TITLE:
Database replication. 2 servers, Master database and the 2nd is read-only
QUESTION:
Say you have 2 database servers, one database is the 'master' database where all write operations are performed, it is treated as the 'real/original' database. The other server's database is to be a mirror copy of the master database (slave?), which will be used for read only operations for a certain part of the application. How do you go about setting up a slave database that mirrors the data on the master database? From what I understand, the slave/readonly database is to use the master db's transaction log file to mirror the data correct? What options do I have in terms of how often the slave db mirrors the data? (real time/every x minutes?).
ANSWER:
What you want is called Transactional Replication in SQL Server 2005. It will replicate changes in near real time as the publisher (i.e. "master") database is updated. Here is a pretty good walk through of how to set it up. | [
"sql-server",
"replication"
] | 8 | 9 | 20,835 | 5 | 0 | 2008-08-16T01:18:46.007000 | 2008-08-16T03:31:52.867000 |
12,982 | 19,726 | What is your preferred method of sending complex data over a web service? | It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are: Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.) Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them. By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!" Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been. Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet. | there is also an argument for separating the tiers - have a set of serializable objects that get passed to and from the web service and a translator to map and convert between that set and the business objects (which might have properties not suitable for passing over the wire) Its the approach favoured by the web service software factory service factory and means that you can change your business objects without breaking the web service interface/contract | What is your preferred method of sending complex data over a web service? It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are: Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.) Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. 
They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them. By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!" Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been. Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet. | TITLE:
What is your preferred method of sending complex data over a web service?
QUESTION:
It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are: Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.) Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them. By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!" Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been. Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet.
ANSWER:
There is also an argument for separating the tiers - have a set of serializable objects that get passed to and from the web service, and a translator to map and convert between that set and the business objects (which might have properties not suitable for passing over the wire). It's the approach favoured by the Web Service Software Factory ("Service Factory"), and it means that you can change your business objects without breaking the web service interface/contract. (A minimal sketch of this layering follows this entry.) | [
".net",
"web-services",
"soap",
"wsdl"
] | 9 | 1 | 1,173 | 4 | 0 | 2008-08-16T03:58:24.687000 | 2008-08-21T12:31:34.190000 |
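A minimal C# sketch of the layering described in the answer above: a dumb serializable transfer type plus a hand-written translator that maps it to and from a richer business object. The type names (CustomerDto, Customer, CustomerTranslator) are hypothetical inventions for illustration, not part of any framework.

using System;

// Contract type that crosses the wire: data only, no behavior.
[Serializable]
public class CustomerDto
{
    public int Id;
    public string Name;
}

// Rich domain type whose behavior and invariants stay on the server.
public class Customer
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    public Customer(int id, string name)
    {
        if (string.IsNullOrEmpty(name)) throw new ArgumentException("name must be set", "name");
        Id = id;
        Name = name;
    }
}

// The translator keeps the mapping in one place, so the domain type can
// change without breaking the published service contract.
public static class CustomerTranslator
{
    public static CustomerDto ToDto(Customer c)
    {
        return new CustomerDto { Id = c.Id, Name = c.Name };
    }

    public static Customer FromDto(CustomerDto dto)
    {
        return new Customer(dto.Id, dto.Name);
    }
}

The web method's signature then mentions only CustomerDto, and the translator is called at the service boundary in both directions.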
13,000 | 13,003 | How to setup site-wide variables in php? | I want to define something like this in php: $EL = "\n \n"; and then use that variable as an "endline" marker all over my site, like this: echo "Blah blah blah{$EL}"; How do I define $EL once (in only 1 file), include it on every page on my site, and not have to reference it using the (strangely backwards) global $EL; statement in every page function? | Most PHP sites should have a file (I call it a header) that you include on every single page of the site. If you put that first line of code in the header file, then include it like this on every page: include 'header.php'; you won't have to use the global keyword or anything, the second line of code you wrote should work. Edit: Oh sorry, that won't work inside functions... now I see your problem. Edit #2: Ok, take my original advice with the header, but use a define() rather than a variable. Those work inside functions after being included. | How to setup site-wide variables in php? I want to define something like this in php: $EL = "\n \n"; and then use that variable as an "endline" marker all over my site, like this: echo "Blah blah blah{$EL}"; How do I define $EL once (in only 1 file), include it on every page on my site, and not have to reference it using the (strangely backwards) global $EL; statement in every page function? | TITLE:
How to setup site-wide variables in php?
QUESTION:
I want to define something like this in php: $EL = "\n \n"; and then use that variable as an "endline" marker all over my site, like this: echo "Blah blah blah{$EL}"; How do I define $EL once (in only 1 file), include it on every page on my site, and not have to reference it using the (strangely backwards) global $EL; statement in every page function?
ANSWER:
Most PHP sites should have a file (I call it a header) that you include on every single page of the site. If you put that first line of code in the header file, then include it like this on every page: include 'header.php'; you won't have to use the global keyword or anything, the second line of code you wrote should work. Edit: Oh sorry, that won't work inside functions... now I see your problem. Edit #2: Ok, take my original advice with the header, but use a define() rather than a variable. Those work inside functions after being included. | [
"php"
] | 8 | 7 | 7,858 | 11 | 0 | 2008-08-16T05:05:09.683000 | 2008-08-16T05:23:09.677000 |
13,021 | 13,193 | Is there a "concise" way to do namespacing in JavaScript? | I've frequently encountered sites that put all of their JavaScript inside a namespace structure along the lines of: namespaces = { com: { example: { example.com's data} } However, setting this up safely with respect to other namespaced frameworks seems to require a relatively hefty amount of code (defined as > 2 lines). I was wondering whether anyone knows of a concise way to do this? Furthermore, whether there's a relatively standard/consistent way to structure it? For example, is the com namespace directly attached to the global object, or is it attached through a namespace object? [Edit: whoops, obviously {com = {... } } wouldn't accomplish anything close to what I intended, thanks to Shog9 for pointing that out.] | Javascript doesn't have stand-alone namespaces. It has functions, which can provide scope for resolving names, and objects, which can contribute to the named data accessible in a given scope. Here's your example, corrected: var namespaces = { com: { example: { /* example.com's data */ } } } This is a variable namespaces being assigned an object literal. The object contains one property: com, an object with one property: example, an object which presumably would contain something interesting. So, you can type something like namespaces.com.example. somePropertyOrFunctionOnExample and it'll all work. Of course, it's also ridiculous. You don't have a hierarchical namespace, you have an object containing an object containing an object with the stuff you actually care about. var com_example_data = { /* example.com's data */ }; That works just as well, without the pointless hierarchy. Now, if you actually want to build a hierarchy, you can try something like this: com_example = com_example || {}; com_example.flags = com_example.flags || { active: false, restricted: true};
com_example.ops = com_example.ops || (function() { var launchCodes = "38925491753824"; // hidden / private return { activate: function() { /*... */ }, destroyTheWorld: function() { /*... */ } }; })();...which is, IMHO, reasonably concise. | Is there a "concise" way to do namespacing in JavaScript? I've frequently encountered sites that put all of their JavaScript inside a namespace structure along the lines of: namespaces = { com: { example: { example.com's data} } However, setting this up safely with respect to other namespaced frameworks seems to require a relatively hefty amount of code (defined as > 2 lines). I was wondering whether anyone knows of a concise way to do this? Furthermore, whether there's a relatively standard/consistent way to structure it? For example, is the com namespace directly attached to the global object, or is it attached through a namespace object? [Edit: whoops, obviously {com = {... } } wouldn't accomplish anything close to what I intended, thanks to Shog9 for pointing that out.] | TITLE:
Is there a "concise" way to do namespacing in JavaScript?
QUESTION:
I've frequently encountered sites that put all of their JavaScript inside a namespace structure along the lines of: namespaces = { com: { example: { example.com's data} } However, setting this up safely with respect to other namespaced frameworks seems to require a relatively hefty amount of code (defined as > 2 lines). I was wondering whether anyone knows of a concise way to do this? Furthermore, whether there's a relatively standard/consistent way to structure it? For example, is the com namespace directly attached to the global object, or is it attached through a namespace object? [Edit: whoops, obviously {com = {... } } wouldn't accomplish anything close to what I intended, thanks to Shog9 for pointing that out.]
ANSWER:
Javascript doesn't have stand-alone namespaces. It has functions, which can provide scope for resolving names, and objects, which can contribute to the named data accessible in a given scope. Here's your example, corrected:
var namespaces = { com: { example: { /* example.com's data */ } } }
This is a variable namespaces being assigned an object literal. The object contains one property: com, an object with one property: example, an object which presumably would contain something interesting. So, you can type something like namespaces.com.example.somePropertyOrFunctionOnExample and it'll all work. Of course, it's also ridiculous. You don't have a hierarchical namespace, you have an object containing an object containing an object with the stuff you actually care about.
var com_example_data = { /* example.com's data */ };
That works just as well, without the pointless hierarchy. Now, if you actually want to build a hierarchy, you can try something like this:
com_example = com_example || {};
com_example.flags = com_example.flags || { active: false, restricted: true};
com_example.ops = com_example.ops || (function() {
var launchCodes = "38925491753824"; // hidden / private
return { activate: function() { /*... */ }, destroyTheWorld: function() { /*... */ } }; })();
...which is, IMHO, reasonably concise. | [
"javascript",
"namespaces"
] | 15 | 19 | 3,550 | 8 | 0 | 2008-08-16T05:57:37.487000 | 2008-08-16T16:04:18.260000 |
13,049 | 13,275 | What's the difference between struct and class in .NET? | What's the difference between struct and class in.NET? | In.NET, there are two categories of types, reference types and value types. Structs are value types and classes are reference types. The general difference is that a reference type lives on the heap, and a value type lives inline, that is, wherever it is your variable or field is defined. A variable containing a value type contains the entire value type value. For a struct, that means that the variable contains the entire struct, with all its fields. A variable containing a reference type contains a pointer, or a reference to somewhere else in memory where the actual value resides. This has one benefit, to begin with: value types always contains a value reference types can contain a null -reference, meaning that they don't refer to anything at all at the moment Internally, reference type s are implemented as pointers, and knowing that, and knowing how variable assignment works, there are other behavioral patterns: copying the contents of a value type variable into another variable, copies the entire contents into the new variable, making the two distinct. In other words, after the copy, changes to one won't affect the other copying the contents of a reference type variable into another variable, copies the reference, which means you now have two references to the same somewhere else storage of the actual data. In other words, after the copy, changing the data in one reference will appear to affect the other as well, but only because you're really just looking at the same data both places When you declare variables or fields, here's how the two types differ: variable: value type lives on the stack, reference type lives on the stack as a pointer to somewhere in heap memory where the actual memory lives (though note Eric Lipperts article series: The Stack Is An Implementation Detail.) class/struct-field: value type lives completely inside the type, reference type lives inside the type as a pointer to somewhere in heap memory where the actual memory lives. | What's the difference between struct and class in .NET? What's the difference between struct and class in.NET? | TITLE:
What's the difference between struct and class in .NET?
QUESTION:
What's the difference between struct and class in .NET?
ANSWER:
In .NET, there are two categories of types, reference types and value types. Structs are value types and classes are reference types. The general difference is that a reference type lives on the heap, and a value type lives inline, that is, wherever your variable or field is defined. A variable containing a value type contains the entire value type value. For a struct, that means that the variable contains the entire struct, with all its fields. A variable containing a reference type contains a pointer, or a reference to somewhere else in memory where the actual value resides. This has one benefit, to begin with:
* value types always contain a value
* reference types can contain a null-reference, meaning that they don't refer to anything at all at the moment
Internally, reference types are implemented as pointers, and knowing that, and knowing how variable assignment works, there are other behavioral patterns:
* copying the contents of a value type variable into another variable copies the entire contents into the new variable, making the two distinct. In other words, after the copy, changes to one won't affect the other
* copying the contents of a reference type variable into another variable copies the reference, which means you now have two references to the same somewhere-else storage of the actual data. In other words, after the copy, changing the data in one reference will appear to affect the other as well, but only because you're really just looking at the same data in both places
When you declare variables or fields, here's how the two types differ:
* variable: value type lives on the stack, reference type lives on the stack as a pointer to somewhere in heap memory where the actual memory lives (though note Eric Lippert's article series: The Stack Is An Implementation Detail.)
* class/struct-field: value type lives completely inside the type, reference type lives inside the type as a pointer to somewhere in heap memory where the actual memory lives.
(A short C# demonstration follows this entry.) | [
".net",
"class",
"struct",
"value-type",
"reference-type"
] | 901 | 1,271 | 523,683 | 19 | 0 | 2008-08-16T08:21:47.947000 | 2008-08-16T18:41:38.037000 |
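To make the copy-versus-reference behavior above concrete, here is a small self-contained C# program; the two Point types are made up purely for the demonstration.

using System;

struct PointStruct { public int X; }  // value type
class PointClass { public int X; }    // reference type

class Demo
{
    static void Main()
    {
        PointStruct s1 = new PointStruct { X = 1 };
        PointStruct s2 = s1;      // copies the whole struct
        s2.X = 99;
        Console.WriteLine(s1.X);  // prints 1 - s1 is unaffected

        PointClass c1 = new PointClass { X = 1 };
        PointClass c2 = c1;       // copies only the reference
        c2.X = 99;
        Console.WriteLine(c1.X);  // prints 99 - both variables refer to the same object

        PointClass c3 = null;     // legal: a reference type can refer to nothing
        // PointStruct s3 = null; // does not compile: a value type always contains a value
    }
}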
13,055 | 25,324 | What is boxing and unboxing and what are the trade offs? | I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. | Boxed values are data structures that are minimal wrappers around primitive types *. Boxed values are typically stored as pointers to objects on the heap. Thus, boxed values use more memory and take at minimum two memory lookups to access: once to get the pointer, and another to follow that pointer to the primitive. Obviously this isn't the kind of thing you want in your inner loops. On the other hand, boxed values typically play better with other types in the system. Since they are first-class data structures in the language, they have the expected metadata and structure that other data structures have. In Java and Haskell generic collections can't contain unboxed values. Generic collections in.NET can hold unboxed values with no penalties. Where Java's generics are only used for compile-time type checking,.NET will generate specific classes for each generic type instantiated at run time. Java and Haskell have unboxed arrays, but they're distinctly less convenient than the other collections. However, when peak performance is needed it's worth a little inconvenience to avoid the overhead of boxing and unboxing. * For this discussion, a primitive value is any that can be stored on the call stack, rather than stored as a pointer to a value on the heap. Frequently that's just the machine types (ints, floats, etc), structs, and sometimes static sized arrays..NET-land calls them value types (as opposed to reference types). Java folks call them primitive types. Haskellions just call them unboxed. ** I'm also focusing on Java, Haskell, and C# in this answer, because that's what I know. For what it's worth, Python, Ruby, and Javascript all have exclusively boxed values. This is also known as the "Everything is an object" approach***. *** Caveat: A sufficiently advanced compiler / JIT can in some cases actually detect that a value which is semantically boxed when looking at the source, can safely be an unboxed value at runtime. In essence, thanks to brilliant language implementors your boxes are sometimes free. | What is boxing and unboxing and what are the trade offs? I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. | TITLE:
What is boxing and unboxing and what are the trade offs?
QUESTION:
I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome.
ANSWER:
Boxed values are data structures that are minimal wrappers around primitive types*. Boxed values are typically stored as pointers to objects on the heap. Thus, boxed values use more memory and take at minimum two memory lookups to access: once to get the pointer, and another to follow that pointer to the primitive. Obviously this isn't the kind of thing you want in your inner loops. On the other hand, boxed values typically play better with other types in the system. Since they are first-class data structures in the language, they have the expected metadata and structure that other data structures have. In Java and Haskell generic collections can't contain unboxed values. Generic collections in .NET can hold unboxed values with no penalties. Where Java's generics are only used for compile-time type checking, .NET will generate specific classes for each generic type instantiated at run time. Java and Haskell have unboxed arrays, but they're distinctly less convenient than the other collections. However, when peak performance is needed it's worth a little inconvenience to avoid the overhead of boxing and unboxing.
* For this discussion, a primitive value is any that can be stored on the call stack, rather than stored as a pointer to a value on the heap. Frequently that's just the machine types (ints, floats, etc), structs, and sometimes static sized arrays. .NET-land calls them value types (as opposed to reference types). Java folks call them primitive types. Haskellions just call them unboxed.
** I'm also focusing on Java, Haskell, and C# in this answer, because that's what I know. For what it's worth, Python, Ruby, and Javascript all have exclusively boxed values. This is also known as the "Everything is an object" approach***.
*** Caveat: A sufficiently advanced compiler / JIT can in some cases actually detect that a value which is semantically boxed when looking at the source, can safely be an unboxed value at runtime. In essence, thanks to brilliant language implementors your boxes are sometimes free.
(A minimal C# illustration follows this entry.) | [
"language-agnostic",
"boxing",
"glossary",
"unboxing"
] | 153 | 215 | 63,825 | 9 | 0 | 2008-08-16T08:34:25.143000 | 2008-08-24T20:35:12.520000 |
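As a concrete illustration of the trade-off described in the answer above, this minimal C# fragment shows a box being allocated and the value being copied back out (C# is used here because it makes both conversions explicit):

using System;

class BoxingDemo
{
    static void Main()
    {
        int n = 42;          // value type, stored inline
        object boxed = n;    // boxing: allocates a heap wrapper and copies the value in
        int m = (int)boxed;  // unboxing: explicit cast copies the value back out

        // Unboxing to the wrong type fails at runtime:
        // long bad = (long)boxed; // throws InvalidCastException - the box holds an int

        Console.WriteLine(m); // 42
    }
}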
13,060 | 13,100 | What do ref, val and out mean on method parameters? | I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. This also applies to VB.Net, but the keywords are different - ByRef and ByVal. | By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified. void Test1(string param) { param = "new value"; }
string s1 = "initial value"; Test1(s1); // s1 == "initial value"
Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller. Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.
void Test2(ref string param) { param = "new value"; }
void Test3(out string param) { // Use of param here will not compile
param = "another value"; }
string s2 = "initial value"; string s3;
Test2(ref s2); // s2 == "new value"
// Test2(ref s3); // Passing ref s3 will not compile
Test3(out s2); // s2 == "another value"
Test3(out s3); // s3 == "another value"
Edit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR. | What do ref, val and out mean on method parameters? I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. This also applies to VB.Net, but the keywords are different - ByRef and ByVal. | TITLE:
What do ref, val and out mean on method parameters?
QUESTION:
I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. This also applies to VB.Net, but the keywords are different - ByRef and ByVal.
ANSWER:
By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified.
void Test1(string param) { param = "new value"; }
string s1 = "initial value"; Test1(s1); // s1 == "initial value"
Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller. Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.
void Test2(ref string param) { param = "new value"; }
void Test3(out string param) { // Use of param here will not compile
param = "another value"; }
string s2 = "initial value"; string s3;
Test2(ref s2); // s2 == "new value"
// Test2(ref s3); // Passing ref s3 will not compile
Test3(out s2); // s2 == "another value"
Test3(out s3); // s3 == "another value"
Edit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR. (A real-world usage example follows this entry.) | [
"c#",
".net",
"vb.net"
] | 22 | 27 | 13,914 | 4 | 0 | 2008-08-16T08:46:16.633000 | 2008-08-16T12:06:17.337000 |
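As a real-world use of the out pattern described in the answer above, the BCL's int.TryParse reports success through its return value and delivers the parsed number through an out parameter; a minimal usage sketch:

using System;

class TryParseDemo
{
    static void Main()
    {
        int value; // may be left uninitialised because it is passed as out
        if (int.TryParse("123", out value))
            Console.WriteLine(value);        // 123
        else
            Console.WriteLine("not a number");
    }
}

This shape avoids using exceptions for control flow, which is exactly the situation where out parameters shine.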
13,086 | 13,098 | Delphi resources for existing .NET developer | Can anyone recommend some decent resources for a.NET developer who wishes to get a high level overview of the Delphi Programming Language? We are about acquire a small business whose main product is developed in Delphi and I am wanting to build up enough knowledge to be able to talk the talk with them. Books, websites etc all appreciated. Thanks. | DelphiBasics gives a good overview of basic syntax, library functions etc. Essential Delphi is a free e-book by Marco Cantu that should give a good overview, also of the VCL Feel free to ask around here as well, or in the Delphi newsgroups, if you encounter specific issues:) [edit] @Martin: There's a free "Turbo" edition available at the Codegear/Embarcadero website. I guess it has some limitations, so you could also try downloading the trial version. | Delphi resources for existing .NET developer Can anyone recommend some decent resources for a.NET developer who wishes to get a high level overview of the Delphi Programming Language? We are about acquire a small business whose main product is developed in Delphi and I am wanting to build up enough knowledge to be able to talk the talk with them. Books, websites etc all appreciated. Thanks. | TITLE:
Delphi resources for existing .NET developer
QUESTION:
Can anyone recommend some decent resources for a.NET developer who wishes to get a high level overview of the Delphi Programming Language? We are about acquire a small business whose main product is developed in Delphi and I am wanting to build up enough knowledge to be able to talk the talk with them. Books, websites etc all appreciated. Thanks.
ANSWER:
DelphiBasics gives a good overview of basic syntax, library functions etc. Essential Delphi is a free e-book by Marco Cantu that should give a good overview, also of the VCL Feel free to ask around here as well, or in the Delphi newsgroups, if you encounter specific issues:) [edit] @Martin: There's a free "Turbo" edition available at the Codegear/Embarcadero website. I guess it has some limitations, so you could also try downloading the trial version. | [
"delphi"
] | 5 | 4 | 498 | 5 | 0 | 2008-08-16T10:24:05.797000 | 2008-08-16T11:46:16.050000 |
13,106 | 13,111 | Should I support ASP.NET 1.1? | I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not? | Increasingly I think not. The kind of large rigid organisation currently still clinging to 1.1 (probably because they're only just upgraded to it) is also the kind that's highly unlikely to look at open source solutions. If I were starting a new ASP.Net project right now I'd stick with.Net 3.5 and probably the new MVC previews. | Should I support ASP.NET 1.1? I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not? | TITLE:
Should I support ASP.NET 1.1?
QUESTION:
I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not?
ANSWER:
Increasingly I think not. The kind of large rigid organisation currently still clinging to 1.1 (probably because they've only just upgraded to it) is also the kind that's highly unlikely to look at open source solutions. If I were starting a new ASP.Net project right now I'd stick with .Net 3.5 and probably the new MVC previews. | [
"asp.net",
".net-1.1"
] | 4 | 4 | 269 | 3 | 0 | 2008-08-16T12:34:58.060000 | 2008-08-16T12:43:39.493000 |
13,109 | 13,113 | PHP: Access Array Value on the Fly | In PHP, I often need to map a variable using an array... but I cannot seem to be able to do this in a one-liner. Cf. example:
// the following results in an error:
echo array('a','b','c')[$key];
// this works, using an unnecessary variable:
$variable = array('a','b','c'); echo $variable[$key];
This is a minor problem, but it keeps bugging me every once in a while... I don't like the fact that I use a variable for nothing ;) | I wouldn't bother about that extra variable, really. If you want, though, you could also remove it from memory after you've used it: $variable = array('a','b','c'); echo $variable[$key]; unset($variable); Or, you could write a small function: function indexonce(&$ar, $index) { return $ar[$index]; } and call this with: $something = indexonce(array('a', 'b', 'c'), 2); The array should be destroyed automatically now. | PHP: Access Array Value on the Fly In PHP, I often need to map a variable using an array... but I cannot seem to be able to do this in a one-liner. Cf. example:
// the following results in an error:
echo array('a','b','c')[$key];
// this works, using an unnecessary variable:
$variable = array('a','b','c'); echo $variable[$key];
This is a minor problem, but it keeps bugging me every once in a while... I don't like the fact that I use a variable for nothing ;) | TITLE:
PHP: Access Array Value on the Fly
QUESTION:
In PHP, I often need to map a variable using an array... but I cannot seem to be able to do this in a one-liner. Cf. example:
// the following results in an error:
echo array('a','b','c')[$key];
// this works, using an unnecessary variable:
$variable = array('a','b','c'); echo $variable[$key];
This is a minor problem, but it keeps bugging me every once in a while... I don't like the fact that I use a variable for nothing ;)
ANSWER:
I wouldn't bother about that extra variable, really. If you want, though, you could also remove it from memory after you've used it: $variable = array('a','b','c'); echo $variable[$key]; unset($variable); Or, you could write a small function: function indexonce(&$ar, $index) { return $ar[$index]; } and call this with: $something = indexonce(array('a', 'b', 'c'), 2); The array should be destroyed automatically now. | [
"php",
"arrays",
"coding-style"
] | 50 | 19 | 19,534 | 9 | 0 | 2008-08-16T12:42:54.017000 | 2008-08-16T12:55:29.953000 |
13,128 | 13,140 | How can I combine several C/C++ libraries into one? | I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single.a library. Is that possible? How about combining.lib libraries? | On Unix like systems, the ld and ar utilities can do this. Check out http://en.wikipedia.org/wiki/Ar_(Unix) or lookup the man pages on any Linux box or through Google, e.g., 'Unix man ar'. Please note that you might be better off linking to a shared (dynamic) library. This would add a dependency to your executable, but it will dramatically reduce its size, especially if you're writing a graphic application. | How can I combine several C/C++ libraries into one? I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single.a library. Is that possible? How about combining.lib libraries? | TITLE:
How can I combine several C/C++ libraries into one?
QUESTION:
I'm tired of adding ten link libraries into my project, or requiring eight of them to use my own. I'd like to take existing libraries like libpng.a, libz.a, libjpeg.a, and combine them into one single.a library. Is that possible? How about combining.lib libraries?
ANSWER:
On Unix-like systems, the ld and ar utilities can do this. Check out http://en.wikipedia.org/wiki/Ar_(Unix) or look up the man pages on any Linux box or through Google, e.g., 'Unix man ar'. Please note that you might be better off linking to a shared (dynamic) library. This would add a dependency to your executable, but it will dramatically reduce its size, especially if you're writing a graphics application. | [
"c++",
"c",
"archive"
] | 47 | 9 | 39,628 | 6 | 0 | 2008-08-16T13:46:01.987000 | 2008-08-16T14:15:23.037000 |
13,160 | 13,164 | Best practice for webservices | I've created a webservice and when I want to use its methods I instantiate it in a procedure, call the method, and finally I dispose of it; however, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event. The thing is that if I do it the first way I have to instantiate the webservice every time I need one of its methods, but in the other way I have to keep a webservice connected all the time when I use it in a form, for example. I would like to know which of these practices is better, or if there's a much better way to do it.
Strategy 1:
private void btnRead_Click(object sender, EventArgs e) { try {
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod(); ...
} catch (Exception ex) { ShowError(ex); } finally { if (svc != null) svc.Dispose(); } }
Strategy 2:
private myWebservice svc;
private void Main_Load(object sender, EventArgs e) {
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite; }
private void btnRead_Click(object sender, EventArgs e) { try {
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents(); svc.CallMethod();... } catch (Exception ex) { ShowError(ex); } }
private void Main_Closing(object sender, CancelEventArgs e) { svc.Dispose(); } | It depends on how often you are going to be calling the web service. If you're going to be calling it almost constantly, it would probably be better to use method #2. However, if it's not going to be getting called quite so often, you are better off using method #1, and only instantiating it when you need it. | Best practice for webservices I've created a webservice and when I want to use its methods I instantiate it in a procedure, call the method, and finally I dispose of it; however, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event. The thing is that if I do it the first way I have to instantiate the webservice every time I need one of its methods, but in the other way I have to keep a webservice connected all the time when I use it in a form, for example. I would like to know which of these practices is better, or if there's a much better way to do it.
Strategy 1:
private void btnRead_Click(object sender, EventArgs e) { try {
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod();... } catch (Exception ex) { ShowError(ex); } finally { if (svc != null) svc.Dispose(); } } Strategy 2: private myWebservice svc;
private void Main_Load(object sender, EventArgs e) { //Connect to webservice svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite; }
private void btnRead_Click(object sender, EventArgs e) { try { //Show clock this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents(); svc.CallMethod();... } catch (Exception ex) { ShowError(ex); } }
private void Main_Closing(object sender, CancelEventArgs e) { svc.Dispose(); } | TITLE:
Best practice for webservices
QUESTION:
I've created a webservice, and when I want to use its methods I instantiate it in a procedure, call the method, and finally dispose of it; however, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event. The thing is that if I do it the first way I have to instantiate the webservice every time I need one of its methods, but in the other way I have to keep a webservice connected all the time when I use it in a form, for example. I would like to know which of these practices is better, or if there's a much better way to do it. Strategy 1: private void btnRead_Click(object sender, EventArgs e) { try { //Show clock this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod();... } catch (Exception ex) { ShowError(ex); } finally { if (svc != null) svc.Dispose(); } } Strategy 2: private myWebservice svc;
private void Main_Load(object sender, EventArgs e) { //Connect to webservice svc = new ForPocketPC.ServiceForPocketPC(); svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password); svc.AllowAutoRedirect = false; svc.UserAgent = Settings.UserAgent; svc.PreAuthenticate = true; svc.Url = Settings.Url; svc.Timeout = System.Threading.Timeout.Infinite; }
private void btnRead_Click(object sender, EventArgs e) { try { //Show clock this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents(); svc.CallMethod();... } catch (Exception ex) { ShowError(ex); } }
private void Main_Closing(object sender, CancelEventArgs e) { svc.Dispose(); }
ANSWER:
It depends on how often you are going to be calling the web service. If you're going to be calling it almost constantly, it would probably be better to use method #2. However, if it's not going to be called quite so often, you are better off using method #1, and only instantiating it when you need it. | [
"web-services"
] | 2 | 2 | 773 | 2 | 0 | 2008-08-16T14:58:30.860000 | 2008-08-16T15:04:56.603000 |
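A hedged C# sketch of method #1 from the answer above, with the question's duplication concern addressed: the proxy setup lives in one factory method, and each call wraps the proxy in using so it is always disposed. ForPocketPC.ServiceForPocketPC and Settings are names taken from the question; everything else is illustrative.

```csharp
// Sketch only: per-call instantiation without repeating the configuration.
private ForPocketPC.ServiceForPocketPC CreateService()
{
    var svc = new ForPocketPC.ServiceForPocketPC();
    svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
    svc.AllowAutoRedirect = false;
    svc.UserAgent = Settings.UserAgent;
    svc.PreAuthenticate = true;
    svc.Url = Settings.Url;
    return svc;
}

private void btnRead_Click(object sender, EventArgs e)
{
    using (var svc = CreateService()) // disposed even if CallMethod throws
    {
        svc.CallMethod();
    }
}
```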
13,170 | 13,255 | A ThreadStateException occurs when trying to restart a thread | From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows: // Make sure the thread is done stopping while (this.mThread.ThreadState == ThreadState.Running) { Thread.Sleep(0); } // Respawn a thread if the current one is stopped or doesn't exist if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped) { this.mThread = new Thread(new ParameterizedThreadStart(Monitor)); } // Start the thread if (check) { this.mThread.Start(60000); } else { this.mThread.Start(0); } So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring? | The problem is that you have code that first checks if it should create a new thread object, and another piece of code that determines whether to start the thread object. Due to race conditions and similar things, your code might end up trying to call .Start on an existing thread object. Considering you don't post the details behind the check variable, it's impossible to know what might trigger this behavior. You should reorganize your code so that .Start is guaranteed to only be called on new objects. In short, you should put the Start method into the same if-statement as the one that creates a new thread object. Personally, I would try to reorganize the entire code so that I didn't need to create another thread, but wrap the code inside the thread object inside a loop so that the thread just keeps on going. | A ThreadStateException occurs when trying to restart a thread From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows: // Make sure the thread is done stopping while (this.mThread.ThreadState == ThreadState.Running) { Thread.Sleep(0); } // Respawn a thread if the current one is stopped or doesn't exist if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped) { this.mThread = new Thread(new ParameterizedThreadStart(Monitor)); } // Start the thread if (check) { this.mThread.Start(60000); } else { this.mThread.Start(0); } So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring? | TITLE:
A ThreadStateException occurs when trying to restart a thread
QUESTION:
From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows: // Make sure the thread is done stopping while (this.mThread.ThreadState == ThreadState.Running) { Thread.Sleep(0); } // Respawn a thread if the current one is stopped or doesn't exist if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped) { this.mThread = new Thread(new ParameterizedThreadStart(Monitor)); } // Start the thread if (check) { this.mThread.Start(60000); } else { this.mThread.Start(0); } So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring?
ANSWER:
The problem is that you have code that first checks if it should create a new thread object, and another piece of code that determines whether to start the thread object. Due to race conditions and similar things, your code might end up trying to call .Start on an existing thread object. Considering you don't post the details behind the check variable, it's impossible to know what might trigger this behavior. You should reorganize your code so that .Start is guaranteed to only be called on new objects. In short, you should put the Start method into the same if-statement as the one that creates a new thread object. Personally, I would try to reorganize the entire code so that I didn't need to create another thread, but wrap the code inside the thread object inside a loop so that the thread just keeps on going. | [
"c#",
".net",
"multithreading",
"exception"
] | 6 | 3 | 8,210 | 3 | 0 | 2008-08-16T15:24:19.153000 | 2008-08-16T17:56:47.147000 |
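A hedged C# sketch of the answer above: since a .NET Thread object is single-use (a finished thread cannot be restarted), creation and Start are kept in the same block. Monitor, check, and mThread are names from the question; the method name is illustrative.

```csharp
// Sketch only: always pair new Thread(...) with .Start(...) so Start can
// never run against an already-used Thread object.
private void RespawnMonitor(bool check)
{
    if (this.mThread != null && this.mThread.IsAlive)
        return; // still running; nothing to respawn

    this.mThread = new System.Threading.Thread(
        new System.Threading.ParameterizedThreadStart(Monitor));
    this.mThread.Start(check ? 60000 : 0); // created and started in one place
}
```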
13,200 | 13,318 | How do I integrate my continuous integration system with my bug tracking system? | I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better. First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code? Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps: Title: "#{last committer} broke the build!" Body: "#{ error traces }" I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking? | All the CI setups I've worked with send an email (to a list), but if you did want—especially if your team uses FogBugz much as a todo system—you could just open a case in FogBugz 6. It has an API that lets you open cases. For that matter, you could just configure it to send the email to your FogBugz' email submission address, but the API might let you do more, like assign the case to the last committer. Brian's answer suggests to me that if your CI finds a failure in a commit that had a case number, you might even just reopen the existing case. Like codifying a case field for every little thing, though, there's a point where the CI automation could be "too smart," get it wrong, and just be annoying. Opening a new case could be plenty. And thanks: this makes me wonder if I should try integrating our Chimps setup with our FogBugz! | How do I integrate my continuous integration system with my bug tracking system? I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better. First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code? Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps: Title: "#{last committer} broke the build!" Body: "#{ error traces }" I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking? | TITLE:
How do I integrate my continuous integration system with my bug tracking system?
QUESTION:
I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better. First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code? Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps: Title: "#{last committer} broke the build!" Body: "#{ error traces }" I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking?
ANSWER:
All the CI setups I've worked with send an email (to a list), but if you did want—especially if your team uses FogBugz much as a todo system—you could just open a case in FogBugz 6. It has an API that lets you open cases. For that matter, you could just configure it to send the email to your FogBugz' email submission address, but the API might let you do more, like assign the case to the last committer. Brian's answer suggests to me that if your CI finds a failure in a commit that had a case number, you might even just reopen the existing case. Like codifying a case field for every little thing, though, there's a point where the CI automation could be "too smart," get it wrong, and just be annoying. Opening a new case could be plenty. And thanks: this makes me wonder if I should try integrating our Chimps setup with our FogBugz!
"continuous-integration",
"bug-tracking",
"fogbugz",
"cruisecontrol.rb"
] | 6 | 3 | 2,127 | 3 | 0 | 2008-08-16T16:09:16.240000 | 2008-08-16T19:56:45.093000 |
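A hedged Ruby sketch of opening a case from a CI failure hook, in the spirit of the answer above. The endpoint, command, and field names follow the general shape of the FogBugz HTTP API but are treated as placeholders here; verify them against your FogBugz version's documentation.

```ruby
# Sketch only: POST a new case when the build breaks.
require 'net/http'
require 'uri'

def report_broken_build(committer, error_trace)
  uri = URI.parse('http://example.fogbugz.com/api.asp') # hypothetical URL
  Net::HTTP.post_form(uri,
    'cmd'    => 'new',                                  # hypothetical command name
    'sTitle' => "#{committer} broke the build!",
    'sEvent' => error_trace)
end
```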
13,204 | 13,220 | Why Doesn't My Cron Job Work Properly? | I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP. The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly. This is the cron job: PATH=/usr/bin 10 3 * * * ruby /home/deploy/bin/datadump.rb This is datadump.rb: #!/usr/bin/ruby require 'yaml' require 'logger' require 'rubygems' require 'net/ssh' require 'net/sftp'
APP = '/home/deploy/apps/myapp/current' LOGFILE = '/home/deploy/log/data.log' TIMESTAMP = '%Y%m%d-%H%M' TABLES = 'table1 table2'
log = Logger.new(LOGFILE, 5, 10 * 1024) dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz" ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml')) config = YAML::load(open(APP + '/config/database.yml'))['production'] cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"
log.info 'Getting ready to create a backup' `#{cmd}`
# Strongspace log.info 'Backup created, starting the transfer to Strongspace' Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh| ssh.sftp.connect do |sftp| sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle| sftp.write(handle, open("#{dump}").read) end end end log.info 'Finished transferring backup to Strongspace'
log.info 'Removing local file' cmd = "rm -f #{dump}" log.debug "Executing: #{cmd}" `#{cmd}` log.info 'Local file removed' I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment. What could be causing the cron job to fail? | Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will be either the directory specified in the HOME environment variable or the one in the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permissions for the directory in which it is executing, then you could specify an absolute path for the dump file to fix the problem. | Why Doesn't My Cron Job Work Properly? I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP. The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly. This is the cron job: PATH=/usr/bin 10 3 * * * ruby /home/deploy/bin/datadump.rb This is datadump.rb: #!/usr/bin/ruby require 'yaml' require 'logger' require 'rubygems' require 'net/ssh' require 'net/sftp'
APP = '/home/deploy/apps/myapp/current' LOGFILE = '/home/deploy/log/data.log' TIMESTAMP = '%Y%m%d-%H%M' TABLES = 'table1 table2'
log = Logger.new(LOGFILE, 5, 10 * 1024) dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz" ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml')) config = YAML::load(open(APP + '/config/database.yml'))['production'] cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"
log.info 'Getting ready to create a backup' `#{cmd}`
# Strongspace log.info 'Backup created, starting the transfer to Strongspace' Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh| ssh.sftp.connect do |sftp| sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle| sftp.write(handle, open("#{dump}").read) end end end log.info 'Finished transferring backup to Strongspace'
log.info 'Removing local file' cmd = "rm -f #{dump}" log.debug "Executing: #{cmd}" `#{cmd}` log.info 'Local file removed' I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment. What could be causing the cron job to fail? | TITLE:
Why Doesn't My Cron Job Work Properly?
QUESTION:
I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP. The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly. This is the cron job: PATH=/usr/bin 10 3 * * * ruby /home/deploy/bin/datadump.rb This is datadump.rb: #!/usr/bin/ruby require 'yaml' require 'logger' require 'rubygems' require 'net/ssh' require 'net/sftp'
APP = '/home/deploy/apps/myapp/current' LOGFILE = '/home/deploy/log/data.log' TIMESTAMP = '%Y%m%d-%H%M' TABLES = 'table1 table2'
log = Logger.new(LOGFILE, 5, 10 * 1024) dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz" ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml')) config = YAML::load(open(APP + '/config/database.yml'))['production'] cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"
log.info 'Getting ready to create a backup' `#{cmd}`
# Strongspace log.info 'Backup created, starting the transfer to Strongspace' Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh| ssh.sftp.connect do |sftp| sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle| sftp.write(handle, open("#{dump}").read) end end end log.info 'Finished transferring backup to Strongspace'
log.info 'Removing local file' cmd = "rm -f #{dump}" log.debug "Executing: #{cmd}" `#{cmd}` log.info 'Local file removed' I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment. What could be causing the cron job to fail?
ANSWER:
Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will be either the directory specified in the HOME environment variable or the one in the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permissions for the directory in which it is executing, then you could specify an absolute path for the dump file to fix the problem. | [
"ruby-on-rails",
"ruby",
"linux",
"ubuntu",
"cron"
] | 7 | 2 | 8,685 | 4 | 0 | 2008-08-16T16:15:18.363000 | 2008-08-16T16:34:37.357000 |
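A hedged Ruby sketch of the answer's fix: build the dump path from an absolute directory so cron's working directory (and its permissions) no longer matter. The directory shown is illustrative; TIMESTAMP is the constant from the question's script.

```ruby
# Sketch only: anchor the dump file to an absolute, writable directory.
DUMP_DIR = '/home/deploy/tmp' # illustrative; any writable absolute path works
dump = File.join(DUMP_DIR, "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz")
```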
13,213 | 13,218 | How to avoid conflict when not using ID in URLs | I often see (rewritten) URLs without an ID in them, like on some WordPress installations. What is the best way of achieving this? Example: site.com/product/some-product-name/ Maybe to keep an array of page names and IDs in cache, to avoid a DB query on every page request? How do I avoid conflicts, and what are the other issues with using URLs without IDs? | Using an ID presents the same conundrum, really--you're just checking for a different value in your database. The "some-product-name" part of your URL above is also something unique. Some people call them slugs (Wordpress, also permalinks). So instead of querying the database for a row that has the particular ID, you're querying the database for a row that has a particular slug. You don't need to know the ID to retrieve the record. | How to avoid conflict when not using ID in URLs I often see (rewritten) URLs without an ID in them, like on some WordPress installations. What is the best way of achieving this? Example: site.com/product/some-product-name/ Maybe to keep an array of page names and IDs in cache, to avoid a DB query on every page request? How do I avoid conflicts, and what are the other issues with using URLs without IDs? | TITLE:
How to avoid conflict when not using ID in URLs
QUESTION:
I often see (rewritten) URLs without an ID in them, like on some WordPress installations. What is the best way of achieving this? Example: site.com/product/some-product-name/ Maybe to keep an array of page names and IDs in cache, to avoid a DB query on every page request? How do I avoid conflicts, and what are the other issues with using URLs without IDs?
ANSWER:
Using an ID presents the same conundrum, really--you're just checking for a different value in your database. The "some-product-name" part of your URL above is also something unique. Some people call them slugs (Wordpress, also permalinks). So instead of querying the database for a row that has the particular ID, you're querying the database for a row that has a particular slug. You don't need to know the ID to retrieve the record. | [
"url",
"url-rewriting"
] | 5 | 3 | 2,599 | 4 | 0 | 2008-08-16T16:24:58.827000 | 2008-08-16T16:32:16.787000 |
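A hedged SQL sketch of the slug approach from the answer above (table and column names are illustrative): a UNIQUE constraint is what actually prevents two records from claiming the same URL, and the lookup keys on the slug instead of the ID.

```sql
-- Sketch only: enforce slug uniqueness at the database level.
CREATE TABLE products (
    id   INT PRIMARY KEY,
    name VARCHAR(200) NOT NULL,
    slug VARCHAR(200) NOT NULL UNIQUE  -- e.g. 'some-product-name'
);

-- Look a record up by slug instead of by ID:
SELECT * FROM products WHERE slug = 'some-product-name';
```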
13,217 | 16,648 | How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight | I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest, but I want to update the progress control I have written whilst I post up the data stream. How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream, but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock. This is a code snippet of pretty much what I am doing: class RequestState { public HttpWebRequest request; // holds the request public FileDialogFileInfo file; // store our file stream data
public RequestState( HttpWebRequest request, FileDialogFileInfo file ) { this.request = request; this.file = file; } }
private void UploadFile( FileDialogFileInfo file ) { UriBuilder ub = new UriBuilder( app.receiverURL ); ub.Query = string.Format( "filename={0}", file.Name );
// Open the selected file to read. HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri ); request.Method = "POST";
RequestState state = new RequestState( request, file ); request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state ); }
private void OnUploadReadCallback( IAsyncResult asynchronousResult ) { RequestState state = (RequestState)asynchronousResult.AsyncState; HttpWebRequest request = (HttpWebRequest)state.request;
Stream postStream = request.EndGetRequestStream( asynchronousResult ); PushData( state.file, postStream ); postStream.Close();
state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request ); }
private void PushData( FileDialogFileInfo file, Stream output ) { byte[] buffer = new byte[ 4096 ]; int bytesRead = 0;
Stream input = file.OpenRead(); while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) )!= 0 ) { output.Write( buffer, 0, bytesRead ); bytesReadTotal += bytesRead;
App app = App.Current as App; int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 );
// enabling the following locks up my UI and browser Dispatcher.BeginInvoke( () => { this.ProgressBarWithPercentage.Percentage = totalPercentage; } ); } } | I was going to say that I didn't think Silverlight 2's HttpWebRequest supported streaming, because the request data gets buffered into memory entirely. It had been a while since I last looked at it, though, so I went back to see if Beta 2 supported it. Well, it turns out it does. I am glad I went back and read before stating that. You can enable it by setting AllowReadStreamBuffering to false. Did you set this property on your HttpWebRequest? That could be causing your block. MSDN Reference File upload component for Silverlight and ASP.NET Edit: found another reference for you. You may want to follow this approach by breaking the file into chunks. This was written last March, so I am not sure if it will work in Beta 2 or not. | How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest, but I want to update the progress control I have written whilst I post up the data stream. How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream, but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock. This is a code snippet of pretty much what I am doing: class RequestState { public HttpWebRequest request; // holds the request public FileDialogFileInfo file; // store our file stream data
public RequestState( HttpWebRequest request, FileDialogFileInfo file ) { this.request = request; this.file = file; } }
private void UploadFile( FileDialogFileInfo file ) { UriBuilder ub = new UriBuilder( app.receiverURL ); ub.Query = string.Format( "filename={0}", file.Name );
// Open the selected file to read. HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri ); request.Method = "POST";
RequestState state = new RequestState( request, file ); request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state ); }
private void OnUploadReadCallback( IAsyncResult asynchronousResult ) { RequestState state = (RequestState)asynchronousResult.AsyncState; HttpWebRequest request = (HttpWebRequest)state.request;
Stream postStream = request.EndGetRequestStream( asynchronousResult ); PushData( state.file, postStream ); postStream.Close();
state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request ); }
private void PushData( FileDialogFileInfo file, Stream output ) { byte[] buffer = new byte[ 4096 ]; int bytesRead = 0;
Stream input = file.OpenRead(); while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) )!= 0 ) { output.Write( buffer, 0, bytesRead ); bytesReadTotal += bytesRead;
App app = App.Current as App; int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 );
// enabling the following locks up my UI and browser Dispatcher.BeginInvoke( () => { this.ProgressBarWithPercentage.Percentage = totalPercentage; } ); } } | TITLE:
How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight
QUESTION:
I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest, but I want to update the progress control I have written whilst I post up the data stream. How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream, but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock. This is a code snippet of pretty much what I am doing: class RequestState { public HttpWebRequest request; // holds the request public FileDialogFileInfo file; // store our file stream data
public RequestState( HttpWebRequest request, FileDialogFileInfo file ) { this.request = request; this.file = file; } }
private void UploadFile( FileDialogFileInfo file ) { UriBuilder ub = new UriBuilder( app.receiverURL ); ub.Query = string.Format( "filename={0}", file.Name );
// Open the selected file to read. HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri ); request.Method = "POST";
RequestState state = new RequestState( request, file ); request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state ); }
private void OnUploadReadCallback( IAsyncResult asynchronousResult ) { RequestState state = (RequestState)asynchronousResult.AsyncState; HttpWebRequest request = (HttpWebRequest)state.request;
Stream postStream = request.EndGetRequestStream( asynchronousResult ); PushData( state.file, postStream ); postStream.Close();
state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request ); }
private void PushData( FileDialogFileInfo file, Stream output ) { byte[] buffer = new byte[ 4096 ]; int bytesRead = 0;
Stream input = file.OpenRead(); while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) )!= 0 ) { output.Write( buffer, 0, bytesRead ); bytesReadTotal += bytesRead;
App app = App.Current as App; int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 );
// enabling the following locks up my UI and browser Dispatcher.BeginInvoke( () => { this.ProgressBarWithPercentage.Percentage = totalPercentage; } ); } }
ANSWER:
I was going to say that I didn't think Silverlight 2's HttpWebRequest supported streaming, because the request data gets buffered into memory entirely. It had been a while since I last looked at it, though, so I went back to see if Beta 2 supported it. Well, it turns out it does. I am glad I went back and read before stating that. You can enable it by setting AllowReadStreamBuffering to false. Did you set this property on your HttpWebRequest? That could be causing your block. MSDN Reference File upload component for Silverlight and ASP.NET Edit: found another reference for you. You may want to follow this approach by breaking the file into chunks. This was written last March, so I am not sure if it will work in Beta 2 or not. | [
"c#",
"silverlight"
] | 5 | 1 | 4,508 | 2 | 0 | 2008-08-16T16:32:06.257000 | 2008-08-19T18:09:03.233000 |
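A hedged C# sketch combining the answer's suggestion with lighter UI traffic. AllowReadStreamBuffering is the property the answer names (verify it against your Silverlight version), and progress is only marshalled to the UI thread when the displayed percentage would actually change.

```csharp
// Sketch only: disable buffering (per the answer) when creating the request.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(ub.Uri);
request.Method = "POST";
request.AllowReadStreamBuffering = false; // property cited in the answer above

// Inside PushData's loop: throttle the cross-thread calls.
int lastPercentage = -1;
// ... per read/write iteration:
if (totalPercentage != lastPercentage)
{
    lastPercentage = totalPercentage;
    Dispatcher.BeginInvoke(() =>
        this.ProgressBarWithPercentage.Percentage = totalPercentage);
}
```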
13,224 | 13,228 | Mobile device is detected as non-mobile device | I've included a mobile web form in my asp.net project. I thought that it could/should be seen just by my mobile users, but I realize that it can also be seen from any browser. I don't see a problem there, because I can differentiate the access using HttpBrowserCapabilities.IsMobileDevice=true and transfer to the appropriate aspx page. But it turns out that when I access the web form from my mobile device it is identified as IsMobileDevice = false and sends me to another page. How could that be possible? The mobile device runs Pocket PC 2003. | Some devices are not recognized because the UserAgent has been messed with or a new browser is being used, such as Opera Mobile 9.5. To fix this you need to create a Browser (*.browser) file specifically for defining this. I had to do it for the new Mozilla-based UserAgent that is being sent from Google. | Mobile device is detected as non-mobile device I've included a mobile web form in my asp.net project. I thought that it could/should be seen just by my mobile users, but I realize that it can also be seen from any browser. I don't see a problem there, because I can differentiate the access using HttpBrowserCapabilities.IsMobileDevice=true and transfer to the appropriate aspx page. But it turns out that when I access the web form from my mobile device it is identified as IsMobileDevice = false and sends me to another page. How could that be possible? The mobile device runs Pocket PC 2003. | TITLE:
Mobile device is detected as non-mobile device
QUESTION:
I've included a mobile web form in my asp.net project. I thought that it could/should be seen just by my mobile users, but I realize that it can also be seen from any browser. I don't see a problem there, because I can differentiate the access using HttpBrowserCapabilities.IsMobileDevice=true and transfer to the appropriate aspx page. But it turns out that when I access the web form from my mobile device it is identified as IsMobileDevice = false and sends me to another page. How could that be possible? The mobile device runs Pocket PC 2003.
ANSWER:
Some devices are not recognized because the UserAgent has been messed with or a new browser is being used, such as Opera Mobile 9.5. To fix this you need to create a Browser (*.browser) file specifically for defining this. I had to do it for the new Mozilla-based UserAgent that is being sent from Google. | [
"asp.net",
"mobile",
"mobile-website"
] | 6 | 2 | 5,600 | 3 | 0 | 2008-08-16T16:42:27.827000 | 2008-08-16T16:46:28.270000 |
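A hedged sketch of the *.browser file the answer describes, placed under the App_Browsers folder. The id and userAgent pattern are illustrative; isMobileDevice is the capability key that HttpBrowserCapabilities.IsMobileDevice reads.

```xml
<!-- Sketch only: match a user agent and mark it as a mobile device. -->
<browsers>
  <browser id="MyPocketPC" parentID="Mozilla">
    <identification>
      <userAgent match="Windows CE" /> <!-- illustrative pattern -->
    </identification>
    <capabilities>
      <capability name="isMobileDevice" value="true" />
    </capabilities>
  </browser>
</browsers>
```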
13,225 | 13,381 | How can I refactor HTML markup out of my property files? | I've recently inherited an internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like: and the properties files look like: messages.properties alert=Please update your address and contact information. with the appropriate translations in N other languages (messages_fr.properties, etc). Problems: DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone. Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc). Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the... markup. Easy for English but less so for unfamiliar languages. I've considered adding placeholders in the messages file, like: alert=Please update your {0} and {1}. but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it. What can I do to improve this? | Avoid creating links within long blocks of text. Prefer shorter text that can act as a logically complete and independent link. Generally, it will lead to fewer problems. Sometimes you have to compromise your UI design to accommodate localization; sometimes you need to compromise your localization process to accommodate the UI. Any time a developer manually manipulates post-translation strings, it is a source of potentially expensive bugs. Cutting/pasting or string editing can result in character corruption, misplaced strings, etc. A translation defect needs the participation of outside parties to fix, which involves cost and takes time. Thinking on it, something like this might be less ugly: Please update your address and contact information. update address update contact information... but I'm no UI designer. | How can I refactor HTML markup out of my property files? I've recently inherited an internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like: and the properties files look like: messages.properties alert=Please update your address and contact information. with the appropriate translations in N other languages (messages_fr.properties, etc). Problems: DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone. Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc). Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the... markup. Easy for English but less so for unfamiliar languages. I've considered adding placeholders in the messages file, like: alert=Please update your {0} and {1}. but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it. What can I do to improve this? | TITLE:
How can I refactor HTML markup out of my property files?
QUESTION:
I've recently inherited an internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like: and the properties files look like: messages.properties alert=Please update your address and contact information. with the appropriate translations in N other languages (messages_fr.properties, etc). Problems: DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone. Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc). Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the... markup. Easy for English but less so for unfamiliar languages. I've considered adding placeholders in the messages file, like: alert=Please update your {0} and {1}. but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it. What can I do to improve this?
ANSWER:
Avoid creating links within long blocks of text. Prefer shorter text that can act as a logically complete and independent link. Generally, it will lead to fewer problems. Sometimes you have to compromise your UI design to accommodate localization; sometimes you need to compromise your localization process to accommodate the UI. Any time a developer manually manipulates post-translation strings, it is a source of potentially expensive bugs. Cutting/pasting or string editing can result in character corruption, misplaced strings, etc. A translation defect needs the participation of outside parties to fix, which involves cost and takes time. Thinking on it, something like this might be less ugly: Please update your address and contact information. update address update contact information... but I'm no UI designer. | [
"java",
"jsp",
"internationalization",
"struts"
] | 9 | 2 | 2,781 | 4 | 0 | 2008-08-16T16:42:50.070000 | 2008-08-16T21:59:07.157000 |
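A hedged Struts 1.1 sketch of the answer's advice: each link's text becomes its own, logically complete message key, so translators never see markup and the action URL lives only in the JSP. The key and action names are illustrative.

```jsp
<%-- Sketch only. messages.properties would contain:
       alert.text=Please update your address and contact information.
       alert.updateAddress=update address
       alert.updateContact=update contact information
--%>
<bean:message key="alert.text"/>
<html:link action="/editAddress"><bean:message key="alert.updateAddress"/></html:link>
<html:link action="/editContact"><bean:message key="alert.updateContact"/></html:link>
```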
13,279 | 47,187 | Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese) | We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting. So far we have only supported English, American (similar but mis-spelt;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones. We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy). What conventions do users of these top-down languages expect online? What effect does their dual-input (phonetic typing + conversion) have on web controls? With RTL languages like Arabic do users expect the entire interface to be mirrored? For instance should things like OK/Cancel buttons be swapped and on the left? | Read Globalization Step-by-Step by Microsoft. I can answer the specifics on CJKV, but you probably want a book on this topic. I haven't read it but CJKV Information Processing is from O'Reilly (2nd ed due Dec, 2008). I understand that these use phonetic input converted to written characters. How does that work on the web? The input is done by a class of software called an IME (Input Method Editor) on Windows, Mac, and Linux (e.g. SCIM). When an IME is turned on, the input from the keyboard first goes to the IME, and the user gets to pick the correct kanji/hiragana combo. When the user commits by hitting return key, the IME types in the kanji/hiragana into the web browser using the current encoding. Encoding situation was a big mess, but if you are writing a web app, go with an encoding of Unicode. I suggest UTF-8. Do the same events fire while inputs and textareas are being edited? A Unicode savvy web browser and OS combo handles multiple languages. For example, one can use English normal version of Firefox to browse and post to a Japanese website. From the browsers point of view, it's just an array of "bla bla bla" in Unicode. In other words, if the event fires up in English, the same event should fire up in CJKV if you use a Unicode variant. What conventions do users of these top-down languages expect online? CJKV readers expect left-to-right online. Math and science textbooks are written from left-to-right. Most word processors, including localized version of Word, write left-to-right. What effect does their dual-input (phonetic typing + conversion) have on web controls? For the most part you should not have to worry about it, unless you are trapping keyboard events. For example, I hate using Japanese keyboard with bunch of extra keyboard. So, when I have to assign IME on/off command to some key on US keyboard. I personally use right-Alt. Also, spacebar and enter key is used during conversion, but not sure if these events are passed to browser. | Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese) We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting. 
So far we have only supported English, American (similar but mis-spelt;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones. We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy). What conventions do users of these top-down languages expect online? What effect does their dual-input (phonetic typing + conversion) have on web controls? With RTL languages like Arabic do users expect the entire interface to be mirrored? For instance should things like OK/Cancel buttons be swapped and on the left? | TITLE:
Any good resources or advice for working with languages with different orientations? (such as Japanese or Chinese)
QUESTION:
We have an enterprise web application where every bit of text in the system is localised to the user's browser's culture setting. So far we have only supported English, American (similar but mis-spelt;-) and French (for the Canadian Gov't - app in English or French depending on user preference). During development we also had some European languages in mind like Dutch and German that tend to concatenate words into very long ones. We're currently investigating support for eastern languages: Chinese, Japanese, and so on. I understand that these use phonetic input converted to written characters. How does that work on the web? Do the same events fire while inputs and textareas are being edited (we're quite Ajax heavy)? What conventions do users of these top-down languages expect online? What effect does their dual-input (phonetic typing + conversion) have on web controls? With RTL languages like Arabic, do users expect the entire interface to be mirrored? For instance, should things like OK/Cancel buttons be swapped and on the left?
ANSWER:
Read Globalization Step-by-Step by Microsoft. I can answer the specifics on CJKV, but you probably want a book on this topic. I haven't read it, but CJKV Information Processing is from O'Reilly (2nd ed due Dec, 2008). I understand that these use phonetic input converted to written characters. How does that work on the web? The input is done by a class of software called an IME (Input Method Editor) on Windows, Mac, and Linux (e.g. SCIM). When an IME is turned on, the input from the keyboard first goes to the IME, and the user gets to pick the correct kanji/hiragana combo. When the user commits by hitting the return key, the IME types the kanji/hiragana into the web browser using the current encoding. The encoding situation was a big mess, but if you are writing a web app, go with an encoding of Unicode. I suggest UTF-8. Do the same events fire while inputs and textareas are being edited? A Unicode-savvy web browser and OS combo handles multiple languages. For example, one can use a normal English version of Firefox to browse and post to a Japanese website. From the browser's point of view, it's just an array of "bla bla bla" in Unicode. In other words, if the event fires up in English, the same event should fire up in CJKV if you use a Unicode variant. What conventions do users of these top-down languages expect online? CJKV readers expect left-to-right online. Math and science textbooks are written from left-to-right. Most word processors, including localized versions of Word, write left-to-right. What effect does their dual-input (phonetic typing + conversion) have on web controls? For the most part you should not have to worry about it, unless you are trapping keyboard events. For example, I hate using a Japanese keyboard with a bunch of extra keys, so I have to assign the IME on/off command to some key on a US keyboard; I personally use right-Alt. Also, the spacebar and enter key are used during conversion, but I'm not sure if these events are passed to the browser. | [
"internationalization",
"multilingual"
] | 9 | 6 | 527 | 3 | 0 | 2008-08-16T18:57:53.887000 | 2008-09-06T02:04:37.263000 |
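A hedged sketch of the answer's UTF-8 recommendation: declare the encoding consistently in the HTTP response header, the page, and any forms, so IME-composed CJKV text arrives intact (the form action is illustrative).

```html
<!-- Sketch only. The HTTP response header should also say:
     Content-Type: text/html; charset=UTF-8 -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<form action="/save" method="post" accept-charset="UTF-8">
  <input type="text" name="comment" />
</form>
```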
13,293 | 13,369 | How can I determine CodeIgniter speed? | I am thinking of using a PHP framework called CodeIgniter. One of the things I am interested in is its speed. I have, however, no way to find out how fast it is, and would rather not simply take the word of their website for it. Does anybody know how I can determine its speed myself, or can someone tell me of a site that can? | CodeIgniter also has some built-in benchmarking tools: http://codeigniter.com/user_guide/general/profiling.html | How can I determine CodeIgniter speed? I am thinking of using a PHP framework called CodeIgniter. One of the things I am interested in is its speed. I have, however, no way to find out how fast it is, and would rather not simply take the word of their website for it. Does anybody know how I can determine its speed myself, or can someone tell me of a site that can? | TITLE:
How can I determine CodeIgniter speed?
QUESTION:
I am thinking of using a PHP framework called CodeIgniter. One of the things I am interested in is its speed. I have, however, no way to find out how fast it is, and would rather not simply take the word of their website for it. Does anybody know how I can determine its speed myself, or can someone tell me of a site that can?
ANSWER:
CodeIgniter also has some built-in benchmarking tools: http://codeigniter.com/user_guide/general/profiling.html | [
"php",
"performance",
"codeigniter",
"benchmarking"
] | 19 | 17 | 7,145 | 7 | 0 | 2008-08-16T19:27:20.473000 | 2008-08-16T21:19:51.167000 |
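A hedged PHP sketch of the profiling tools the answer links to. The profiler switch and Benchmark class methods follow the CodeIgniter user guide, but verify them against your CI version; the controller and marker names are illustrative.

```php
<?php
// Sketch only: a CodeIgniter 1.x-style controller using built-in benchmarking.
class Welcome extends Controller {

    function index()
    {
        $this->output->enable_profiler(TRUE); // appends a timing report to the page

        $this->benchmark->mark('work_start');
        // ... the code you want to measure ...
        $this->benchmark->mark('work_end');

        echo $this->benchmark->elapsed_time('work_start', 'work_end');
    }
}
```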
13,299 | 136,925 | Image manipulation in asp.net/c# without System.Drawing/GDI+ | Is there any alternative image manipulation library for .net? I would prefer something that is managed and open source. I ask this for two reasons: I have encountered hard-to-debug GDI+ errors with System.Drawing in the past, and I have read that using System.Drawing in asp.net web applications is not 100% supported. Thanks! edit: clarification, I know that System.Drawing can work in asp.net web apps - I have used it in the past. I really just wonder if there are any managed image manipulation libraries for .net :) | I don't know of any fully-managed 2D drawing libraries that are either free or open-source (there appear to be a few commercially available, but OSS is the way to go). However, you might look into the Mono bindings to Cairo. Cairo is a platform-independent 2D drawing API. You can find more information about it at the Cairo homepage. The Cairo Wikipedia page also has some good info. Cairo is also used fairly widely in the Open Source world, which to me says something about its robustness. Mozilla, Webkit, and Mono all use it, among others. Ironically, Mono actually uses it to back their System.Drawing implementation... go figure. There might also be a way to use Mono's System.Drawing implementation as a drop-in replacement for the Microsoft implementation, though I'm not sure how or if that would even work. I would probably start by replacing the System.Drawing.dll reference with Mono's version, and then try to deal with any errors. | Image manipulation in asp.net/c# without System.Drawing/GDI+ Is there any alternative image manipulation library for .net? I would prefer something that is managed and open source. I ask this for two reasons: I have encountered hard-to-debug GDI+ errors with System.Drawing in the past, and I have read that using System.Drawing in asp.net web applications is not 100% supported. Thanks! edit: clarification, I know that System.Drawing can work in asp.net web apps - I have used it in the past. I really just wonder if there are any managed image manipulation libraries for .net :) | TITLE:
Image manipulation in asp.net/c# without System.Drawing/GDI+
QUESTION:
Is there any alternative image manipulation library for .net? I would prefer something that is managed and open source. I ask this for two reasons: I have encountered hard-to-debug GDI+ errors with System.Drawing in the past, and I have read that using System.Drawing in asp.net web applications is not 100% supported. Thanks! edit: clarification, I know that System.Drawing can work in asp.net web apps - I have used it in the past. I really just wonder if there are any managed image manipulation libraries for .net :)
ANSWER:
I don't know of any fully-managed 2D drawing libraries that are either free or open-source (there appear to be a few commercially available, but OSS is the way to go). However, you might look into the Mono bindings to Cairo. Cairo is a platform-independent 2D drawing API. You can find more information about it at the Cairo homepage. The Cairo Wikipedia page also has some good info. Cairo is also used fairly widely in the Open Source world, which to me says something about its robustness. Mozilla, Webkit, and Mono all use it, among others. Ironically, Mono actually uses it to back their System.Drawing implementation... go figure. There might also be a way to use Mono's System.Drawing implementation as a drop-in replacement for the Microsoft implementation, though I'm not sure how or if that would even work. I would probably start by replacing the System.Drawing.dll reference with Mono's version, and then try to deal with any errors. | [
"asp.net",
"system.drawing"
] | 12 | 2 | 6,856 | 4 | 0 | 2008-08-16T19:42:42.937000 | 2008-09-25T23:41:34.683000 |
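A hedged C# sketch of the Mono.Cairo route the answer suggests. The type and method names follow the Mono Cairo binding's documented API, but treat them as assumptions and verify against the binding version you deploy.

```csharp
// Sketch only: draw onto an in-memory surface and save it as a PNG.
using Cairo;

class CairoSketch
{
    static void Main()
    {
        using (var surface = new ImageSurface(Format.Argb32, 200, 100))
        using (var ctx = new Context(surface))
        {
            ctx.SetSourceRGB(1, 1, 1); // white background
            ctx.Paint();
            ctx.SetSourceRGB(0, 0, 0); // black ink
            ctx.MoveTo(10, 50);
            ctx.ShowText("Hello, Cairo");
            surface.WriteToPng("out.png");
        }
    }
}
```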
13,343 | 13,370 | How much of your work day is spent coding? | I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software. When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours? Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are? What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity? What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day? Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it? | I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity. We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement. I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code. Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impugnity. On occasion I do work from home and with kids, it's horrible. I'm more productive at work. My productivity is good, but could be better if the interruption factor and cost of mental context switching was removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others could block the interruptions by being dedicated to support. And then swapping when the project is over. Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following: Better testing tools/methodologies to speed up unit testing Better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load | How much of your work day is spent coding? I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple years of experience developing software. 
When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours? Do you find you spend more or less hours than your teammates coding? Do you feel like you're getting more or less work done than they are? What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity? What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day? Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it? | TITLE:
How much of your work day is spent coding?
QUESTION:
I've been thinking about software estimation lately, and I have a bunch of questions around time spent coding. I'm curious to hear from people who have had at least a couple of years of experience developing software. When you have to estimate the amount of time you'll spend working on something, how many hours of the day do you spend coding? What occupies the other non-coding hours? Do you find you spend more or fewer hours than your teammates coding? Do you feel like you're getting more or less work done than they are? What are your work conditions like? Private office, shared office, team room? Coding alone or as a pair? How has your working condition changed the amount of time you spend coding each day? If you can work from home, does that help or hurt your productivity? What development methodology do you use? Waterfall? Agile? Has changing from one methodology to another had an impact on your coding hours per day? Most importantly: Are you happy with your productivity? If not, what single change would you make that would have the most impact on it?
ANSWER:
I'm a corporate developer, the kind Joel Spolsky called "depressed" in a couple of the StackOverflow podcasts. Because my company is not a software company, it has little business reason to implement many of the measures software experts recommend companies engage for developer productivity. We don't get private offices and dual 30 inch monitors. Our source control system is Microsoft Visual Source Safe. Enough said. On the other hand, I get to do a lot of things that fill out my day and add some variety to my job. I get involved in business analysis, project management, development, production support, international implementations, training support, team planning, and process improvement. I'd say I get 85% of my day to code, when I can focus and I have a major programming task. But more often I get about 50% of my day for coding. If production support (non coding-related) is heavy I may only get 15% of my day to code. Most of the companies I've worked for were not actively engaged in evaluating agile processes or test-driven development, but they didn't do a good job of waterfall either; most of their developers worked like cut-and-paste cowboys with impunity. On occasion I do work from home, and with kids it's horrible. I'm more productive at work. My productivity is good, but could be better if the interruption factor and cost of mental context switching were removed. Production support and project management overhead both create those types of interruptions. But both are necessary parts of the job, so I don't think I can get rid of them. What I would like to consider is a restructuring of the team so that people on projects could focus on projects while the others could block the interruptions by being dedicated to support. And then swapping when the project is over. Unfortunately, no one wants to do support, so the other productivity improvement measure I'd wish for would be one of the following: better testing tools/methodologies to speed up unit testing, or better business analysis tools/skills to improve the quality of new development and limit its contributions to the production support load. | [
"estimation",
"time-management"
] | 28 | 21 | 11,843 | 9 | 0 | 2008-08-16T20:30:56.273000 | 2008-08-16T21:25:03.607000 |
13,345 | 13,351 | Firefox plugin - sockets | I've always wanted a way to make a socket connection to a server and allow the server to manipulate the page DOM. For example, this could be used in a stock quotes page, so the server can push new quotes as they become available. I know this is a classic limitation (feature?) of HTTP's request/response protocol, but I think this could be implemented as a Firefox plugin (cross-browser compatibility is not important for my application). Java/Flash solutions are not acceptable, because (as far as I know) they live in a box and can't interact with the DOM. Can anyone confirm whether this is within the ability of a Firefox plugin? Has someone already created this or something similar? | You may want to look at Comet, which is a fancy name for a long-running HTTP connection where the server can push updates to the page. | Firefox plugin - sockets I've always wanted a way to make a socket connection to a server and allow the server to manipulate the page DOM. For example, this could be used in a stock quotes page, so the server can push new quotes as they become available. I know this is a classic limitation (feature?) of HTTP's request/response protocol, but I think this could be implemented as a Firefox plugin (cross-browser compatibility is not important for my application). Java/Flash solutions are not acceptable, because (as far as I know) they live in a box and can't interact with the DOM. Can anyone confirm whether this is within the ability of a Firefox plugin? Has someone already created this or something similar? | TITLE:
Firefox plugin - sockets
QUESTION:
I've always wanted a way to make a socket connection to a server and allow the server to manipulate the page DOM. For example, this could be used in a stock quotes page, so the server can push new quotes as they become available. I know this is a classic limitation (feature?) of HTTP's request/response protocol, but I think this could be implemented as a Firefox plugin (cross-browser compatibility is not important for my application). Java/Flash solutions are not acceptable, because (as far as I know) they live in a box and can't interact with the DOM. Can anyone confirm whether this is within the ability of a Firefox plugin? Has someone already created this or something similar?
ANSWER:
You may want to look at Comet, which is a fancy name for a long-running HTTP connection where the server can push updates to the page. | [
"firefox",
"dom",
"sockets",
"plugins"
] | 4 | 2 | 4,543 | 4 | 0 | 2008-08-16T20:31:44.867000 | 2008-08-16T20:47:26.553000 |
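A bare-bones long-polling sketch of the Comet idea from that answer, in page JavaScript; the /quotes URL and the updateDom function are hypothetical, and a real server would hold the GET open until it had new data:

    function poll() {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) updateDom(xhr.responseText); // new quotes arrived
                poll(); // immediately re-open the connection and wait again
            }
        };
        xhr.open("GET", "/quotes", true);
        xhr.send(null);
    }
    poll();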
13,347 | 13,375 | Developing for multiple monitors | We are currently working on a new version of our main application. one thing that I really wish to work on is providing support for multiple monitors. Increasingly, our target users are adding second screens to their desktops and I think our product could leverage this extra space to improve user performance. Our application is a financial package that supports leasing and fleet companies - a very specialised market. That being said, I am sure that many people with multiple monitors have a favourite bit of software that they think would be improved if it supported those extra screens better. I'm looking for some opinions on those niggles that you have with current software, and how you think they could be improved to support multi-monitor setups. My aim is to then review these and decide how I can implement them and, hopefully, provide an even better environment for my users. Your help is appreciated. Thankyou. | Few random tips: If multiple windows can be open at one time, allow users to have them on separate screens. Seems obvious, but some very popular apps (e.g. Visual Studio) fail miserably at this. Remember the position of the last opened window, and open new windows on the same screen as before. However, sometimes users switch between multiple and single-display (e.g. docking a laptop with an external CRT), so watch cover this case as well. Consider how your particular users work, and how having two maximized windows simultaneously might help. Often, there is a (fairly passive) window for reference (e.g. a web browser/help) and an active window for data entry (e.g. editor/database) that users switch between. Do not put toolboxes/toolbars on a different window than objects they operate on (it's inconvenient to move the mouse so far). | Developing for multiple monitors We are currently working on a new version of our main application. one thing that I really wish to work on is providing support for multiple monitors. Increasingly, our target users are adding second screens to their desktops and I think our product could leverage this extra space to improve user performance. Our application is a financial package that supports leasing and fleet companies - a very specialised market. That being said, I am sure that many people with multiple monitors have a favourite bit of software that they think would be improved if it supported those extra screens better. I'm looking for some opinions on those niggles that you have with current software, and how you think they could be improved to support multi-monitor setups. My aim is to then review these and decide how I can implement them and, hopefully, provide an even better environment for my users. Your help is appreciated. Thankyou. | TITLE:
Developing for multiple monitors
QUESTION:
We are currently working on a new version of our main application. One thing that I really wish to work on is providing support for multiple monitors. Increasingly, our target users are adding second screens to their desktops and I think our product could leverage this extra space to improve user performance. Our application is a financial package that supports leasing and fleet companies - a very specialised market. That being said, I am sure that many people with multiple monitors have a favourite bit of software that they think would be improved if it supported those extra screens better. I'm looking for some opinions on those niggles that you have with current software, and how you think they could be improved to support multi-monitor setups. My aim is to then review these and decide how I can implement them and, hopefully, provide an even better environment for my users. Your help is appreciated. Thank you.
ANSWER:
A few random tips: If multiple windows can be open at one time, allow users to have them on separate screens. Seems obvious, but some very popular apps (e.g. Visual Studio) fail miserably at this. Remember the position of the last opened window, and open new windows on the same screen as before. However, sometimes users switch between multiple and single-display setups (e.g. docking a laptop with an external CRT), so watch out for this case as well. Consider how your particular users work, and how having two maximized windows simultaneously might help. Often, there is a (fairly passive) window for reference (e.g. a web browser/help) and an active window for data entry (e.g. editor/database) that users switch between. Do not put toolboxes/toolbars on a different window than the objects they operate on (it's inconvenient to move the mouse so far). | [
"user-interface",
"hardware",
"monitor",
"environment"
] | 9 | 14 | 1,531 | 7 | 0 | 2008-08-16T20:37:09.053000 | 2008-08-16T21:37:14.360000 |
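A small WinForms sketch of the "open new windows on the same screen as before" tip from that answer, assuming the app persists the last window bounds somewhere (the saved location/size values are placeholders for whatever persistence the app already has):

    using System.Drawing;
    using System.Windows.Forms;

    static void RestoreBounds(Form f, Point savedLocation, Size savedSize)
    {
        // Reopen the window where it last was, but only if that screen still
        // exists - the user may have undocked the laptop since the last run.
        foreach (Screen s in Screen.AllScreens)
        {
            if (s.WorkingArea.Contains(savedLocation))
            {
                f.StartPosition = FormStartPosition.Manual;
                f.Location = savedLocation;
                f.Size = savedSize;
                return;
            }
        }
        f.StartPosition = FormStartPosition.WindowsDefaultLocation; // fall back
    }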
13,348 | 13,352 | What are the advantages of using a single database for EACH client? | In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture? I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage. I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision... I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used. Any input is greatly appreciated! | Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious. In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client. You also get benefits of separability - it's trivial to pull out the data associated with a given client,and move them to a different server. Or restore a backup of that client when the call up to say "We've deleted some key data!", using the builtin database mechanisms. You get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines. You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database. I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "But the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to doing easy reporting on all clients at once, are minimal - how often do you want a report on the whole world, rather than just one client? | What are the advantages of using a single database for EACH client? 
In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture? I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage. I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision... I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used. Any input is greatly appreciated! | TITLE:
What are the advantages of using a single database for EACH client?
QUESTION:
In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture? I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage. I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision... I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used. Any input is greatly appreciated!
ANSWER:
Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious. In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client. You also get benefits of separability - it's trivial to pull out the data associated with a given client, and move them to a different server. Or restore a backup of that client when they call up to say "We've deleted some key data!", using the built-in database mechanisms. You get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines. You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database. I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "But the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to do easy reporting on all clients at once are minimal - how often do you want a report on the whole world, rather than just one client? | [
"database",
"database-design",
"multi-tenant"
] | 68 | 53 | 21,273 | 8 | 0 | 2008-08-16T20:39:26.070000 | 2008-08-16T20:48:00.403000 |
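A sketch of what per-client routing looks like in application code under the one-database-per-client model; the client IDs and connection strings below are invented, and in practice the map would live in a small master/directory database or a config file:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class ClientDatabases
    {
        // One connection string per client database.
        static readonly Dictionary<string, string> ConnectionStrings =
            new Dictionary<string, string>
            {
                { "acme",    "Server=db1;Database=App_Acme;Integrated Security=SSPI" },
                { "initech", "Server=db2;Database=App_Initech;Integrated Security=SSPI" }
            };

        public static SqlConnection Open(string clientId)
        {
            SqlConnection conn = new SqlConnection(ConnectionStrings[clientId]);
            conn.Open();
            return conn; // every query the caller runs now touches only this client's data
        }
    }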
13,353 | 13,358 | Override tab behavior in WinForms | I have a UserControl that consists of three TextBoxes. On a form I can have one or more or my UserControl. I want to implement my own tab behavior so if the user presses Tab in the second TextBox I should only move to the third TextBox if the the second TextBox has anything entered. If nothing is entered in the second TextBox the next control of the form should get focus as per the normal tab behavior. If the user hasn't entered anything in the first or second TextBox and the presses tab there is this special case where a control on the form should be skipped. By using the ProcessDialogKey I have managed to get it work kind of ok but I still have one problem. My question is if there is a way to detect how a WinForms control got focus since I would also like to know if the my UserControl got focus from a Tab or Shift-Tab and then do my weird stuff but if the user clicks the control I don't want to do anything special. | As a general rule, I would say overriding the standard behavior of the TAB key would be a bad idea. Maybe you can do something like disabling the 3rd text box until a valid entry is made in the 2nd text box. Now, having said this, I've also broken this rule at the request of the customer. We made the enter key function like the tab key, where the enter key would save the value in a text field, and advance the cursor to the next field. | Override tab behavior in WinForms I have a UserControl that consists of three TextBoxes. On a form I can have one or more or my UserControl. I want to implement my own tab behavior so if the user presses Tab in the second TextBox I should only move to the third TextBox if the the second TextBox has anything entered. If nothing is entered in the second TextBox the next control of the form should get focus as per the normal tab behavior. If the user hasn't entered anything in the first or second TextBox and the presses tab there is this special case where a control on the form should be skipped. By using the ProcessDialogKey I have managed to get it work kind of ok but I still have one problem. My question is if there is a way to detect how a WinForms control got focus since I would also like to know if the my UserControl got focus from a Tab or Shift-Tab and then do my weird stuff but if the user clicks the control I don't want to do anything special. | TITLE:
Override tab behavior in WinForms
QUESTION:
I have a UserControl that consists of three TextBoxes. On a form I can have one or more of my UserControl. I want to implement my own tab behavior so if the user presses Tab in the second TextBox I should only move to the third TextBox if the second TextBox has anything entered. If nothing is entered in the second TextBox the next control of the form should get focus as per the normal tab behavior. If the user hasn't entered anything in the first or second TextBox and then presses Tab there is this special case where a control on the form should be skipped. By using ProcessDialogKey I have managed to get it working kind of OK but I still have one problem. My question is if there is a way to detect how a WinForms control got focus, since I would also like to know if my UserControl got focus from a Tab or Shift-Tab, and then do my weird stuff, but if the user clicks the control I don't want to do anything special.
ANSWER:
As a general rule, I would say overriding the standard behavior of the TAB key would be a bad idea. Maybe you can do something like disabling the 3rd text box until a valid entry is made in the 2nd text box. Now, having said this, I've also broken this rule at the request of the customer. We made the enter key function like the tab key, where the enter key would save the value in a text field, and advance the cursor to the next field. | [
"c#",
".net",
"winforms"
] | 5 | 3 | 3,630 | 5 | 0 | 2008-08-16T20:48:00.590000 | 2008-08-16T20:55:29.483000 |
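For the ProcessDialogKey route the question mentions, a minimal sketch inside the UserControl might look like the following; textBox2 and textBox3 stand in for the control's own fields, and the asker's extra skip-a-control rule is left out:

    protected override bool ProcessDialogKey(Keys keyData)
    {
        // Only intercept a plain Tab pressed while the second box has text;
        // everything else keeps the normal dialog-key handling.
        if (keyData == Keys.Tab && textBox2.Focused && textBox2.Text.Length > 0)
        {
            textBox3.Focus();
            return true; // true = we handled the key ourselves
        }
        return base.ProcessDialogKey(keyData);
    }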
13,362 | 13,365 | Scrolling Overflowed DIVs with JavaScript | I've got a div that uses overflow:auto to keep the contents inside the div as it is resized and dragged around the page. I'm using some ajax to retrieve lines of text from the server, then append them to the end of the div, so the content is growing downwards. Every time this happens, I'd like to use JS to scroll the div to the bottom so the most recently added content is visible, similar to the way a chat room or command line console would work. So far I've been using this snippet to do it (I'm also using jQuery, hence the $() function): $("#thediv").scrollTop = $("#thediv").scrollHeight; However it's been giving me inconsistent results. Sometimes it works, sometimes not, and it completely ceases to work if the user ever resizes the div or moves the scroll bar manually. The target browser is Firefox 3, and it's being deployed in a controlled environment so it doesn't need to work in IE at all. Any ideas guys? This one's got me stumped. Thanks! | scrollHeight should be the total height of content. scrollTop specifies the pixel offset into that content to be displayed at the top of the element's client area. So you really want (still using jQuery): $("#thediv").each( function() { // certain browsers have a bug such that scrollHeight is too small // when content does not fill the client area of the element var scrollHeight = Math.max(this.scrollHeight, this.clientHeight); this.scrollTop = scrollHeight - this.clientHeight; });...which will set the scroll offset to the last clientHeight worth of content. | Scrolling Overflowed DIVs with JavaScript I've got a div that uses overflow:auto to keep the contents inside the div as it is resized and dragged around the page. I'm using some ajax to retrieve lines of text from the server, then append them to the end of the div, so the content is growing downwards. Every time this happens, I'd like to use JS to scroll the div to the bottom so the most recently added content is visible, similar to the way a chat room or command line console would work. So far I've been using this snippet to do it (I'm also using jQuery, hence the $() function): $("#thediv").scrollTop = $("#thediv").scrollHeight; However it's been giving me inconsistent results. Sometimes it works, sometimes not, and it completely ceases to work if the user ever resizes the div or moves the scroll bar manually. The target browser is Firefox 3, and it's being deployed in a controlled environment so it doesn't need to work in IE at all. Any ideas guys? This one's got me stumped. Thanks! | TITLE:
Scrolling Overflowed DIVs with JavaScript
QUESTION:
I've got a div that uses overflow:auto to keep the contents inside the div as it is resized and dragged around the page. I'm using some ajax to retrieve lines of text from the server, then append them to the end of the div, so the content is growing downwards. Every time this happens, I'd like to use JS to scroll the div to the bottom so the most recently added content is visible, similar to the way a chat room or command line console would work. So far I've been using this snippet to do it (I'm also using jQuery, hence the $() function): $("#thediv").scrollTop = $("#thediv").scrollHeight; However it's been giving me inconsistent results. Sometimes it works, sometimes not, and it completely ceases to work if the user ever resizes the div or moves the scroll bar manually. The target browser is Firefox 3, and it's being deployed in a controlled environment so it doesn't need to work in IE at all. Any ideas guys? This one's got me stumped. Thanks!
ANSWER:
scrollHeight should be the total height of content. scrollTop specifies the pixel offset into that content to be displayed at the top of the element's client area. So you really want (still using jQuery):
$("#thediv").each( function() {
// certain browsers have a bug such that scrollHeight is too small
// when content does not fill the client area of the element
var scrollHeight = Math.max(this.scrollHeight, this.clientHeight);
this.scrollTop = scrollHeight - this.clientHeight;
});
...which will set the scroll offset to the last clientHeight worth of content. | [
"javascript",
"jquery",
"ajax",
"html",
"scroll"
] | 50 | 46 | 75,042 | 6 | 0 | 2008-08-16T21:04:17.250000 | 2008-08-16T21:10:20.827000 |
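A usage sketch tying that answer back to the original snippet - note the [0]: scrollTop and scrollHeight live on the raw DOM element, not on the jQuery wrapper, which is why the $("#thediv").scrollTop = ... assignment in the question silently did nothing:

    function appendAndScroll(line) { // hypothetical AJAX success callback
        $("#thediv").append("<div>" + line + "</div>");
        var el = $("#thediv")[0]; // unwrap to the DOM element
        el.scrollTop = Math.max(el.scrollHeight, el.clientHeight) - el.clientHeight;
    }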
13,396 | 13,435 | Replacement for for... if array iteration | I love list comprehensions in Python, because they concisely represent a transformation of a list. However, in other languages, I frequently find myself writing something along the lines of: foreach (int x in intArray) if (x > 3) //generic condition on x x++ //do other processing This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering? | The increment in the original foreach loop will not affect the contents of the array, the only way to do this remains a for loop: for(int i = 0; i < intArray.Length; ++i) { if(intArray[i] > 3) ++intArray[i]; } Linq is not intended to modify existing collections or sequences. It creates new sequences based on existing ones. It is possible to achieve the above code using Linq, though it is slightly against its purposes: var newArray1 = from i in intArray select ((i > 3)? (i + 1): (i)); var newArray2 = intArray.Select(i => (i > 3)? (i + 1): (i)); Using where (or equivalent), as shown in some of the other answers, will exclude any values less than or equal to 3 from the resulting sequence. var intArray = new int[] { 10, 1, 20, 2 }; var newArray = from i in intArray where i > 3 select i + 1; // newArray == { 11, 21 } There is a ForEach method on arrays that will allow you to use a lambda function instead of a foreach block, though for anything more than a method call I would stick with foreach. intArray.ForEach(i => DoSomething(i)); | Replacement for for... if array iteration I love list comprehensions in Python, because they concisely represent a transformation of a list. However, in other languages, I frequently find myself writing something along the lines of: foreach (int x in intArray) if (x > 3) //generic condition on x x++ //do other processing This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering? | TITLE:
Replacement for for... if array iteration
QUESTION:
I love list comprehensions in Python, because they concisely represent a transformation of a list. However, in other languages, I frequently find myself writing something along the lines of:
foreach (int x in intArray)
if (x > 3) //generic condition on x
x++ //do other processing
This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering?
ANSWER:
The increment in the original foreach loop will not affect the contents of the array; the only way to do this remains a for loop:
for(int i = 0; i < intArray.Length; ++i) { if(intArray[i] > 3) ++intArray[i]; }
LINQ is not intended to modify existing collections or sequences. It creates new sequences based on existing ones. It is possible to achieve the above code using LINQ, though it is slightly against its purposes:
var newArray1 = from i in intArray select ((i > 3)? (i + 1): (i));
var newArray2 = intArray.Select(i => (i > 3)? (i + 1): (i));
Using where (or equivalent), as shown in some of the other answers, will exclude any values less than or equal to 3 from the resulting sequence.
var intArray = new int[] { 10, 1, 20, 2 };
var newArray = from i in intArray where i > 3 select i + 1; // newArray == { 11, 21 }
There is a static Array.ForEach method that will allow you to use a lambda function instead of a foreach block, though for anything more than a method call I would stick with foreach.
Array.ForEach(intArray, i => DoSomething(i)); | [
".net",
"python",
"arrays",
"loops",
"iteration"
] | 12 | 7 | 665 | 6 | 0 | 2008-08-16T22:28:39.827000 | 2008-08-17T00:44:57.650000 |
13,409 | 13,412 | Do you have "Slack" time? | The CodePlex team has a Slack time policy, and it's worked out very well for them. Jim Newkirk and myself used it to work on the xUnit.net project. Jonathan Wanagel used it to work on SvnBridge. Scott Densmore and myself used it to work on an ObjectBuilder 2.0 prototype. For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture. Have you had a formalized Slack policy on your team? How did it work out? Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time:-p). | I just want to mention Google's policy on the subject. 20% of the day should be used for private projects and research. I think it is time for managers to face the fact that most good developers are a bit lazy. If they weren't, we wouldn't have concepts like code reuse. If this laziness can be focused into a creative force, and the developers can read up on technical issues and experiment with architecture and language features, I am certain that the end result will be better code and a more satisfied developer. So, if you are a manager: Let your developers slack of now and then. Encourage them to hold small seminars with the team to discuss new ways of doing stuff. If you are a developer: Read, learn and love your craft. You have one of the best jobs in the world, as long as you are willing to put some time into learning the best ways to do your job. | Do you have "Slack" time? The CodePlex team has a Slack time policy, and it's worked out very well for them. Jim Newkirk and myself used it to work on the xUnit.net project. Jonathan Wanagel used it to work on SvnBridge. Scott Densmore and myself used it to work on an ObjectBuilder 2.0 prototype. For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture. Have you had a formalized Slack policy on your team? How did it work out? Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time:-p). | TITLE:
Do you have "Slack" time?
QUESTION:
The CodePlex team has a Slack time policy, and it's worked out very well for them. Jim Newkirk and myself used it to work on the xUnit.net project. Jonathan Wanagel used it to work on SvnBridge. Scott Densmore and myself used it to work on an ObjectBuilder 2.0 prototype. For others, it was a great time to explore things that were technically not on the schedule, but could eventually end up being of great use to the rest of the team. I'm so convinced of the value of this that if I'm ever running a team again, I'm going to make it part of the team culture. Have you had a formalized Slack policy on your team? How did it work out? Edited: I just realized I didn't define Slack. For those who haven't read the book, Slack is what Google's "20% time" is: you're given some slice of your day/week/month/year on which to work on things that are not necessarily directly related to your day-to-day job, but might have an indirect benefit (obviously if you work on stuff that's totally not useful for your job or your company, your manager probably won't think very well of the way you spent the time:-p).
ANSWER:
I just want to mention Google's policy on the subject. 20% of the day should be used for private projects and research. I think it is time for managers to face the fact that most good developers are a bit lazy. If they weren't, we wouldn't have concepts like code reuse. If this laziness can be focused into a creative force, and the developers can read up on technical issues and experiment with architecture and language features, I am certain that the end result will be better code and a more satisfied developer. So, if you are a manager: Let your developers slack off now and then. Encourage them to hold small seminars with the team to discuss new ways of doing stuff. If you are a developer: Read, learn and love your craft. You have one of the best jobs in the world, as long as you are willing to put some time into learning the best ways to do your job. | [
"time-management"
] | 21 | 19 | 2,814 | 6 | 0 | 2008-08-16T23:17:23.317000 | 2008-08-16T23:30:13.837000 |
13,430 | 13,759 | Does CruiseControl.NET run on IIS 7.0? | I'm new to development (an admin by trade) and I'm setting up my development environment and I would like to set up a CruiseControl.Net server on Server 2008. A quick Google did not turn up any instructions for getting it running on IIS 7.0, so I was wondering if anyone had experience getting this set up. | What Dale Ragan said; it installed flawlessly on our Windows Server 2008 machine, including the Dashboard running on IIS 7. Just give it a shot; should work fine. | Does CruiseControl.NET run on IIS 7.0? I'm new to development (an admin by trade) and I'm setting up my development environment and I would like to set up a CruiseControl.Net server on Server 2008. A quick Google did not turn up any instructions for getting it running on IIS 7.0, so I was wondering if anyone had experience getting this set up. | TITLE:
Does CruiseControl.NET run on IIS 7.0?
QUESTION:
I'm new to development (an admin by trade) and I'm setting up my development environment and I would like to set up a CruiseControl.Net server on Server 2008. A quick Google did not turn up any instructions for getting it running on IIS 7.0, so I was wondering if anyone had experience getting this set up.
ANSWER:
What Dale Ragan said; it installed flawlessly on our Windows Server 2008 machine, including the Dashboard running on IIS 7. Just give it a shot; should work fine. | [
"iis-7",
"cruisecontrol.net"
] | 8 | 4 | 3,721 | 4 | 0 | 2008-08-17T00:37:17.680000 | 2008-08-17T17:38:12.783000 |
13,460 | 13,465 | The theory (and terminology) behind Source Control | I've tried using source control for a couple projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there is a recommended way to set up source control systems, what are they? What are the reasons and benifits for setting it up that way? What is the underlying differences between the workings of a centralized and distributed source control system? | I recommend checking out the following from Eric Sink: http://www.ericsink.com/scm/source_control.html Having some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against previous known working version to understand what might have gone wrong due to a change. | The theory (and terminology) behind Source Control I've tried using source control for a couple projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there is a recommended way to set up source control systems, what are they? What are the reasons and benifits for setting it up that way? What is the underlying differences between the workings of a centralized and distributed source control system? | TITLE:
The theory (and terminology) behind Source Control
QUESTION:
I've tried using source control for a couple of projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there are recommended ways to set up source control systems, what are they? What are the reasons and benefits for setting it up that way? What are the underlying differences between the workings of a centralized and a distributed source control system?
ANSWER:
I recommend checking out the following from Eric Sink: http://www.ericsink.com/scm/source_control.html Having some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against previous known working version to understand what might have gone wrong due to a change. | [
"version-control"
] | 7 | 6 | 1,225 | 5 | 0 | 2008-08-17T01:38:24.797000 | 2008-08-17T01:44:24.237000 |
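To make the diff-against-a-known-good-version point concrete with the TortoiseSVN/Subversion setup from the question (the revision number 1200 is made up):

    svn log -l 5              # the last five revisions: who changed what, and when
    svn diff -r 1200:HEAD     # everything that has changed since revision 1200
    svn merge -r HEAD:1200 .  # reverse-merge: roll the working copy back to r1200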
13,470 | 13,484 | Purpose of {1} in this regular expression to match url protocols | I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution: ((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+) The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done. I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this: What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed? If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something? | @Jeff Atwood, your interpretation is a little off - the {1} means match exactly once, but has no effect on the "capturing" - the capturing occurs because of the parens - the braces only specify the number of times the pattern must match the source - once, as you say. I agree with @Marius, even if his answer is a little terse and may come off as being flippant. Regular expressions are tough, if one's not used to using them, and the {1} in the question isn't quite error - in systems that support it, it does mean "exactly one match". In this sense, it doesn't really do anything. Unfortunately, contrary to a now-deleted post, it doesn't keep the regexp from matching http://http://example.org, since the \S+ at the end will match one or more non-whitespace characters, including the http://example.org in http://http://example.org (verified using Python 2.5, just in case my regexp reading was off). So, the regexp given isn't really the best. I'm not a URL expert, but probably something limiting the appearance of ":"s and "//"s after the first one would be necessary (but hardly sufficient) to ensure good URLs. | Purpose of {1} in this regular expression to match url protocols I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution: ((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+) The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done. I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this: What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed? If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something? | TITLE:
Purpose of {1} in this regular expression to match url protocols
QUESTION:
I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution: ((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+) The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done. I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this: What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed? If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something?
ANSWER:
@Jeff Atwood, your interpretation is a little off - the {1} means match exactly once, but has no effect on the "capturing" - the capturing occurs because of the parens - the braces only specify the number of times the pattern must match the source - once, as you say. I agree with @Marius, even if his answer is a little terse and may come off as being flippant. Regular expressions are tough if one's not used to them, and the {1} in the question isn't quite an error - in systems that support it, it does mean "exactly one match". In this sense, it doesn't really do anything. Unfortunately, contrary to a now-deleted post, it doesn't keep the regexp from matching http://http://example.org, since the \S+ at the end will match one or more non-whitespace characters, including the http://example.org in http://http://example.org (verified using Python 2.5, just in case my regexp reading was off). So, the regexp given isn't really the best. I'm not a URL expert, but probably something limiting the appearance of ":"s and "//"s after the first one would be necessary (but hardly sufficient) to ensure good URLs. | [
"regex"
] | 4 | 1 | 938 | 4 | 0 | 2008-08-17T02:01:56.230000 | 2008-08-17T02:56:56.337000 |
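Both claims in that answer are easy to check with Python's re module (the test string is made up):

    import re
    pat_with = re.compile(r'((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)')
    pat_without = re.compile(r'((mailto\:|(news|(ht|f)tp(s?))\://)\S+)')
    s = 'see http://http://example.org and ftp://host/file'
    print(pat_with.findall(s) == pat_without.findall(s))  # True: dropping {1} changes nothing
    print(pat_with.match('http://http://example.org').group(0))  # the whole malformed URL matches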
13,473 | 13,481 | How does one rank an array (sort) by value? *With a twist* | I would like to sort an array in ascending order using C/C++. The outcome is an array containing element indexes. Each index is corespondent to the element location in the sorted array. Example Input: 1, 3, 4, 9, 6 Output: 1, 2, 3, 5, 4 Edit: I am using shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values are first in the original array. Update: Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile. Could someone please tell me what's wrong? I'd very much appreciate some help! void SortArray(int ** pArray, int ArrayLength) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no *
for (i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength - 1); j++) { if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to < { &temp = &pArray[j]; // swap elements &pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here &pArray[j + 1] = &temp flag = 1; // indicates that a swap occurred. } } } }; | Since you're using C++, I would do it something like this. The SortIntPointers function can be any sort algorithm, the important part is that it sorts the array of pointers based on the int that they are pointing to. Once that is done, you can go through the array of pointers and assign their sorted index which will end up in the original position in the original array. int* intArray; // set somewhere else int arrayLen; // set somewhere else
int** pintArray = new int*[arrayLen]; for(int i = 0; i < arrayLen; ++i) { pintArray[i] = &intArray[i]; }
// This function sorts the pointers according to the values they // point to. In effect, it sorts intArray without losing the positional // information. SortIntPointers(pintArray, arrayLen);
// Dereference the pointers and assign their sorted position. for(int i = 0; i < arrayLen; ++i) { *pintArray[i] = i; } Hopefully that's clear enough. | How does one rank an array (sort) by value? *With a twist* I would like to sort an array in ascending order using C/C++. The outcome is an array containing element indexes. Each index is corespondent to the element location in the sorted array. Example Input: 1, 3, 4, 9, 6 Output: 1, 2, 3, 5, 4 Edit: I am using shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values are first in the original array. Update: Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile. Could someone please tell me what's wrong? I'd very much appreciate some help! void SortArray(int ** pArray, int ArrayLength) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no *
for (i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength - 1); j++) { if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to < { &temp = &pArray[j]; // swap elements &pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here &pArray[j + 1] = &temp flag = 1; // indicates that a swap occurred. } } } }; | TITLE:
How does one rank an array (sort) by value? *With a twist*
QUESTION:
I would like to sort an array in ascending order using C/C++. The outcome is an array containing element indexes. Each index corresponds to the element's location in the sorted array. Example Input: 1, 3, 4, 9, 6 Output: 1, 2, 3, 5, 4 Edit: I am using a shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values come first in the original array. Update: Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile. Could someone please tell me what's wrong? I'd very much appreciate some help!
void SortArray(int ** pArray, int ArrayLength) {
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for (i = 1; (i <= ArrayLength) && flag; i++) {
flag = 0;
for (j = 0; j < (ArrayLength - 1); j++) {
if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to <
{
&temp = &pArray[j]; // swap elements
&pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here
&pArray[j + 1] = &temp
flag = 1; // indicates that a swap occurred.
}
}
}
};
ANSWER:
Since you're using C++, I would do it something like this. The SortIntPointers function can be any sort algorithm, the important part is that it sorts the array of pointers based on the int that they are pointing to. Once that is done, you can go through the array of pointers and assign their sorted index which will end up in the original position in the original array.
int* intArray; // set somewhere else
int arrayLen; // set somewhere else
int** pintArray = new int*[arrayLen]; for(int i = 0; i < arrayLen; ++i) { pintArray[i] = &intArray[i]; }
// This function sorts the pointers according to the values they
// point to. In effect, it sorts intArray without losing the positional
// information.
SortIntPointers(pintArray, arrayLen);
// Dereference the pointers and assign their sorted position.
for(int i = 0; i < arrayLen; ++i) { *pintArray[i] = i; }
Hopefully that's clear enough. | [
"c++",
"c",
"arrays",
"sorting"
] | 6 | 7 | 16,867 | 7 | 0 | 2008-08-17T02:13:55.533000 | 2008-08-17T02:33:32.687000 |
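A compilable sketch of the pointer-sorting idea from that answer, with std::sort standing in for the undefined SortIntPointers placeholder (the variable names are invented):

    #include <algorithm>
    #include <cstdio>

    // Comparator: order pointers by the values they point to.
    static bool lessByValue(const int* x, const int* y) { return *x < *y; }

    int main() {
        int a[] = {1, 3, 4, 9, 6};
        const int n = sizeof(a) / sizeof(a[0]);

        int* p[n];
        for (int i = 0; i < n; ++i) p[i] = &a[i];

        std::sort(p, p + n, lessByValue); // sorts the pointers, not the ints

        for (int i = 0; i < n; ++i) *p[i] = i + 1; // write 1-based ranks back through the pointers

        for (int i = 0; i < n; ++i) printf("%d ", a[i]); // prints: 1 2 3 5 4
        return 0;
    }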
13,485 | 13,495 | How do you keep two related, but separate, systems in sync with each other? | My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility. The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database. The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world. The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application. So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website? It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick. So far, I have thought about using the following types of approaches: Bi-directional replication Web service interfaces on both sides with code to sync the changes as they are made (in real time). Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism). Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you? | This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal. You should be able to achieve near real time synchronization without the overhead or complexity of something like replication. Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve. There are some open source tools that can really make this easy for you if you are using.NET (especially if you want to use MSMQ). nServiceBus by Udi Dahan Mass Transit by Dru Sellers and Chris Patterson There are commercial products also, and if you are considering a commercial option see here for a list of of options on.NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job. If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ. 
You may also want to consider reading Udi Dahan' s blog, listening to some of his podcasts. Here are some more good resources to get you started. | How do you keep two related, but separate, systems in sync with each other? My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility. The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database. The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world. The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application. So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website? It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick. So far, I have thought about using the following types of approaches: Bi-directional replication Web service interfaces on both sides with code to sync the changes as they are made (in real time). Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism). Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you? | TITLE:
How do you keep two related, but separate, systems in sync with each other?
QUESTION:
My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility. The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database. The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world. The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application. So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website? It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick. So far, I have thought about using the following types of approaches: Bi-directional replication Web service interfaces on both sides with code to sync the changes as they are made (in real time). Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism). Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
ANSWER:
This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal. You should be able to achieve near real-time synchronization without the overhead or complexity of something like replication. Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve. There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ): nServiceBus by Udi Dahan; Mass Transit by Dru Sellers and Chris Patterson. There are commercial products also, and if you are considering a commercial option see here for a list of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job. If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ. You may also want to consider reading Udi Dahan's blog, listening to some of his podcasts. Here are some more good resources to get you started. | [
"sql-server",
"database",
"synchronization",
"distributed"
] | 24 | 25 | 10,971 | 5 | 0 | 2008-08-17T02:57:46.633000 | 2008-08-17T03:22:49.017000 |
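A minimal sketch of the store-and-forward behavior described in the answer above, in Python. The in-memory queue.Queue here stands in for a transactional queue such as MSMQ, and the message fields are invented for illustration; the point is only that a down receiver lets messages accumulate and catches up on restart.

import queue
import threading

change_queue = queue.Queue()  # stand-in for a durable queue like MSMQ

def publish_change(record_id, payload):
    # The sending system only enqueues; it never blocks on the receiver.
    change_queue.put({"id": record_id, "payload": payload})

def receiving_system(stop_event):
    # While "down", messages simply pile up in the queue; once this
    # loop starts it drains the backlog first, then handles live traffic.
    while not stop_event.is_set() or not change_queue.empty():
        try:
            msg = change_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        print("applying change to internal DB:", msg)
        change_queue.task_done()

stop = threading.Event()
for i in range(3):                      # published while the receiver is down
    publish_change(i, {"status": "updated"})
worker = threading.Thread(target=receiving_system, args=(stop,))
worker.start()                          # receiver comes up and catches up
publish_change(99, {"status": "approved"})
change_queue.join()                     # wait until every change is applied
stop.set()
worker.join()

A real deployment would swap the in-memory queue for MSMQ (or a service bus on top of it) so the backlog survives restarts; the control flow stays the same.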
13,498 | 13,505 | simultaneous Outlook reminders on multiple devices | Disclaimer: This is not actually a programming question, but I feel the audience on stackoverflow is more likely to have an answer than most question/answer sites out there. Please forgive me, Joel, for stealing your question. Joel asked this question on a podcast a while back but I don't think it ever got resolved. I'm in the same situation, so I'm also looking for the answer. I have multiple devices that all sync with MS-Outlook. PCs, Laptops, Smartphones, PDAs, etc. all have the capability to synchronize their data (calendars, emails, contacts, etc.) with the Exchange server. I like to use the Outlook meeting notice or appointment reminders to remind me of an upcoming meeting or doctors appointment or whatever. The problem lies in the fact that all the devices pop up the same reminder and I have to go to every single device individually in order to snooze or dismiss all of the identical the reminder popups. Since this is a sync'ing technology, why doesn't the fact that I snooze or dismiss on one device sync up the other devices automatically. I've even tried to force a sync after dismissing a reminder and it still shows up on my other devices after a forced sync. This is utterly annoying to me. Is there a setting that I'm overlooking or is there a 3rd party reminder utility that I should be using instead of the built-in stuff? Thanks, Kurt | At least for PCs, the fact that you dismiss an item does get sync'd, and fairly quickly for me. I'm not sure why phones don't seem to do it, though. Maybe the ActiveSync protocol doesn't offer that option. | simultaneous Outlook reminders on multiple devices Disclaimer: This is not actually a programming question, but I feel the audience on stackoverflow is more likely to have an answer than most question/answer sites out there. Please forgive me, Joel, for stealing your question. Joel asked this question on a podcast a while back but I don't think it ever got resolved. I'm in the same situation, so I'm also looking for the answer. I have multiple devices that all sync with MS-Outlook. PCs, Laptops, Smartphones, PDAs, etc. all have the capability to synchronize their data (calendars, emails, contacts, etc.) with the Exchange server. I like to use the Outlook meeting notice or appointment reminders to remind me of an upcoming meeting or doctors appointment or whatever. The problem lies in the fact that all the devices pop up the same reminder and I have to go to every single device individually in order to snooze or dismiss all of the identical the reminder popups. Since this is a sync'ing technology, why doesn't the fact that I snooze or dismiss on one device sync up the other devices automatically. I've even tried to force a sync after dismissing a reminder and it still shows up on my other devices after a forced sync. This is utterly annoying to me. Is there a setting that I'm overlooking or is there a 3rd party reminder utility that I should be using instead of the built-in stuff? Thanks, Kurt | TITLE:
simultaneous Outlook reminders on multiple devices
QUESTION:
Disclaimer: This is not actually a programming question, but I feel the audience on stackoverflow is more likely to have an answer than most question/answer sites out there. Please forgive me, Joel, for stealing your question. Joel asked this question on a podcast a while back but I don't think it ever got resolved. I'm in the same situation, so I'm also looking for the answer. I have multiple devices that all sync with MS-Outlook. PCs, Laptops, Smartphones, PDAs, etc. all have the capability to synchronize their data (calendars, emails, contacts, etc.) with the Exchange server. I like to use the Outlook meeting notice or appointment reminders to remind me of an upcoming meeting or doctor's appointment or whatever. The problem lies in the fact that all the devices pop up the same reminder and I have to go to every single device individually in order to snooze or dismiss all of the identical reminder popups. Since this is a syncing technology, why doesn't the fact that I snooze or dismiss on one device sync up the other devices automatically? I've even tried to force a sync after dismissing a reminder and it still shows up on my other devices after a forced sync. This is utterly annoying to me. Is there a setting that I'm overlooking or is there a 3rd party reminder utility that I should be using instead of the built-in stuff? Thanks, Kurt
ANSWER:
At least for PCs, the fact that you dismiss an item does get sync'd, and fairly quickly for me. I'm not sure why phones don't seem to do it, though. Maybe the ActiveSync protocol doesn't offer that option. | [
"windows",
"outlook",
"synchronization",
"reminders"
] | 8 | 2 | 4,014 | 4 | 0 | 2008-08-17T03:25:36.607000 | 2008-08-17T04:05:15.403000 |
13,518 | 13,521 | Browser Sync across many machines | Everyone remembers google browser sync right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for google browser sync which will be a part of the Weave project. I have tried using Weave and found it to be very very slow or totally inoperable. Granted they are in a early development phase right now so I can not really complain. This specific problem of browser sync got me to thinking though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on your 'main' machine? Now you just have to know your own IP or have some way to announce it to your client browsers at work or wherever. There are several problems I can think of with this: non static IPs, Opening up ports on your local comp etc. It just seems that Mozilla does not want to handle this traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted. | Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (Mainly because I'm not very good at working with Apache to configure WebDAV) I'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my firefox profile. If you're interested in trying Mozilla Weave on a personal server, there's a tutorial here: http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/ | Browser Sync across many machines Everyone remembers google browser sync right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for google browser sync which will be a part of the Weave project. I have tried using Weave and found it to be very very slow or totally inoperable. Granted they are in a early development phase right now so I can not really complain. This specific problem of browser sync got me to thinking though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on your 'main' machine? Now you just have to know your own IP or have some way to announce it to your client browsers at work or wherever. There are several problems I can think of with this: non static IPs, Opening up ports on your local comp etc. It just seems that Mozilla does not want to handle this traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted. | TITLE:
Browser Sync across many machines
QUESTION:
Everyone remembers Google Browser Sync, right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for Google Browser Sync which will be a part of the Weave project. I have tried using Weave and found it to be very, very slow or totally inoperable. Granted, they are in an early development phase right now so I can not really complain. This specific problem of browser sync got me to thinking though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on our own 'main' machine? Then you would just have to know your own IP or have some way to announce it to your client browsers at work or wherever. There are several problems I can think of with this: non-static IPs, opening up ports on your local machine, etc. It just seems that Mozilla does not want to handle this traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted.
ANSWER:
Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (mainly because I'm not very good at working with Apache to configure WebDAV). I'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my Firefox profile. If you're interested in trying Mozilla Weave on a personal server, there's a tutorial here: http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/ (A rough Python sketch for probing a server's WebDAV support follows this record.) | [
"firefox",
"browser",
"synchronization"
] | 2 | 7 | 1,744 | 4 | 0 | 2008-08-17T04:58:38.510000 | 2008-08-17T05:17:05.697000 |
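If you want to check whether a personal server speaks enough WebDAV for a setup like this, a rough Python sketch follows. It uses the third-party requests library; the URL and credentials are placeholders, and the expected status codes are typical rather than guaranteed for every server.

import requests

BASE = "https://example.com/webdav/weave/"   # placeholder URL
AUTH = ("user", "password")                  # placeholder credentials

# WebDAV storage is plain HTTP PUT/GET, so try a round trip first.
put = requests.put(BASE + "probe.txt", data=b"hello", auth=AUTH)
print("PUT:", put.status_code)               # 201 or 204 usually means writable

get = requests.get(BASE + "probe.txt", auth=AUTH)
print("GET:", get.status_code, get.content)

# PROPFIND is the WebDAV listing verb; most servers want a Depth header.
resp = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "1"})
print("PROPFIND:", resp.status_code)         # 207 Multi-Status if supported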
13,537 | 13,552 | Bootstrapping still requires outside support | I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either writing an initial compiler in a different language. hand-coding an initial compiler in Assembly, which seems like a special case of the first To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language? | Is there a way to actually write a compiler in its own language? You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code. If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of the Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself. There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing. It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html. | Bootstrapping still requires outside support I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either writing an initial compiler in a different language. hand-coding an initial compiler in Assembly, which seems like a special case of the first To me, neither of these seem to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language? | TITLE:
Bootstrapping still requires outside support
QUESTION:
I've heard of the idea of bootstrapping a language, that is, writing a compiler/interpreter for the language in itself. I was wondering how this could be accomplished and looked around a bit, and saw someone say that it could only be done by either writing an initial compiler in a different language, or hand-coding an initial compiler in Assembly, which seems like a special case of the first. To me, neither of these seems to actually be bootstrapping a language in the sense that they both require outside support. Is there a way to actually write a compiler in its own language?
ANSWER:
Is there a way to actually write a compiler in its own language? You have to have some existing language to write your new compiler in. If you were writing a new, say, C++ compiler, you would just write it in C++ and compile it with an existing compiler first. On the other hand, if you were creating a compiler for a new language, let's call it Yazzleof, you would need to write the new compiler in another language first. Generally, this would be another programming language, but it doesn't have to be. It can be assembly, or if necessary, machine code. If you were going to bootstrap a compiler for Yazzleof, you generally wouldn't write a compiler for the full language initially. Instead you would write a compiler for Yazzle-lite, the smallest possible subset of Yazzleof (well, a pretty small subset at least). Then in Yazzle-lite, you would write a compiler for the full language. (Obviously this can occur iteratively instead of in one jump. A toy Yazzle-lite interpreter, sketched in Python, follows this record.) Because Yazzle-lite is a proper subset of Yazzleof, you now have a compiler which can compile itself. There is a really good writeup about bootstrapping a compiler from the lowest possible level (which on a modern machine is basically a hex editor), titled Bootstrapping a simple compiler from nothing. It can be found at https://web.archive.org/web/20061108010907/http://www.rano.org/bcompiler.html. | [
"compiler-construction",
"language-agnostic",
"bootstrapping"
] | 104 | 114 | 19,995 | 12 | 0 | 2008-08-17T06:46:11.210000 | 2008-08-17T07:20:55.103000 |
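A toy version of the subset-first idea from the answer above, sketched in Python. Here Python plays the part of the pre-existing language, and "Yazzle-lite" is reduced to just two stack operations; everything about the language is invented for illustration.

# A tiny interpreter for a made-up "Yazzle-lite" with PUSH <n> and ADD.
# Once a subset this small runs, a fuller Yazzleof compiler could be
# written in Yazzle-lite itself, which is the bootstrapping step.

def run_yazzle_lite(source):
    stack = []
    for line in source.strip().splitlines():
        op, *args = line.split()
        if op == "PUSH":
            stack.append(int(args[0]))
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError("unknown op: " + op)
    return stack

program = """
PUSH 2
PUSH 3
ADD
"""
print(run_yazzle_lite(program))   # [5]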
13,540 | 13,554 | Insert Update stored proc on SQL Server | I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this: update myTable set Col1=@col1, Col2=@col2 where ID=@ID if @@rowcount = 0 insert into myTable (Col1, Col2) values (@col1, @col2) My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place. The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency. Is my logic sound here? Is this how you would combine an insert and update into a stored proc? | Your assumption is right, this is the optimal way to do it and it's called upsert/merge. Importance of UPSERT - from sqlservercentral.com: For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an Insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have little reads as possible on the DB. The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required. Edit: Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe. | Insert Update stored proc on SQL Server I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this: update myTable set Col1=@col1, Col2=@col2 where ID=@ID if @@rowcount = 0 insert into myTable (Col1, Col2) values (@col1, @col2) My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place. The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency. Is my logic sound here? Is this how you would combine an insert and update into a stored proc? | TITLE:
Insert Update stored proc on SQL Server
QUESTION:
I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
    insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place. The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency. Is my logic sound here? Is this how you would combine an insert and update into a stored proc?
ANSWER:
Your assumption is right; this is the optimal way to do it, and it's called upsert/merge. Importance of UPSERT - from sqlservercentral.com: For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an Insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB. The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required. (A sketch of driving this pattern from application code follows this record.) Edit: Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safely. | [
"sql",
"sql-server",
"stored-procedures",
"upsert"
] | 112 | 63 | 55,488 | 9 | 0 | 2008-08-17T06:48:50.487000 | 2008-08-17T07:22:43.960000 |
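A sketch of driving the update-then-insert pattern from application code, using the third-party pyodbc driver against the question's myTable; the connection string is a placeholder. Note the same caveat as the answer's edit: without a transaction and appropriate locking, two concurrent callers can still race between the update and the insert.

import pyodbc

conn = pyodbc.connect("DSN=mydb")        # placeholder connection string
cur = conn.cursor()

def upsert(record_id, col1, col2):
    # Try the update first; rowcount says whether any row matched.
    cur.execute(
        "update myTable set Col1 = ?, Col2 = ? where ID = ?",
        col1, col2, record_id,
    )
    if cur.rowcount == 0:
        # No existing row matched, so insert instead.
        cur.execute(
            "insert into myTable (Col1, Col2) values (?, ?)",
            col1, col2,
        )
    conn.commit()

upsert(42, "a", "b")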
13,545 | 16,139 | .NET 3.5 SP1 and aspnet_client Crystal Reports | I recently (a few days ago) installed.NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my.net web apps. Anybody else experienced this? Am I correct in saying that this is a side effect of SP1? What is this? | No it is a side effect of Crystal Reports. If you don't need it, remove it from your computer it is nothing but a headache. It is safe to delete the aspnet_client folder. | .NET 3.5 SP1 and aspnet_client Crystal Reports I recently (a few days ago) installed.NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my.net web apps. Anybody else experienced this? Am I correct in saying that this is a side effect of SP1? What is this? | TITLE:
.NET 3.5 SP1 and aspnet_client Crystal Reports
QUESTION:
I recently (a few days ago) installed .NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my .NET web apps. Anybody else experienced this? Am I correct in saying that this is a side effect of SP1? What is this?
ANSWER:
No, it is a side effect of Crystal Reports. If you don't need it, remove it from your computer; it is nothing but a headache. It is safe to delete the aspnet_client folder. | [
".net",
"asp.net",
".net-3.5",
"crystal-reports"
] | 0 | 1 | 1,957 | 2 | 0 | 2008-08-17T07:00:25.677000 | 2008-08-19T13:20:16.953000 |
13,550 | 13,668 | Productivity gains of using CASE tools for development | I was using a CASE called MAGIC for a system I'm developing, I've never used this kind of tool before and at first sight I liked, a month later I had a lot of the application generated, I felt very productive and... I would say... satisfied. In some way a felt uncomfortable, cause, there is no code and everything I was used to, but in the other hand I could speed up my developing. The fact is that eventually I returned to use C# because I find it more flexible to develop, I can make unit testing, use CVS, I have access to more resources and basically I had "all the control". I felt that this tool didn't give me confidence and I thought that later in the project I could not manage it due to its forced established rules of development. And also a lot of things like sending emails, using my own controls, and other things had their complication, it seemed that at some point it was not going to be as easy as initially I thought and as initially the product claims. This reminds me a very nice article called " No Silver Bullet ". This CASE had its advantages but on the other hand it doesn't have resources you can consult and actually the license and certification are very expensive. For me another dissapointing thing is that because of its simplistic approach for development I felt scared on first hand cause of my unexperience on these kind of tools and second cause I thought that if I continued using it maybe it would have turned to be a complex monster that I could not manage later in the project. I think it's good to use these kind of solutions to speed up things but I wonder, why aren't these programs as popular as VS.Net, J2EE, Ruby, Python, etc. if they claim to enhance productivity better than the tools I've pointed? | We use a CASE tool at my current company for code generation and we are trying to move away from it. The benefits that it brings - a graphical representation of the code making components 'easier' to pick up for new developers - are outweighed by the disadvantges in my opinion. Those main disadvantages are: We cannot do automatic merges, making it close to impossible for parallel development on one component. Developers get dependant on the tool and 'forget' how to handcode. | Productivity gains of using CASE tools for development I was using a CASE called MAGIC for a system I'm developing, I've never used this kind of tool before and at first sight I liked, a month later I had a lot of the application generated, I felt very productive and... I would say... satisfied. In some way a felt uncomfortable, cause, there is no code and everything I was used to, but in the other hand I could speed up my developing. The fact is that eventually I returned to use C# because I find it more flexible to develop, I can make unit testing, use CVS, I have access to more resources and basically I had "all the control". I felt that this tool didn't give me confidence and I thought that later in the project I could not manage it due to its forced established rules of development. And also a lot of things like sending emails, using my own controls, and other things had their complication, it seemed that at some point it was not going to be as easy as initially I thought and as initially the product claims. This reminds me a very nice article called " No Silver Bullet ". 
This CASE had its advantages but on the other hand it doesn't have resources you can consult and actually the license and certification are very expensive. For me another dissapointing thing is that because of its simplistic approach for development I felt scared on first hand cause of my unexperience on these kind of tools and second cause I thought that if I continued using it maybe it would have turned to be a complex monster that I could not manage later in the project. I think it's good to use these kind of solutions to speed up things but I wonder, why aren't these programs as popular as VS.Net, J2EE, Ruby, Python, etc. if they claim to enhance productivity better than the tools I've pointed? | TITLE:
Productivity gains of using CASE tools for development
QUESTION:
I was using a CASE tool called MAGIC for a system I'm developing. I had never used this kind of tool before, and at first sight I liked it; a month later I had a lot of the application generated, I felt very productive and... I would say... satisfied. In some way I felt uncomfortable, because there is no code and none of what I was used to, but on the other hand I could speed up my development. The fact is that eventually I returned to C# because I find it more flexible to develop with: I can do unit testing, use CVS, I have access to more resources, and basically I had "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I could not manage it due to its forced, established rules of development. Also, a lot of things like sending emails and using my own controls had their complications; it seemed that at some point it was not going to be as easy as I initially thought and as the product initially claims. This reminds me of a very nice article called "No Silver Bullet". This CASE tool had its advantages, but on the other hand it doesn't have resources you can consult, and the license and certification are actually very expensive. For me another disappointing thing is that, because of its simplistic approach to development, I felt scared: first because of my inexperience with these kinds of tools, and second because I thought that if I continued using it, it might turn out to be a complex monster that I could not manage later in the project. I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.Net, J2EE, Ruby, Python, etc. if they claim to enhance productivity better than the tools I've pointed to?
ANSWER:
We use a CASE tool at my current company for code generation and we are trying to move away from it. The benefits that it brings - a graphical representation of the code making components 'easier' to pick up for new developers - are outweighed by the disadvantages in my opinion. The main disadvantages are: we cannot do automatic merges, making it close to impossible to develop one component in parallel; and developers get dependent on the tool and 'forget' how to hand-code. | [
"case-tools"
] | 2 | 1 | 566 | 4 | 0 | 2008-08-17T07:08:41.383000 | 2008-08-17T15:21:50.513000 |
13,578 | 13,600 | Determining how long the user is logged on to Windows | The need arose, in our product, to determine how long the current user has been logged on to Windows (specifically, Vista). It seems there is no straight forward API function for this and I couldn't find anything relevant with WMI (although I'm no expert with WMI, so I might have missed something). Any ideas? | For people not familiar with WMI (like me), here are some links: MSDN page on using WMI from various languages: http://msdn.microsoft.com/en-us/library/aa393964(VS.85).aspx reference about Win32_Session: http://msdn.microsoft.com/en-us/library/aa394422(VS.85).aspx, but the objects in Win32_session are of type Win32_LogonSession ( http://msdn.microsoft.com/en-us/library/aa394189(VS.85).aspx ), which has more interesting properties. WMI Explorer - a tool you can use to easily run queries like the one Michal posted. And here's example querying Win32_Session from VBS: strComputer = "." Set objWMIService = GetObject("winmgmts:" _ & "{impersonationLevel=impersonate}!\\" _ & strComputer & "\root\cimv2") Set sessions = objWMIService.ExecQuery _ ("select * from Win32_Session")
For Each objSession in sessions Wscript.Echo objSession.StartTime Next It alerts 6 sessions for my personal computer, perhaps you can filter by LogonType to only list the real ("interactive") users. I couldn't see how you can select the session of the "current user". [edit] and here's a result from Google to your problem: http://forum.sysinternals.com/forum_posts.asp?TID=3755 | Determining how long the user is logged on to Windows The need arose, in our product, to determine how long the current user has been logged on to Windows (specifically, Vista). It seems there is no straight forward API function for this and I couldn't find anything relevant with WMI (although I'm no expert with WMI, so I might have missed something). Any ideas? | TITLE:
Determining how long the user is logged on to Windows
QUESTION:
The need arose, in our product, to determine how long the current user has been logged on to Windows (specifically, Vista). It seems there is no straightforward API function for this and I couldn't find anything relevant with WMI (although I'm no expert with WMI, so I might have missed something). Any ideas?
ANSWER:
For people not familiar with WMI (like me), here are some links: the MSDN page on using WMI from various languages: http://msdn.microsoft.com/en-us/library/aa393964(VS.85).aspx ; the reference about Win32_Session: http://msdn.microsoft.com/en-us/library/aa394422(VS.85).aspx — but the objects in Win32_Session are of type Win32_LogonSession ( http://msdn.microsoft.com/en-us/library/aa394189(VS.85).aspx ), which has more interesting properties; and WMI Explorer - a tool you can use to easily run queries like the one Michal posted. And here's an example querying Win32_Session from VBS:
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" _
    & strComputer & "\root\cimv2")
Set sessions = objWMIService.ExecQuery _
    ("select * from Win32_Session")
For Each objSession in sessions
    Wscript.Echo objSession.StartTime
Next
It alerts 6 sessions for my personal computer; perhaps you can filter by LogonType to only list the real ("interactive") users. I couldn't see how you can select the session of the "current user". (A Python version of the same query, filtered to interactive logons, follows this record.) [edit] And here's a result from Google for your problem: http://forum.sysinternals.com/forum_posts.asp?TID=3755 | [
"windows"
] | 9 | 4 | 30,751 | 5 | 0 | 2008-08-17T10:22:25.877000 | 2008-08-17T12:24:51.310000 |
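For comparison, roughly the same query from Python using Tim Golden's third-party wmi module, which sits on the same WMI infrastructure as the VBS above. Filtering on LogonType 2 (the documented code for interactive logons) is one way to narrow the list to real users, as the answer suggests.

import wmi

c = wmi.WMI()

# Win32_LogonSession with LogonType 2 covers interactive logons.
for session in c.Win32_LogonSession(LogonType=2):
    # StartTime is a CIM datetime string such as 20080817103000.000000+060.
    print(session.LogonId, session.StartTime)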
13,586 | 13,605 | Interpreted languages - leveraging the compiled language behind the interpreter | If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written? What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter? | The line between "interpreted" and "compiled" languages is really fuzzy these days. For example, the first thing Python does when it sees source code is compile it into a bytecode representation, essentially the same as what Java does when compiling class files. This is what *.pyc files contain. Then, the python runtime executes the bytecode without referring to the original source. Traditionally, a purely interpreted language would refer to the source code continuously when executing the program. When building a language, it is a good approach to build a solid foundation on which you can implement the higher level functions. If you've got a solid, fast string handling system, then the language designer can (and should) implement something like stripslashes() outside the base runtime. This is done for at least a few reasons: The language designer can show that the language is flexible enough to handle that kind of task. The language designer actually writes real code in the language, which has tests and therefore shows that the foundation is solid. Other people can more easily read, borrow, and even change the higher level function without having to be able to build or even understand the language core. Just because a language like Python compiles to bytecode and executes that doesn't mean it is slow. There's no reason why somebody couldn't write a Just-In-Time (JIT) compiler for Python, along the lines of what Java and.NET already do, to further increase the performance. In fact, IronPython compiles Python directly to.NET bytecode, which is then run using the.NET system including the JIT. To answer your question directly, the only time a language designer would implement a function in the language behind the runtime (eg. C in the case of Python) would be to maximise the performance of that function. This is why modules such as the regular expression parser are written in C rather than native Python. On the other hand, a module like getopt.py is implemented in pure Python because it can all be done there and there's no benefit to using the corresponding C library. | Interpreted languages - leveraging the compiled language behind the interpreter If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written? 
What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter? | TITLE:
Interpreted languages - leveraging the compiled language behind the interpreter
QUESTION:
If there are any language designers out there (or people simply in the know), I'm curious about the methodology behind creating standard libraries for interpreted languages. Specifically, what seems to be the best approach? Defining standard functions/methods in the interpreted language, or performing the processing of those calls in the compiled language in which the interpreter is written? What got me to thinking about this was the SO question about a stripslashes()-like function in Python. My first thought was "why not define your own and just call it when you need it", but it raised the question: is it preferable, for such a function, to let the interpreted language handle that overhead, or would it be better to write an extension and leverage the compiled language behind the interpreter?
ANSWER:
The line between "interpreted" and "compiled" languages is really fuzzy these days. For example, the first thing Python does when it sees source code is compile it into a bytecode representation, essentially the same as what Java does when compiling class files. This is what *.pyc files contain. Then, the Python runtime executes the bytecode without referring to the original source. Traditionally, a purely interpreted language would refer to the source code continuously when executing the program. When building a language, it is a good approach to build a solid foundation on which you can implement the higher level functions. If you've got a solid, fast string handling system, then the language designer can (and should) implement something like stripslashes() outside the base runtime. This is done for at least a few reasons: The language designer can show that the language is flexible enough to handle that kind of task. The language designer actually writes real code in the language, which has tests and therefore shows that the foundation is solid. Other people can more easily read, borrow, and even change the higher level function without having to be able to build or even understand the language core. Just because a language like Python compiles to bytecode and executes that doesn't mean it is slow. There's no reason why somebody couldn't write a Just-In-Time (JIT) compiler for Python, along the lines of what Java and .NET already do, to further increase the performance. In fact, IronPython compiles Python directly to .NET bytecode, which is then run using the .NET system including the JIT. To answer your question directly, the only time a language designer would implement a function in the language behind the runtime (e.g. C in the case of Python) would be to maximise the performance of that function. This is why modules such as the regular expression parser are written in C rather than native Python. On the other hand, a module like getopt.py is implemented in pure Python because it can all be done there and there's no benefit to using the corresponding C library. (A short demonstration of the compile-to-bytecode step, using Python's own compile() and dis modules, follows this record.) | [
"performance",
"language-agnostic",
"language-features",
"interpreted-language"
] | 5 | 6 | 1,166 | 4 | 0 | 2008-08-17T11:12:51.813000 | 2008-08-17T12:39:58.860000 |
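The compile-then-execute step described in the answer above can be observed directly with Python's standard library; compile() produces the bytecode object and dis prints the instructions the runtime actually runs.

import dis

source = "x = 1 + 2\nprint(x)"

# Python compiles source to a code object (bytecode) before running it.
code = compile(source, "<example>", "exec")

# dis shows the bytecode instructions the interpreter will execute.
dis.dis(code)

# exec() then runs the already-compiled bytecode; prints 3.
exec(code)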
13,589 | 32,081 | ASP.net AJAX Drag/Drop? | I wonder if someone knows if there is a pre-made solution for this: I have a List on an ASP.net Website, and I want that the User is able to re-sort the list through Drag and Drop. Additionally, I would love to have a second list to which the user can drag items from the first list onto. So far, I found two solutions: The ReorderList from the Ajax Control Toolkit, which requires a bit of manual work to make sure changes are persisted into the database, and that does not support drag/drop between lists. The RadGrid from Telerik which does all I want, but is priced far far beyond my Budget. Does anyone else have some ideas or at least some keywords/pointers to do further investigation on? Espectially the Drag/Drop between two lists is something I am rather clueless about how to do that in ASP.net. Target Framework is 3.0 by the way. | The Mootools sortables plugin does just that, and best of all, it's free;) http://demos.mootools.net/Sortables | ASP.net AJAX Drag/Drop? I wonder if someone knows if there is a pre-made solution for this: I have a List on an ASP.net Website, and I want that the User is able to re-sort the list through Drag and Drop. Additionally, I would love to have a second list to which the user can drag items from the first list onto. So far, I found two solutions: The ReorderList from the Ajax Control Toolkit, which requires a bit of manual work to make sure changes are persisted into the database, and that does not support drag/drop between lists. The RadGrid from Telerik which does all I want, but is priced far far beyond my Budget. Does anyone else have some ideas or at least some keywords/pointers to do further investigation on? Espectially the Drag/Drop between two lists is something I am rather clueless about how to do that in ASP.net. Target Framework is 3.0 by the way. | TITLE:
ASP.net AJAX Drag/Drop?
QUESTION:
I wonder if someone knows if there is a pre-made solution for this: I have a List on an ASP.net Website, and I want the user to be able to re-sort the list through Drag and Drop. Additionally, I would love to have a second list to which the user can drag items from the first list onto. So far, I have found two solutions: the ReorderList from the Ajax Control Toolkit, which requires a bit of manual work to make sure changes are persisted into the database, and which does not support drag/drop between lists; and the RadGrid from Telerik, which does all I want but is priced far, far beyond my budget. Does anyone else have some ideas or at least some keywords/pointers to do further investigation on? Especially the drag/drop between two lists is something I am rather clueless about how to do in ASP.net. Target Framework is 3.0 by the way.
ANSWER:
The Mootools sortables plugin does just that, and best of all, it's free;) http://demos.mootools.net/Sortables | [
"asp.net",
"ajax"
] | 2 | 2 | 5,723 | 4 | 0 | 2008-08-17T11:40:38.963000 | 2008-08-28T12:30:02.403000 |