Q: Example of a build.xml for an EAR that deploys in WebSphere 6 I'm trying to convince my providers to use ANT instead of Rational Application Developer so anyone can recompile, recheck, redeploy the solution anyplace, anytime, anyhow. :P
I started a build.xml for a project that generates a JAR file but stopped there and I need real examples to compare notes. My good friends! I don't have anyone close to chat about this!
This is my build.xml so far.
(*) I edited my question based on the suggestion to use pastebin.ca
A: Here is some of the same functionality if you don't have the WAS ant tasks available or don't want to run was_ant.bat. This relies on wsadmin.bat existing in the path.
<property name="websphere.home.dir" value="${env.WS6_HOME}" />
<property name="was.server.name" value="server1" />
<property name="wsadmin.base.command" value="wsadmin.bat" />
<property name="ws.list.command" value="$AdminApp list" />
<property name="ws.install.command" value="$AdminApp install" />
<property name="ws.uninstall.command" value="$AdminApp uninstall" />
<property name="ws.save.command" value="$AdminConfig save" />
<property name="ws.setManager.command" value="set appManager [$AdminControl queryNames cell=${env.COMPUTERNAME}Node01Cell,node=${env.COMPUTERNAME}Node01,type=ApplicationManager,process=${was.server.name},*]" />
<property name="ws.startapp.command" value="$AdminControl invoke $appManager startApplication" />
<property name="ws.stopapp.command" value="$AdminControl invoke $appManager stopApplication" />
<property name="ws.conn.type" value="SOAP" />
<property name="ws.host.name" value="localhost" />
<property name="ws.port.name" value="8880" />
<property name="ws.user.name" value="username" />
<property name="ws.password.name" value="password" />
<property name="app.deployed.name" value="${artifact.filename}" />
<property name="app.contextroot.name" value="/${artifact.filename}" />
<target name="websphere-list-applications">
<exec dir="${websphere.home.dir}/bin" executable="${wsadmin.base.command}" output="waslist.txt" logError="true">
<arg line="-conntype ${ws.conn.type}" />
<arg line="-host ${ws.host.name}" />
<arg line="-port ${ws.port.name}" />
<arg line="-username ${ws.user.name}" />
<arg line="-password ${ws.password.name}" />
<arg line="-c" />
<arg value="${ws.list.command}" />
</exec>
</target>
<target name="websphere-install-application" depends="websphere-uninstall-application">
<exec executable="${websphere.home.dir}/bin/${wsadmin.base.command}" logError="true" outputproperty="websphere.install.output" failonerror="true">
<arg line="-conntype ${ws.conn.type}" />
<arg line="-host ${ws.host.name}" />
<arg line="-port ${ws.port.name}" />
<arg line="-username ${ws.user.name}" />
<arg line="-password ${ws.password.name}" />
<arg line="-c" />
<arg value="${ws.install.command} ${dist.dir}/${artifact.filename}.war {-appname ${app.deployed.name} -server ${was.server.name} -contextroot ${app.contextroot.name}}" />
<arg line="-c" />
<arg value="${ws.save.command}" />
<arg line="-c" />
<arg value="${ws.setManager.command}" />
<arg line="-c" />
<arg value="${ws.startapp.command} ${app.deployed.name}" />
<arg line="-c" />
<arg value="${ws.save.command}" />
</exec>
<echo message="${websphere.install.output}" />
</target>
<target name="websphere-uninstall-application">
<exec executable="${websphere.home.dir}/bin/${wsadmin.base.command}" logError="true" outputproperty="websphere.uninstall.output" failonerror="false">
<arg line="-conntype ${ws.conn.type}" />
<arg line="-host ${ws.host.name}" />
<arg line="-port ${ws.port.name}" />
<arg line="-username ${ws.user.name}" />
<arg line="-password ${ws.password.name}" />
<arg line="-c" />
<arg value="${ws.setManager.command}" />
<arg line="-c" />
<arg value="${ws.stopapp.command} ${app.deployed.name}" />
<arg line="-c" />
<arg value="${ws.save.command}" />
<arg line="-c" />
<arg value="${ws.uninstall.command} ${app.deployed.name}" />
<arg line="-c" />
<arg value="${ws.save.command}" />
</exec>
<echo message="${websphere.uninstall.output}" />
</target>
A: A good starting point could be this Maven plugin, not necessarily to use it as-is (or maybe yes), but because it is built on top of the WAS Ant tasks. See WAS5+Plugin+Mojo.zip\src\main\scripts\was5.build.xml
Or, as "McDowell" said, you can use the "WebSphere Application Server (WAS) Ant tasks" directly as Ant tasks.
<path id="classpath">
<fileset file="com.ibm.websphere.v61_6.1.100.ws_runtime.jar"/>
</path>
<taskdef name="wsStartApp" classname="com.ibm.websphere.ant.tasks.StartApplication" classpathref="classpath" />
<taskdef name="wsStopApp" classname="com.ibm.websphere.ant.tasks.StopApplication" classpathref="classpath" />
<taskdef name="wsInstallApp" classname="com.ibm.websphere.ant.tasks.InstallApplication" classpathref="classpath" />
<taskdef name="wsUninstallApp" classname="com.ibm.websphere.ant.tasks.UninstallApplication" classpathref="classpath" />
<target name="startWebApp1" depends="installEar">
<wsStartApp wasHome="${wasHome.dir}"
application="${remoteAppName}"
server="${clusterServerName}"
conntype="${remoteProdConnType}"
host="${remoteProdHostName}"
port="${remoteProdPort}"
user="${remoteProdUserId}"
password="${remoteProdPassword}" />
</target>
<target name="stopWebApp1" depends="prepare">
<wsStopApp wasHome="${wasHome.dir}"
application="${remoteAppName}"
server="${clusterServerName}"
conntype="${remoteConnType}"
host="${remoteHostName}"
port="${remotePort}"
user="${remoteUserId}"
password="${remotePassword}"/>
</target>
<target name="uninstallEar" depends="stopWebApp1">
<wsUninstallApp wasHome="${wasHome.dir}"
application="${remoteAppName}"
options="-cell uatNetwork -cluster DOL"
conntype="${remoteConnType}"
host="${remoteHostName}"
port="${remoteDmgrPort}"
user="${remoteUserId}"
password="${remotePassword}"/>
</target>
<target name="installEar" depends="prepare">
<wsInstallApp ear="${existingEar.dir}/${existingEar}"
wasHome="${wasHome.dir}"
options="${install_app_options}"
conntype="${remoteConnType}"
host="${remoteHostName}"
port="${remoteDmgrPort}"
user="${remoteUserId}"
password="${remotePassword}" />
</target>
Another useful link could be this.
A: My Environment: Fedora 8; WAS 6.1 (as installed with Rational Application Developer 7)
The documentation is very poor in this area and there is a dearth of practical examples.
Using the WebSphere Application Server (WAS) Ant tasks
To run as described here, you need to run them from your server profile bin directory using the ws_ant.sh or ws_ant.bat commands.
<?xml version="1.0"?>
<project name="project" default="wasListApps" basedir=".">
<description>
Script for listing installed apps.
Example run from:
/opt/IBM/SDP70/runtimes/base_v61/profiles/AppSrv01/bin
</description>
<property name="was_home"
value="/opt/IBM/SDP70/runtimes/base_v61/">
</property>
<path id="was.runtime">
<fileset dir="${was_home}/lib">
<include name="**/*.jar" />
</fileset>
<fileset dir="${was_home}/plugins">
<include name="**/*.jar" />
</fileset>
</path>
<property name="was_cp" value="${toString:was.runtime}"></property>
<property environment="env"></property>
<target name="wasListApps">
<taskdef name="wsListApp"
classname="com.ibm.websphere.ant.tasks.ListApplications"
classpath="${was_cp}">
</taskdef>
<wsListApp wasHome="${was_home}" />
</target>
</project>
Command:
./ws_ant.sh -buildfile ~/IBM/rationalsdp7.0/workspace/mywebappDeploy/applist.xml
A Deployment Script
<?xml version="1.0"?>
<project name="project" default="default" basedir=".">
<description>
Build/Deploy an EAR to WebSphere Application Server 6.1
</description>
<property name="was_home" value="/opt/IBM/SDP70/runtimes/base_v61/" />
<path id="was.runtime">
<fileset dir="${was_home}/lib">
<include name="**/*.jar" />
</fileset>
<fileset dir="${was_home}/plugins">
<include name="**/*.jar" />
</fileset>
</path>
<property name="was_cp" value="${toString:was.runtime}" />
<property environment="env" />
<property name="ear" value="${env.HOME}/IBM/rationalsdp7.0/workspace/mywebappDeploy/mywebappEAR.ear" />
<target name="default" depends="deployEar">
</target>
<target name="generateWar" depends="compileWarClasses">
<jar destfile="mywebapp.war">
<fileset dir="../mywebapp/WebContent">
</fileset>
</jar>
</target>
<target name="compileWarClasses">
<echo message="was_cp=${was_cp}" />
<javac srcdir="../mywebapp/src" destdir="../mywebapp/WebContent/WEB-INF/classes" classpath="${was_cp}">
</javac>
</target>
<target name="generateEar" depends="generateWar">
<mkdir dir="./earbin/META-INF"/>
<move file="mywebapp.war" todir="./earbin" />
<copy file="../mywebappEAR/META-INF/application.xml" todir="./earbin/META-INF" />
<jar destfile="${ear}">
<fileset dir="./earbin" />
</jar>
</target>
<!-- http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.javadoc.doc/public_html/api/com/ibm/websphere/ant/tasks/package-summary.html -->
<target name="deployEar" depends="generateEar">
<taskdef name="wsInstallApp" classname="com.ibm.websphere.ant.tasks.InstallApplication" classpath="${was_cp}"/>
<wsInstallApp ear="${ear}"
failonerror="true"
debug="true"
taskname=""
washome="${was_home}" />
</target>
</project>
Notes:
*
*You can only run this once! You cannot install if the app name is in use - see other tasks like wsUninstallApp
*It probably won't start the app either
*You need to run this on the server and the script is quite fragile
Alternatives
I would probably use Java Management Extensions (JMX). You could write a file-upload servlet that accepts an EAR and uses the deployment MBeans to deploy the EAR on the server. You would just POST the file over HTTP. This would avoid any WAS API dependencies on your dev/build machine and could be independent of any one project.
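Below is a minimal sketch of the upload half of that idea, using only the standard Servlet API. The class name is made up, and the actual deployment call is left as a placeholder, since the server-side MBean invocation depends on your WAS version.
public class EarUploadServlet extends javax.servlet.http.HttpServlet {
    @Override
    protected void doPost(javax.servlet.http.HttpServletRequest req,
                          javax.servlet.http.HttpServletResponse resp)
            throws javax.servlet.ServletException, java.io.IOException {
        // Copy the POSTed EAR to a temporary file on the server
        java.io.File earFile = java.io.File.createTempFile("upload", ".ear");
        java.io.InputStream in = req.getInputStream();
        java.io.OutputStream out = new java.io.FileOutputStream(earFile);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        out.close();
        // TODO: hand earFile to the server's deployment MBeans here (WAS-version specific)
        resp.getWriter().println("Received " + earFile.length() + " bytes");
    }
}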
A: If you just want to play around, why not use the NetBeans IDE to generate your EAR files? If you create an enterprise project it will automatically generate the Ant files for you. Good for prototyping and just getting started :-)
There is even a WAS plugin which allows automated deployment; however, this seems very shaky!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Passing multidimensional arrays as function arguments in C In C can I pass a multidimensional array to a function as a single argument when I don't know what the dimensions of the array are going to be?
Besides, my multidimensional array may contain types other than strings.
A: Pass an explicit pointer to the first element with the array dimensions as separate parameters. For example, to handle arbitrarily sized 2-d arrays of int:
void func_2d(int *p, size_t M, size_t N)
{
size_t i, j;
...
p[i*N+j] = ...;
}
which would be called as
...
int arr1[10][20];
int arr2[5][80];
...
func_2d(&arr1[0][0], 10, 20);
func_2d(&arr2[0][0], 5, 80);
Same principle applies for higher-dimension arrays:
void func_3d(int *p, size_t X, size_t Y, size_t Z)
{
size_t i, j, k;
...
p[i*Y*Z+j*Z+k] = ...;
...
}
...
int arr[10][20][30];
...
func_3d(&arr[0][0][0], 10, 20, 30);
A: You can declare your function as:
void f(int size, int data[][size]) {...}
The compiler will then do all pointer arithmetic for you.
Note that the dimensions sizes must appear before the array itself.
GNU C allows for argument declaration forwarding (in case you really need to pass dimensions after the array):
void f(int size; int data[][size], int size) {...}
The first dimension, although you can pass it as an argument too, is useless to the C compiler (even the sizeof operator, when applied to an array passed as an argument, will always treat it as a pointer to the first element).
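For example, a small self-contained C99 sketch of that parameter style (the function and variable names are made up) could look like this:
#include <stdio.h>

void fill(int rows, int cols, int data[rows][cols])
{
    /* the compiler does the index arithmetic for a variably modified parameter */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            data[i][j] = i * cols + j;
}

int main(void)
{
    int m[3][4];
    fill(3, 4, m);
    printf("%d\n", m[2][3]); /* prints 11 */
    return 0;
}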
A: You can do this with any data type. Simply make it a pointer-to-pointer:
typedef struct {
int myint;
char* mystring;
} data;
data** array;
But don't forget you still have to malloc the variable, and it does get a bit complex:
//initialize
int x,y,w,h;
w = 10; //width of array
h = 20; //height of array
//malloc the 'y' dimension
array = malloc(sizeof(data*) * h);
//iterate over 'y' dimension
for(y=0;y<h;y++){
//malloc the 'x' dimension
array[y] = malloc(sizeof(data) * w);
//iterate over the 'x' dimension
for(x=0;x<w;x++){
//malloc the string in the data structure
array[y][x].mystring = malloc(50); //50 chars
//initialize
array[y][x].myint = 6;
strcpy(array[y][x].mystring, "w00t");
}
}
The code to deallocate the structure looks similar - don't forget to call free() on everything you malloced! (Also, in robust applications you should check the return of malloc().)
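A matching cleanup sketch (assuming the same w, h and array as above) might look like this:
//free in the reverse order of allocation
for(y=0;y<h;y++){
    for(x=0;x<w;x++){
        //free the string inside each element
        free(array[y][x].mystring);
    }
    //free the 'x' dimension row
    free(array[y]);
}
//free the 'y' dimension (the array of row pointers)
free(array);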
Now let's say you want to pass this to a function. You can still use the double pointer, because you probably want to do manipulations on the data structure, not the pointer to pointers of data structures:
int whatsMyInt(data** arrayPtr, int x, int y){
return arrayPtr[y][x].myint;
}
Call this function with:
printf("My int is %d.\n", whatsMyInt(array, 2, 4));
Output:
My int is 6.
A:
In C can I pass a multidimensional array to a function as a single argument when I don't know what the dimensions of the array are going to be?
No
If by "single argument" you mean passing just the array without passing the array dimensions, no you can't. At least not for true multidimensional arrays.
You can put the dimension[s] into a structure along with the array and claim you're passing a "single argument", but that's really just packing multiple values into a single container and calling that container "one argument".
You can pass an array of known type and number of dimensions but unknown size by passing the dimensions themselves and the array like this:
void print2dIntArray( size_t x, size_t y, int array[ x ][ y ] )
{
for ( size_t ii = 0; ii < x; ii++ )
{
char *sep = "";
for ( size_t jj = 0; jj < y; jj++ )
{
printf( "%s%d", sep, array[ ii ][ jj ] );
sep = ", ";
}
printf( "\n" );
}
}
You would call that function like this:
int a[ 4 ][ 5 ];
int b[ 255 ][ 16 ];
...
print2dIntArray( 4, 5, a );
....
print2dIntArray( 255, 16, b );
Similarly, a 3-dimensional array of, for example, a struct pixel:
void print3dPixelArray( size_t x, size_t y, size_t z, struct pixel pixelArray[ x ][ y ][ z ] )
{
...
}
or a 1-dimensional double array:
void print1dDoubleArray( size_t x, double doubleArray[ x ] )
{
...
}
BUT...
However, it can be possible to pass "arrays of pointers to arrays of pointers to ... an array of type X" constructs that are often mislabeled as a "multidimensional array" as a single argument, as long as the base type X has a sentinel value that can be used to indicate the end of the final, lowest-level single-dimensional array of type X.
For example, the char **argv value passed to main() is a pointer to an array of pointers to char. The initial array of char * pointers ends with a NULL sentinel value, while each char array referenced by the array of char * pointers ends with a NUL character value of '\0'.
For example, if you can use NAN as a sentinel value because actual data won't ever be a NAN, you could print a double ** like this:
void printDoubles( double **notAnArray )
{
while ( *notAnArray )
{
char *sep = "";
for ( size_t ii = 0; !isnan( ( *notAnArray )[ ii ] ); ii++ ) /* isnan() is from <math.h>; x != NAN is always true, so it cannot be used as the test */
{
printf( "%s%f", sep, ( *notAnArray )[ ii ] );
sep = ", ";
}
notAnArray++;
}
}
A: int matmax(int **p, int dim) // p- matrix , dim- dimension of the matrix
{
return p[0][0];
}
int main()
{
int *u[5]; // will be a 5x5 matrix
for(int i = 0; i < 5; i++)
u[i] = new int[5];
u[0][0] = 1; // initialize u[0][0] - not mandatory
// put data in u[][]
printf("%d", matmax(u, 0)); //call to function
getche(); // just to see the result
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: How to enable Full-text Indexing in SQL Server 2005 Express? I am trying to enable Full-text indexing in SQL Server 2005 Express. I am running this on my laptop with Vista Ultimate.
I understand that the standard version of SQL Server Express does not have full-text indexing. I have already downloaded and installed "Microsoft SQL Server 2005 Express Edition with Advanced Services Service Pack 2" (download).
I have also ensured that both the "SQL Server (instance)" and "SQL Server FullText Search (instance)" services are running on the same account which is "Network Service".
I have also selected the option to "Use full-text indexing" in the Database Properties > Files area.
I can run the sql query "SELECT fulltextserviceproperty('IsFulltextInstalled');" and return 1.
The problem I am having is that when I have my table open in design view and select "Manage FullText Index"; the full-text index window displays the message...
"Creation of the full-text index is not available. Check that you have the correct permissions or that full-text catalogs are defined."
Any ideas on what to check or where to go next?
A: All I needed to get full-text indexing to work was the...
CREATE FULLTEXT CATALOG [myFullText] WITH ACCENT_SENSITIVITY = ON
After that I could run a CREATE FULLTEXT INDEX query or use the Manage FullText Index in MSSQL Management Studio.
A: sp_fulltext_database 'enable'
CREATE FULLTEXT CATALOG [myFullText]
WITH ACCENT_SENSITIVITY = ON
CREATE FULLTEXT INDEX ON [dbo].[tblName] KEY INDEX [PK_something] ON [myFullText] WITH CHANGE_TRACKING AUTO
ALTER FULLTEXT INDEX ON [dbo].[otherTable] ADD ([Text])
ALTER FULLTEXT INDEX ON [dbo].[teyOtherTable] ENABLE
A: Use SQL Server Management Studio.
Log in to your Windows account as an admin.
Then select the database, right-click it in SQL Server Management Studio, and select Define Full-Text Index; Management Studio guides you through the process.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What methods of caching, other than to file or database, are available? Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages).
*
*Save the cache to a file
*Save the cache to a large DB field
Are there any other (perhaps better) ways of caching or is it really just this simple?
A: Maybe you should state more precisely what you want to cache. You have all these opportunities to cache:
*
*Database access, where you cache the data by first tuning your RDBMS correctly, then by using a layer that detects repeated queries for the same data (with AdoDB, for example).
*Extracting calculations from loops in the code so you don't compute the same value multiple times. Here is your third way: storing results in the session for the user.
*Precompiling the PHP code with an extension like APC Cache. This way you don't have to compile the same PHP code for every request.
*The page sent to the user, making sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or maybe making dynamic pages completely static (having a batch process that generates .html pages); or using a proxy cache like Squid.
*Prefetching, by which I mean all those opportunities you have to improve the user experience just by doing things while the user isn't looking your way. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, precomputing results by storing complex computations in the database, etc.
From my experience, I'd bet that your code can be improved a lot before we start to talk about caching things. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like XDebug.
Also verify how well you are writing your SQL queries and how well you are indexing your tables. Then check your code again to look for opportunities to apply the rule "read many times but write just once".
Use a simple tool like YSlow to get hints about other simple things to improve. Check your code again, looking for opportunities to put logic in the browser (via JavaScript).
A: You can also cache in memory which is much more efficient. Try memcached.
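A rough sketch of how that might look from PHP, using the PECL memcache extension (this assumes a memcached daemon listening on localhost:11211; the key name, TTL, and compute_expensive_result() are made up for illustration):
<?php
$cache = new Memcache();
$cache->connect('localhost', 11211);

$key = 'expensive_query_result';
$data = $cache->get($key);
if ($data === false) {
    // cache miss: do the expensive work, then keep it for 5 minutes
    $data = compute_expensive_result();
    $cache->set($key, $data, 0, 300);
}
// ... use $data ...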
A: Seconding memcached; it does the simple stuff well and can go distributed and all that jazz if you need it to.
A: If you're using Apache, you can use mod_rewrite to statically cache your web pages. Lets say you're using PHP, and you have a request for "/somepage.php". In your .htaccess file you put the following:
RewriteEngine on
# let's not cache urls with queries, or POST/PUT/DELETE requests
RewriteCond %{QUERY_STRING} ^$
RewriteCond %{REQUEST_METHOD} ^GET$
# check that the cached file exists and is > 0 bytes
RewriteCond static_cache/%{REQUEST_URI} -s
# if all the conditions are met, rewrite this request to hit the static cache instead
RewriteRule (^.*$) static_cache$1 [L]
If your cache turns up empty, the request is handled by your php script as usual, so now it's simply a matter of making your php script store the resulting html in the cache. The simplest way to do this is using another htaccess rule to prepend and append a couple of php files to all your php requests (this might or might not be a good idea, depending on your application):
php_value auto_prepend_file "pre_cache.php"
php_value auto_append_file "post_cache.php"
Then you'd do something like this:
pre_cache.php:
ob_start();
post_cache.php:
$result = ob_get_flush();
if(!$_SERVER['QUERY_STRING']) { # Again, we're not caching query string requests
file_put_contents("static_cache" . $_SERVER['REQUEST_URI'], $result); # concatenate with '.', and key the cache file on the request URI so the rewrite rule above can find it
}
With some additional regular expressions in the .htaccess file we could probably start caching query string requests as well, but I'll leave that as an exercise for the reader :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: SVN merge merged extra stuff I just did a merge using something like:
svn merge -r 67212:67213 https://my.svn.repository/trunk .
I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge).
I then later diffed on the file I was merging from:
svn diff -r 67212:67213 ChangeLog
And I see just the changes I had made, so I know that extra changes didn't get in there somehow.
This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened?
UPDATE: In response to NilObject:
So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. When I go to my regular trunk checkout, I do the diff command above and see:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67212)
+++ ChangeLog (revision 67213)
@@ -1,3 +1,7 @@
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
2008-08-06 Someone Else <their_email>
* theirChanges: Details.
After my merge of the previous changes, the diff of ChangeLog looks like this:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67215)
+++ ChangeLog (working copy)
@@ -1,3 +1,14 @@
+<<<<<<< .working
+=======
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
+2008-08-06 Someone Else <their_email>
+
+ * theirChanges: Details.
+
+>>>>>>> .merge-right.r67213
2008-08-05 Someone Else2 <their2_email>
* olderChange: Details.
Note that the entry that was incorrectly pulled in was not in the file I am merging it to, but yet it was not one of my changes and shouldn't have been merged anyways. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN.
A: This only happens with conflicts - basically svn tried to merge the change in, but (roughly speaking) saw the change as:
Add
2008-08-06 Mike Stone <myemail>
* changed_file: Details.
before
2008-08-06 Someone Else <their_email>
And it couldn't find the Someone Else line while doing the merge, so chucked that bit in for context when putting in the conflict. If it was a non-conflicting merge only the changes you expected would have been applied.
A: There's not really enough information to go on here.
svn merge -r 67212:67213 https://my.svn.repository/trunk .
will merge any files changed in the revision 67212 in the folder /trunk on the repository and merge them into your current working directory. If you do:
svn log -r 67212
What files does it show changed? Merge will only pull changes from the first argument, and apply them to the second. It does not upload back to the server in the first argument.
If this doesn't answer your question, could you post more details as to what exactly is happening?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What code analysis tools do you use for your Java projects? What code analysis tools do you use on your Java projects?
I am interested in all kinds
*
*static code analysis tools (FindBugs, PMD, and any others)
*code coverage tools (Cobertura, Emma, and any others)
*any other instrumentation-based tools
*anything else, if I'm missing something
If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools.
If a tool is only available a specific way (as an IDE plugin, or, say, a build tool plugin) that information is also worth noting.
A: I use the static analysis built into IntelliJ IDEA. Perfect integration.
I use the code coverage built into Intellij IDEA (based on EMMA). Again, perfect integration.
This integrated solution is reliable, powerful, and easy-to-use compared to piecing together tools from various vendors.
A: For static analysis tools I often use CPD, PMD, FindBugs, and Checkstyle.
CPD is the PMD "Copy/Paste Detector" tool. I was using PMD for a little while before I noticed the "Finding Duplicated Code" link on the PMD web page.
I'd like to point out that these tools can sometimes be extended beyond their "out-of-the-box" set of rules. And not just because they're open source so that you can rewrite them. Some of these tools come with applications or "hooks" that allow them to be extended. For example, PMD comes with the "designer" tool that allows you to create new rules. Also, Checkstyle has the DescendantToken check that has properties that allow for substantial customization.
I integrate these tools with an Ant-based build. You can follow the link to see my commented configuration.
In addition to the simple integration into the build, I find it helpful to configure the tools to be somewhat "integrated" in a couple of other ways. Namely, report generation and warning suppression uniformity. I'd like to add these aspects to this discussion (which should probably have the "static-analysis" tag also): how are folks configuring these tools to create a "unified" solution? (I've asked this question separately here)
First, for warning reports, I transform the output so that each warning has the simple format:
/absolute-path/filename:line-number:column-number: warning(tool-name): message
This is often called the "Emacs format," but even if you aren't using Emacs, it's a reasonable format for homogenizing reports. For example:
/project/src/com/example/Foo.java:425:9: warning(Checkstyle):Missing a Javadoc comment.
My warning format transformations are done by my Ant script with Ant filterchains.
The second "integration" that I do is for warning suppression. By default, each tool supports comments or an annotation (or both) that you can place in your code to silence a warning that you want to ignore. But these various warning suppression requests do not have a consistent look which seems somewhat silly. When you're suppressing a warning, you're suppressing a warning, so why not always write "SuppressWarning?"
For example, PMD's default configuration suppresses warning generation on lines of code with the string "NOPMD" in a comment. Also, PMD supports Java's @SuppressWarnings annotation. I configure PMD to use comments containing "SuppressWarning(PMD." instead of NOPMD so that PMD suppressions look alike. I fill in the particular rule that is violated when using the comment style suppression:
// SuppressWarnings(PMD.PreserveStackTrace) justification: (false positive) exceptions are chained
Only the "SuppressWarnings(PMD." part is significant for a comment, but it is consistent with PMD's support for the @SuppressWarning annotation which does recognize individual rule violations by name:
@SuppressWarnings("PMD.CompareObjectsWithEquals") // justification: identity comparision intended
Similarly, Checkstyle suppresses warning generation between pairs of comments (no annotation support is provided). By default, comments to turn Checkstyle off and on contain the strings CHECKSTYLE:OFF and CHECKSTYLE:ON, respectively. Changing this configuration (with Checkstyle's "SuppressionCommentFilter") to use the strings "BEGIN SuppressWarnings(CheckStyle." and "END SuppressWarnings(CheckStyle." makes the controls look more like PMD:
// BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: "Effective Java," 2nd ed., Bloch, Item 2
// END SuppressWarnings(Checkstyle.HiddenField)
With Checkstyle comments, the particular check violation (HiddenField) is significant because each check has its own "BEGIN/END" comment pair.
FindBugs also supports warning generation suppression with a @SuppressWarnings annotation, so no further configuration is required to achieve some level of uniformity with other tools. Unfortunately, Findbugs has to support a custom @SuppressWarnings annotation because the built-in Java @SuppressWarnings annotation has a SOURCE retention policy which is not strong enough to retain the annotation in the class file where FindBugs needs it. I fully qualify FindBugs warnings suppressions to avoid clashing with Java's @SuppressWarnings annotation:
@edu.umd.cs.findbugs.annotations.SuppressWarnings("UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR")
These techniques makes things look reasonably consistent across tools. Note that having each warning suppression contain the string "SuppressWarnings" makes it easy to run a simple search to find all instances for all tools over an entire code base.
A: Checkstyle is another one I've used at a previous company... it's mainly for style checking, but it can do some static analysis too. Also, Clover for code coverage, though be aware it is not a free tool.
A: We are using FindBugs and Checkstyle as well as Clover for Code Coverage.
I think it's important to have some kind of static analysis, supporting your development. Unfortunately it's still not widely spread that these tools are important.
A: I use a combination of Cobertura, Checkstyle, (Ecl)Emma and Findbugs.
EclEmma is an awesome Eclipse plugin that shows the code coverage by coloring the java source in the editor (screenshot) - the coverage is generated by running a JUnit test. This is really useful when you are trying to figure out which lines are covered in a particular class, or if you want to see just which lines are covered by a single test. This is much more user friendly and useful than generating a report and then looking through the report to see which classes have low coverage.
The Checkstyle and Findbugs Eclipse plugins are also useful, they generate warnings in the editor as you type.
Maven2 has report plugins that work with the above tools to generate reports at build time. We use this to get overall project reports, which are more useful when you want aggregate numbers. These are generated by our CI builds, which run using Continuum.
A: All of the following we use and integrate easily in both our Maven 2.x builds and Eclipse/RAD 7:
*
*Testing - JUnit/TestNG
*Code analysis - FindBugs, PMD
*Code coverage - Clover
In addition, in our Maven builds we have:
*
*JDepend
*Tag checker (TODO, FIXME, etc)
Furthermore, if you're using Maven 2.x, CodeHaus has a collection of handy Maven plugins in their Mojo project.
Note: Clover has out-of-the-box integration with the Bamboo CI server (since they're both Atlassian products). There are also Bamboo plugins for FindBugs, PMD, and CheckStyle but, as noted, the free Hudson CI server has those too.
A: We use FindBugs and JDepend integrated with Ant. We use JUnit but we're not using any coverage tool.
I'm not using it integrated to Rational Application Developer (the IDE I'm using to develop J2EE applications) because I like how neat it looks when you run javac in the Windows console. :P
A: I've had good luck with Cobertura. It's a code coverage tool which can be executed via your ant script as part of your normal build and can be integrated into Hudson.
A: Our team uses PMD and Cobertura; our projects are Maven projects, and it is very simple to include plugins for code analysis. The real question is which analysis you need to use for a specific project; my opinion is that you can't use the same plugins for every project.
A: In our project we use Sonar in front of Checkstyle, PMD, etc. Together with the CI server (Bamboo, Hudson) we also get a nice history of our source quality and what direction it is going. I like Sonar because you get one central tool in the CI stack that does it for you, and you can easily customize the rules for each project.
A: Structure 101 is good at code analysis and finding the cyclic package dependencies.
A: I am looking for many answers to learn about new tools and consolidate this knowledge in one question/thread, so I doubt there will be one true answer to this question.
My answer to my own question is that we use:
*
*FindBugs to look for common errors/bad coding - run from Maven, and it also integrates easily into Eclipse
*Cobertura for our coverage reports - run from maven
Hudson also has a task-scanner plugin that will display a count of your TODO and FIXMEs, as well as show where they are in the source files.
All are integrated with Maven 1.x in our case and tied into Hudson, which runs our builds on check-in as well as extra things nightly and weekly. Hudson trend graphs our JUnit tests, coverage, findbugs, as well as open tasks. There is also a Hudson plugin that reports and graphs our compile warnings. We also have several performance tests with their own graphs of performance and memory use over time using the Hudson plots plugin as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
} |
Q: What program can I use to generate diagrams of SQL view/table structure? I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views.
Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure.
What's the best program you've used for such problems?
A: I am a big fan of Embarcadero's ER/Studio. It is very powerful and produces excellent on-screen as well as printed results. They have a free trial as well, so you should be able to get in and give it a shot without too much strife.
Good luck!
A: Toad Data Modeller from Quest does a nice job on this and is reasonably priced. Embarcadero E/R studio is good too, as Bruce mentioned.
A: OP asked about diagramming views and view dependencies, SQL Management Studio and Enterprise Manager doesn't allow you to diagram views. I can't vouch for the other tools.
The LINQ to SQL designer for Visual Studio does allow you to drop views on the design surface but there isn't a easy way to model the dependencies between the views. I'm not sure which tool has this type of diagramming functionality. You could take a look at Red Gate's SQLDoc tool but it just provides text based output.
A: If you are talking about MS SQL Server tables, I like the diagram support in SQL Server Management Studio. You just drag the tables from the explorer onto the canvas, and they are laid out for you along with lines for relationships. You'll have to do some adjusting by hand for the best looking diagrams, but it is a decent way to get diagrams.
A: I upmodded Mark's post about Toad Data Modeler and wanted to point out that they have a beta version that is fully functional and free. The only downsides are the occasional bug and built in expiration (typically around the time a new beta is available), but for this poor bloke it does wonders until I can get my boss to chip in for a license.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: SVN Client Ignore Pattern for VB.NET Solutions What is the best SVN Ignore Pattern should TortoiseSVN have for a VB.NET solution?
A: I always add Thumbs.db in as well, because I hate having those files versioned. Probably more of an issue for web developers
A: this is what I use for C# w/resharper, should work just the same with vb.net:
build deploy */bin */bin/* obj *.dll *.pdb *.user *.suo _ReSharper* *.resharper* bin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do I use Java to read from a file that is actively being written to? I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time.
I assume it is possible to do this, but what are the gotcha's involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then?
My intuition is currently pushing me towards BufferedStreams. Is this the way to go?
A: You might also take a look at the Java FileChannel API for locking a part of a file.
http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html
This function of the FileChannel might be a start
lock(long position, long size, boolean shared)
An invocation of this method will block until the region can be locked
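A minimal sketch of how that might be used (the file name and region size are arbitrary; a shared lock is requested since the file is opened read-only):
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockExample {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("out.txt", "r");
        FileChannel channel = raf.getChannel();
        // blocks until the first 1024 bytes can be locked; 'true' asks for a shared lock
        FileLock lock = channel.lock(0, 1024, true);
        try {
            // ... read the locked region here ...
        } finally {
            lock.release();
            raf.close();
        }
    }
}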
A: I totally agree with Joshua's response; Tailer is fit for the job in this situation. Here is an example:
It writes a line every 150 ms in a file, while reading this very same file every 2500 ms
public class TailerTest
{
public static void main(String[] args)
{
File f = new File("/tmp/test.txt");
MyListener listener = new MyListener();
Tailer.create(f, listener, 2500);
try
{
FileOutputStream fos = new FileOutputStream(f);
int i = 0;
while (i < 200)
{
fos.write(("test" + ++i + "\n").getBytes());
Thread.sleep(150);
}
fos.close();
}
catch (Exception e)
{
e.printStackTrace();
}
}
private static class MyListener extends TailerListenerAdapter
{
@Override
public void handle(String line)
{
System.out.println(line);
}
}
}
A: Could not get the example to work using FileChannel.read(ByteBuffer) because it isn't a blocking read. Did however get the code below to work:
boolean running = true;
BufferedInputStream reader;

public void run() {
    try {
        // open here so the checked FileNotFoundException/IOException can be handled
        reader = new BufferedInputStream( new FileInputStream( "out.txt" ) );
        while( running ) {
            if( reader.available() > 0 ) {
                System.out.print( (char) reader.read() );
            }
            else {
                try {
                    sleep( 500 );
                }
                catch( InterruptedException ex ) {
                    running = false;
                }
            }
        }
    }
    catch( IOException ex ) {
        // available() and read() can throw IOException
        running = false;
    }
}
Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now.
Oh, and I'll caveat this with: I'm using 1.4.2. Yes I know I'm in the stone ages still.
A: The answer seems to be "no" ... and "yes". There seems to be no real way to know if a file is open for writing by another application. So, reading from such a file will just progress until content is exhausted. I took Mike's advice and wrote some test code:
Writer.java writes a string to file and then waits for the user to hit enter before writing another line to file. The idea being that it could be started up, then a reader can be started to see how it copes with the "partial" file. The reader I wrote is in Reader.java.
Writer.java
public class Writer extends Object
{
Writer () {
}
public static String[] strings =
{
"Hello World",
"Goodbye World"
};
public static void main(String[] args)
throws java.io.IOException {
java.io.PrintWriter pw =
new java.io.PrintWriter(new java.io.FileOutputStream("out.txt"), true);
for(String s : strings) {
pw.println(s);
System.in.read();
}
pw.close();
}
}
Reader.java
public class Reader extends Object
{
Reader () {
}
public static void main(String[] args)
throws Exception {
java.io.FileInputStream in = new java.io.FileInputStream("out.txt");
java.nio.channels.FileChannel fc = in.getChannel();
java.nio.ByteBuffer bb = java.nio.ByteBuffer.allocate(10);
while(fc.read(bb) >= 0) {
bb.flip();
while(bb.hasRemaining()) {
System.out.println((char)bb.get());
}
bb.clear();
}
System.exit(0);
}
}
No guarantees that this code is best practice.
This leaves the option suggested by Mike of periodically checking if there is new data to be read from the file. This then requires user intervention to close the file reader when it is determined that the reading is completed. Or, the reader needs to be made aware the content of the file and be able to determine and end of write condition. If the content were XML, the end of document could be used to signal this.
A: There is an open source Java graphical tail that does this.
https://stackoverflow.com/a/559146/1255493
public void run() {
try {
while (_running) {
Thread.sleep(_updateInterval);
long len = _file.length();
if (len < _filePointer) {
// Log must have been jibbled or deleted.
this.appendMessage("Log file was reset. Restarting logging from start of file.");
_filePointer = len;
}
else if (len > _filePointer) {
// File must have had something added to it!
RandomAccessFile raf = new RandomAccessFile(_file, "r");
raf.seek(_filePointer);
String line = null;
while ((line = raf.readLine()) != null) {
this.appendLine(line);
}
_filePointer = raf.getFilePointer();
raf.close();
}
}
}
catch (Exception e) {
this.appendMessage("Fatal error reading log file, log tailing has stopped.");
}
// dispose();
}
A: You can't read a file which is opened from another process using FileInputStream, FileReader or RandomAccessFile.
But using FileChannel directly will work:
private static byte[] readSharedFile(File file) throws IOException {
byte buffer[] = new byte[(int) file.length()];
final FileChannel fc = FileChannel.open(file.toPath(), EnumSet.of(StandardOpenOption.READ));
final ByteBuffer dst = ByteBuffer.wrap(buffer);
fc.read(dst);
fc.close();
return buffer;
}
A: If you want to read a file while it is being written and only read the new content, then the following will help you achieve that.
To run this program, launch it from a command prompt/terminal window and pass the file name to read. It will keep reading the file until you kill the program.
java FileReader c:\myfile.txt
As you type a line of text and save it from Notepad, you will see the text printed in the console.
public class FileReader {
public static void main(String args[]) throws Exception {
if(args.length>0){
File file = new File(args[0]);
System.out.println(file.getAbsolutePath());
if(file.exists() && file.canRead()){
long fileLength = file.length();
readFile(file,0L);
while(true){
if(fileLength<file.length()){
readFile(file,fileLength);
fileLength=file.length();
}
}
}
}else{
System.out.println("no file to read");
}
}
public static void readFile(File file,Long fileLength) throws IOException {
String line = null;
BufferedReader in = new BufferedReader(new java.io.FileReader(file));
in.skip(fileLength);
while((line = in.readLine()) != null)
{
System.out.println(line);
}
in.close();
}
}
A: Not Java per-se, but you may run into issues where you have written something to a file, but it hasn't been actually written yet - it might be in a cache somewhere, and reading from the same file may not actually give you the new information.
Short version - use flush() or whatever the relevant system call is to ensure that your data is actually written to the file.
Note I am not talking about the OS level disk cache - if your data gets into here, it should appear in a read() after this point. It may be that the language itself caches writes, waiting until a buffer fills up or file is flushed/closed.
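For example, on the writing side, a minimal sketch (the file name is arbitrary) might look like this:
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;

public class FlushExample {
    public static void main(String[] args) throws Exception {
        FileOutputStream fos = new FileOutputStream("out.txt");
        BufferedOutputStream out = new BufferedOutputStream(fos);
        out.write("some line\n".getBytes());
        out.flush();        // push the buffered bytes down to the OS
        fos.getFD().sync(); // optionally ask the OS to commit them to disk as well
        out.close();
    }
}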
A: I've never tried it, but you should write a test case to see if reading from a stream after you have hit the end will work, regardless of if there is more data written to the file.
Is there a reason you can't use a piped input/output stream? Is the data being written and read from the same application (if so, you have the data, why do you need to read from the file)?
Otherwise, maybe read till end of file, then monitor for changes and seek to where you left off and continue... though watch out for race conditions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109"
} |
Q: Best way to cache data I am in the process of figuring out a cache strategy for our current setup. We currently have multiple web servers and I wanted to know the best way to cache data in this environment. I have done research about MemCache and the native asp.net caching but wanted to get some feedback first. Should I go with a Linux box if I use MemCache, or a win32 port of MemCache?
A: What about checking out Microsoft Velocity?
Another option if you don't want to start using Microsoft CTP-ware is to check out Nache which allows distributed cache/session state management
A: Dare Obasanjo has a pretty good blog post about this topic. You really need to assess what it is you're caching, why you're caching it and what your needs are before you can make a decision on a caching strategy.
A: http://www.danga.com/memcached/
worked awesome for me and have heard nothing but goodness about it
A: Another open source choice other than memcached that is probably worth looking into is Shared Cache. I haven't played with it, but it claims to have a native C# implementation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: ConfigurationManager.AppSettings Performance Concerns I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly.
Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing .Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read).
If anyone has experience with this, I would greatly appreciate the input.
Update: I should probably clarify a few points.
This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application.
According to the MSDN documention, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.)
Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits.
Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance.
A: Check out SQLite, it seems like a good option for this particular scenario.
A: Dylan,
Don't use the application config file for this purpose, use a SQL DB (SQLite, MySQL, MSSQL, whatever) because you'll have to worry less about concurrency issues during reads and writes to the config file.
You'll also have better flexibility in the type of data you want to store. The appSettings section is just a key/value list which you may outgrow as time passes and as the app matures. You could use custom config sections but then you're into a new problem area when it comes to the design.
A: The appSettings isn't really meant for what you are trying to do.
When your .NET application starts, it reads in the app.config file and caches its contents in memory. For that reason, after you write to the app.config file, you'll have to somehow force the runtime to re-parse the app.config file so it can cache the settings again. This is unnecessary.
The best approach would be to use a database to store your configuration settings.
Barring the use of a database, you could easily setup an external XML configuration file. When your application starts, you could cache its contents in a NameValueCollection object or HashTable object. As you change/add settings, you would do it to that cached copy. When your application shuts down, or at an appropriate time interval, you can write the cache contents back out to file.
A: Since you're using a WinForms app, if it's in .NET 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction to it.
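A rough sketch of what that looks like in code ("RefreshInterval" is a hypothetical user-scoped setting defined in the project's Settings designer):
// reads and writes hit the in-memory copy, so they are cheap
Properties.Settings settings = Properties.Settings.Default;
int interval = settings.RefreshInterval;
settings.RefreshInterval = 30;

// persist to the user.config file only when you choose to (e.g. on exit)
settings.Save();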
If you're still worried about performance then take a look at SQL Compact Edition which is similar to SQLite but is the Microsoft offering which I've found plays very nicely with winforms and there's even the ability to make it work with Linq
A: Someone correct me if I'm wrong, but I don't think that AppSettings is typically meant to be used for these type of configuration settings. Normally you would only put in settings that remain fairly static (database connection strings, file paths, etc.). If you want to store customizable user settings, it would be better to create a separate preferences file, or ideally store those settings in a database.
A: I would not use config files for storing user data. Use a db.
A: Could I ask why you're not saving the user's settings in a database?
Generally, I save application settings that are changed very infrequently in the appSettings section (the default email address error logs are sent to, the number of minutes after which you are automatically logged out, etc.) The scope of this really is at the application, not at the user, and is generally used for deployment settings.
A: One thing I would look at doing is caching the appSettings on a read, then flushing the settings from the cache on a write, which should minimize the amount of actual load the server has to deal with for processing the appSettings.
Also, if possible, look at breaking the appSettings up into configSections so you can read write and cache related settings.
Having said all that, I would seriously consider looking at storing these values in a database as you seem to actually be storing user preferences, and not application settings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: What is a good barebones CMS or framework? I'm about to start a project for a customer who wants CMS-like functionality. They want users to be able to log in, modify a profile, and a basic forum. They also wish to be able to submit things to a front page.
Is there a framework or barebones CMS that I could expand on or tailor to my needs? I don't need anything as feature-rich or fancy as Drupal or Joomla. I would actually prefer a framework as opposed to a pre-packaged CMS.
I am confident I could code all this from scratch, but would prefer not to, as something like a framework would significantly cut down on my time spent coding, and more on design and layout.
Edit: I should have been more specific. I'm looking for a Content Management System that will be run on a Debian server. So no .net preferably.
I think I may end up going with Drupal, and only adding modules that I need. Turbogears looks a bit daunting, and I'm still not quite sure what it does after its 20 minute intro video...
TinyCMS doesn't look like it's been touched since... 2000?!?
A: I think the best is CMS Made Simple. It seems like Drupal takes a while to customize.
http://www.cmsmadesimple.org/
A: tinyCMS is about as barebones as you can get. (edit: fixed link, I had gotten a little click happy and linked to the wrong thing)
@modesty, I would definitely NOT use SharePoint, as it is anything but barebones. It is a fairly expensive product (especially when compared to the many free alternatives), and it has quite the learning curve to do anything interesting.
A: Woo, another Debian nut!
I think you need to be a bit more specific here, Forum != CMS. Is this for internal company or external customer use? What language(s) do you know/prefer? There's no point in recommending a Perl or PHP framework if your language of choice is Ruby. Do you need to plan for scalability?
What's wrong with Joomla or Drupal? I would argue that they can be successfully used on small sites. Maybe a framework isn't what you're looking for, maybe you just need a library or two (eg. PEAR?). If you need something smaller, maybe writing your own backend library that you can reuse for future projects would be a better solution.
For a one-size-fits-all framework have a look at Turbogears. ("it's a big hammer, that makes every problem look like a nail")
A: I've been obsessing over TikiWiki lately. Although it has "wiki" in the name, its full name is "TikiWiki CMS/Groupware" and it's an interesting piece of software. It has a real everything and the kitchen sink feel. It includes support for wiki, blogs, articles, forums, and files out of the box (and a ton of other stuff too). I think the real appeal to me is that most of the stuff can all be integrated together, wiki pages can include other wiki pages and articles (which is more useful than you might think). It's in RC stage for release 2.0 and is still missing a ton of features, but I think I might keep using it and contribute some of the features that are missing, it's a really interesting base right now.
The Mozilla support site is implemented using TikiWiki, for an example of a really beautiful implementation.
A: Drupal's include system should keep everything relatively lightweight as long as you only include what you need. Despite the fact that it comes with a smattering of modules, what you choose to enable is all that will be included at runtime. If you have to get under the hood and make modifications, I'm also a firm believer that Drupal is a more friendly and elegant system than Joomla. We use Drupal at my work-as much as a framework as a CMS-and it has proven pretty reliable in keeping development practices at a high level.
A: I realize I'm a couple years late to the party but I was looking for something like this myself and ran across this post while doing Google searches for 'barebones cms'. Along with this post, this turns up:
http://barebonescms.com/
There is also a forum on that site.
A similar combination could probably meet or exceed all of your criteria. Although, as others pointed out, you weren't particularly specific on the details.
While the original author is probably long gone, hopefully someone else finds this useful.
A: I would suggest PmWiki; it's something between a framework and a wiki. By default there aren't even users, just different passwords for different tasks, but using PmWiki Cookbook 'recipes' you can add additional functionality.
You can check their philosophy to get main idea what it's about.
A: If you want a Rails solution, Radiant CMS is a good option. It's simple, elegant, extensible and, of course, comes with all of the benefits of being based on Ruby on Rails.
A: If you are looking at .NET, you can take a look at Umbraco. I haven't done much with it (the company I work for wanted much more functionality, so we went with something else), but it seemed lightweight.
Edit: if the customer wants a tiny CMS with a forum, I would still probably just go with Drupal plus phpBB or Simple Machines Forum; I'm almost positive they can share logins. Plus, tomorrow the customer is going to want more, and Drupal might save you some work there.
A: Might want to check out Drupal.
Here are the details of the technology stack that it uses.
I have never used it so I can't vouch for the quality etc but definitely worth a look.
A: Expression Engine is fantastic. It's free to download and try but you must purchase a license if you are making a profit with it.
A: How about using Drupal, but scaling it down and coding it according to your needs?
That will definitely be faster than coding from scratch with a framework.
A: I have been working with Joomla for some time and I believe it is one of the best CMSes for starting off a website. I have tried plenty of others, but Joomla is better because it has numerous extensions (components, modules) and is also very easy to customize. You could also look at the Community Builder extension for Joomla. Other requirements, like changing front-page articles, are a breeze.
joomla.org
If for some reason Joomla does not suit you, try Drupal.
A: WordPress actually has a forum plugin - it's nothing fancy but it's there. It handles user management et al and has a big community for plugins and themes. I think it is probably the easiest CMS to install & run (I've done some legwork here). There are plugins that update the core & plugins automatically (take that Drupal). I've tested these and they seem pretty solid. As usual - backup beforehand.
For .NET MojoPortal looks pretty good and is lighter than DNN. I saw the edit but thought I'd include this anyway since it looks like it's worth checking out.
Drupal is a language unto its own - I wouldn't tackle it unless you're going to do so with some regularity, otherwise it's just another different framework to learn. The uplink into my brain is at capacity already so I gently pushed it aside. The themes tend to look the same too.
Joomla may suit your users for usability.
I'd go for a pre-made framework myself because it would have a community and expansion capacity. What your client wants today will pale into insignificance tomorrow.
A: Wordpress is a very powerful but simple CMS.
bbPress is a very simple but integrated forum (easy, Wordpress user account integration with cookies and all).
Since you have programming experience you may find Wordpress to be the perfect match (PHP, MySQL) with plenty of plugins and hooks to help you achieve what you need. For example, there is a featured posts plugin that will put selected content on the front page.
A: I need to jump on the Umbraco bandwagon here. As far as ease of use from a developer standpoint goes, there is nothing easier than Umbraco, and v4 has full master page support and a ton of other stuff... and it's free.
A: For Windows, take a look at DotNetNuke. It is ASP.NET-based, free and open source, easily skinned and modified, and there is also a thriving market in add-on modules. In addition, most hosting companies offer it as a pre-installed application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Graph serialization I'm looking for a simple algorithm to 'serialize' a directed graph. In particular I've got a set of files with interdependencies on their execution order, and I want to find the correct order at compile time. I know it must be a fairly common thing to do - compilers do it all the time - but my google-fu has been weak today. What's the 'go-to' algorithm for this?
A: Topological Sort (From Wikipedia):
In graph theory, a topological sort or
topological ordering of a directed
acyclic graph (DAG) is a linear
ordering of its nodes in which each
node comes before all nodes to which
it has outbound edges. Every DAG has
one or more topological sorts.
Pseudo code:
L ← Empty list where we put the sorted elements
Q ← Set of all nodes with no incoming edges
while Q is non-empty do
    remove a node n from Q
    insert n into L
    for each node m with an edge e from n to m do
        remove edge e from the graph
        if m has no other incoming edges then
            insert m into Q
if graph has edges then
    output error message (graph has a cycle)
else
    output message (proposed topologically sorted order: L)
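For concreteness, here is a minimal C# sketch of that pseudocode. The string node type and the dictionary-of-adjacency-sets representation are assumptions for illustration, and every node is expected to appear as a key:
using System;
using System.Collections.Generic;
using System.Linq;

static class TopologicalSort
{
    // graph maps each node to the set of nodes it has outbound edges to.
    public static List<string> Sort(Dictionary<string, HashSet<string>> graph)
    {
        // Count incoming edges for every node.
        var inDegree = graph.Keys.ToDictionary(n => n, n => 0);
        foreach (var targets in graph.Values)
            foreach (var t in targets)
                inDegree[t]++;

        // Q: all nodes with no incoming edges.
        var queue = new Queue<string>(graph.Keys.Where(n => inDegree[n] == 0));
        var sorted = new List<string>();   // L: the sorted output

        while (queue.Count > 0)
        {
            var n = queue.Dequeue();
            sorted.Add(n);
            foreach (var m in graph[n])
                if (--inDegree[m] == 0)
                    queue.Enqueue(m);
        }

        if (sorted.Count != graph.Count)
            throw new InvalidOperationException("The graph has a cycle; no topological order exists.");
        return sorted;
    }
}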
A: I would expect tools that need this simply walk the tree in a depth-first manner and when they hit a leaf, just process it (e.g. compile) and remove it from the graph (or mark it as processed, and treat nodes with all leaves processed as leaves).
As long as it's a DAG, this simple stack-based walk should be trivial.
A: I've come up with a fairly naive recursive algorithm (pseudocode):
Map<Object, List<Object>> source; // map of each object to its dependency list
List<Object> dest; // destination list
function resolve(a):
    if (dest.contains(a)) return;
    foreach (b in source[a]):
        resolve(b);
    dest.add(a);

foreach (a in source):
    resolve(a);
The biggest problem with this is that it has no ability to detect cyclic dependencies - it can go into infinite recursion (i.e. stack overflow ;-p). The only way around that that I can see would be to flip the recursive algorithm into an iterative one with a manual stack, and manually check the stack for repeated elements.
Anyone have something better?
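One hedged sketch of adding cycle detection to the recursive version, in C#: keep a set of nodes currently being resolved and fail fast when one is revisited before it finishes. The names and types here are illustrative assumptions, not the original code:
using System;
using System.Collections.Generic;

static class DependencyResolver
{
    // source maps each object to its dependency list; every object appears as a key.
    public static List<string> Resolve(Dictionary<string, List<string>> source)
    {
        var dest = new List<string>();
        var done = new HashSet<string>();        // fully resolved
        var inProgress = new HashSet<string>();  // currently on the recursion stack

        void Visit(string a)
        {
            if (done.Contains(a))
                return;
            if (!inProgress.Add(a))
                throw new InvalidOperationException("Cyclic dependency involving " + a);
            foreach (var b in source[a])
                Visit(b);
            inProgress.Remove(a);
            done.Add(a);
            dest.Add(a);
        }

        foreach (var a in source.Keys)
            Visit(a);
        return dest;
    }
}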
A: If the graph contains cycles, how can there exist allowed execution orders for your files?
It seems to me that if the graph contains cycles, then you have no solution, and this
is reported correctly by the above algorithm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: How to learn ADO.NET I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me.
What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning.
A: There are three key components (assuming you're using SQL Server):
*
*SqlConnection
*SqlCommand
*SqlDataReader
(if you're using something else, replace Sql with "Something", like MySqlConnection, OracleCommand)
Everything else is just built on top of that.
Example 1:
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = new SqlCommand())
{
    command.CommandText = "SELECT Name FROM Users WHERE Status = @OnlineStatus";
    command.Connection = connection;
    command.Parameters.Add("@OnlineStatus", SqlDbType.Int).Value = 1; // replace with enum
    connection.Open();
    using (SqlDataReader dr = command.ExecuteReader())
    {
        List<string> onlineUsers = new List<string>();
        while (dr.Read())
        {
            onlineUsers.Add(dr.GetString(0));
        }
    }
}
Example 2:
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = new SqlCommand())
{
    command.CommandText = "DELETE FROM Users WHERE Email = @Email";
    command.Connection = connection;
    command.Parameters.Add("@Email", SqlDbType.VarChar, 100).Value = "user@host.com";
    connection.Open();
    command.ExecuteNonQuery();
}
A: Another way of getting a command object is to call connection.CreateCommand().
That way you shouldn't have to set the Connection property on the command object.
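A minimal sketch of that approach (the connection string and query are placeholders):
// Requires: using System.Data.SqlClient;
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = connection.CreateCommand())   // command.Connection is already set
{
    command.CommandText = "SELECT COUNT(*) FROM Users";
    connection.Open();
    int userCount = (int)command.ExecuteScalar();
}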
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Windows Equivalent of 'nice' Is there a Windows equivalent of the Unix command, nice?
I'm specifically looking for something I can use at the command line, and not the "Set Priority" menu from the task manager.
My attempts at finding this on Google have been thwarted by those who can't come up with better adjectives.
A: If you want to set priority when launching a process you could use the built-in START command:
START ["title"] [/Dpath] [/I] [/MIN] [/MAX] [/SEPARATE | /SHARED]
[/LOW | /NORMAL | /HIGH | /REALTIME | /ABOVENORMAL | /BELOWNORMAL]
[/WAIT] [/B] [command/program] [parameters]
Use the low through belownormal options to set priority of the launched command/program. Seems like the most straightforward solution. No downloads or script writing. The other solutions probably work on already running procs though.
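For example (myapp.exe is just a placeholder program name), this launches it in a new window at below-normal priority:
START "" /BELOWNORMAL myapp.exe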
A: Maybe you want to consider using ProcessTamer, which automates the process of downgrading or upgrading process priority based on your settings.
I've been using it for two years. It's very simple but really effective!
A: from http://techtasks.com/code/viewbookcode/567
# This code sets the priority of a process
# ---------------------------------------------------------------
# Adapted from VBScript code contained in the book:
# "Windows Server Cookbook" by Robbie Allen
# ISBN: 0-596-00633-0
# ---------------------------------------------------------------
use Win32::OLE;
$Win32::OLE::Warn = 3;
use constant NORMAL => 32;
use constant IDLE => 64;
use constant HIGH_PRIORITY => 128;
use constant REALTIME => 256;
use constant BELOW_NORMAL => 16384;
use constant ABOVE_NORMAL => 32768;
# ------ SCRIPT CONFIGURATION ------
$strComputer = '.';
$intPID = 2880; # set this to the PID of the target process
$intPriority = ABOVE_NORMAL; # Set this to one of the constants above
# ------ END CONFIGURATION ---------
print "Process PID: $intPID\n";
$objWMIProcess = Win32::OLE->GetObject('winmgmts:\\\\' . $strComputer . '\\root\\cimv2:Win32_Process.Handle=\'' . $intPID . '\'');
print 'Process name: ' . $objWMIProcess->Name, "\n";
$intRC = $objWMIProcess->SetPriority($intPriority);
if ($intRC == 0) {
print "Successfully set priority.\n";
}
else {
print 'Could not set priority. Error code: ' . $intRC, "\n";
}
A: If you use PowerShell, you could write a script that lets you change the priority of a process. I found the following PowerShell function on the Monad blog:
function set-ProcessPriority {
param($processName = $(throw "Enter process name"), $priority = "Normal")
get-process -processname $processname | foreach { $_.PriorityClass = $priority }
write-host "`"$($processName)`"'s priority is set to `"$($priority)`""
}
From the PowerShell prompt, you would do something like:
set-ProcessPriority SomeProcessName "High"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: SVN vs. Team Foundation Server A few months back my team switched our source control over to Apache Subversion from Visual SourceSafe, and we haven't been happier.
Recently I've been looking at Team Foundation Server, and at least on the surface, it seems very impressive. There is some great integration with Visual Studio, and lots of great tools for DBAs, testers, project managers, etc.
The most obvious difference between these two products is price. It's hard to beat Apache Subversion (free). Team Foundation Server is quite expensive, so the extra features would really have to kick Subversion in the pants.
*
*Does anyone have practical experience with both?
*How do they compare?
*Is Team Foundation Server actually worth the expense?
A: Here are the biggest differences between the two for me, and I've used both:
1) TFS is rather tightly coupled to the "Visual Studio way" of doing development. That's not to say that TFS is tightly coupled to the VS IDE, it means that TFS struggles to keep the familiar "check in"/"check out" paradigm of Visual SourceSafe, even when it really isn't an appropriate model anymore. Subversion's concept of "commit"/"update" is a lot more realistic when you have developers which might spend time disconnected from the network. TFS expects developers to always be connected to the server. That's a big minus. I personally find TFS to be less than transparent about how files are organized on the server and on your local disk because of the tight Visual Studio integration. Even TFS's bigger proponents concede that its connected check-in/check-out model is not a compelling option for developers who work disconnected. In a climate where people are starting to look at DVCS options like git and Mercurial over SVN, TFS's "check out" model seems a bit like a dinosaur.
2) Cost. Those who say that TFS isn't expensive are either probably very small shops, or are not in compliance with TFS's licensing terms. You need a Client Access License for darn near everything you do. Are you a manager who just manages the bugs? You need a ~$250 CAL (There are 5 included with a retail TFS License). A business user who just wants to report on their issues? A $250 CAL. A developer? $250 (Unless they have MSDN in which case it is included). The server? $500 (Included if you have MSDN). Of course, someone selling you a copy of TFS will tell you that work item tracking is free for additional users, but these additional users can only see the work items which they themselves create, and not the whole team's work items, which isn't too useful in an team-oriented, agile environment. All of this adds up when you have a mid-sized organization, and becomes tough to justify when so many best-of-breed products like SVN and CruiseControl.net's incremental cost is $0. (In fairness to TFS, though, I'm still waiting for a really good OSS issue tracker)
3) Project structure. In large teams with a smaller number of projects, TFS will probably work well. If you're a number of small, unconnected or loosely connected line-of-business apps in-house, TFS's structure can start to become overbearing. For one thing, it's not possible to define a taxonomy of projects themselves -- you can set up "Areas" within a project, but all issues and documents are tracked together within the basic context of a "project". Creating new "projects" is often time consuming, and is overkill for small efforts. Of course, SVN has nothing of the sort since it focuses only on source code control, but if you need good small-project flexibility, SVN and another issue tracking tool might be a better choice.
My opinion, for what it's worth:
*
*For large teams with big, well-budgeted projects, in a Microsoft shop where developers work almost exclusively within the IDE, TFS is the winner. TFS also wins when you need to centrally enforce policy with your projects.
*For a number of small teams, with many varied, smaller projects, or shops where cost is an issue, or teams who have developers who work disconnected from source control, go with SVN.
A: My recommendation, Team System isn't worth the money. I have used both and after using Team System, I tried to find a similar replacement. Basically what you are paying for is the integration and you could argue the customization support, but I have been able to create a Team System replacement with a little bit of time and integrating tools together.
I recently asked a question on what others have done to come up with a Team System alternative. I also list the development tools that I used to create the replacement. Hopefully with this answer and the question that I asked, you can find what works for you.
I am not a Team System hater, I just don't think it's worth the money. It is a very nice tool and if you don't mind paying the price for it, then by all means use it. It was the whole reason I created the replacement I came up with. I wanted the functionality Team System provided.
A: Here is an open source version of VisualSVN called AnkhSVN.
It's much better now that CollabNet has taken it over.
A: If all you need is source control, TFS is overkill. One of my previous employers had TFS, VSS, and Subversion in their enterprise. We didn't have Active Directory or Exchange Server 2003 in our enterprise, so we ended up creating separate users on the TFS server so developers could use it. We had the same sorts of problems with merging that Ben Schierman mentioned, along with other buggy behavior that pushed us toward Subversion.
Whether TFS is the right call for you will depend in part on your budget, the size of your development team, and the amount of time and personnel available for configuration/maintenance of your solution. If you want the additional issue tracking, work item, and project statistics capabilities that TFS provides, it may be worth your while to look at other alternatives. Products like JIRA (from Atlassian Systems) or Trac integrate well with Subversion and provide the sort of oversight a project or program manager might at a lower price.
In an ideal environment, with Active Directory, Exchange Server 2003 or higher, and dedicated staff for the repository, TFS is more likely to be a good choice.
A: I have used both at work and at home. They are both very cool in their own right. The only time I would recommend using TFS, though, is if you will be using more of the features than just the source control. If all you need is source control, you can't go wrong with SVN, and this is why:
*
*VisualSVN Server That is a full SVN server with a nice plugin to manage it with. It lets you use windows authentication right through the UI. Easy.
*Tortoise It's Tortoise; enough said.
*AnkhSVN It is a great SCC plugin. For those who want full VS IDE integration, the latest version is a full SCC plugin, so you now get full integration for free.
The above set up is 100% free and will get you through anything you need for source control.
A: I'm surprised that someone who has used Subversion in the past would even have a want/need for TFS source control.
My experience with TFS (2005) has been pretty horrible. I've read all kinds of whitepapers & guidance as to how to properly structure your source for various development needs.
Our simple situation, where we have a trunk with mainline development, and integration branch where we integrate changes & deploy from, and a releases branch to keep track of past releases is very common and straightforward, but we are continually running into problems.
My main issues with TFS:
*
*Merging is a PAIN in comparison to subversion.
*There are unfixed bugs. I ran into one about renaming/merging that has been known for 2 years and a fix will never be released for 2005. We ended up moving our branch to a "broken" folder and we ignore it now.
*Putting read-only locks on your files is friction. Who says I need to edit batch files and build scripts inside of TFS so that it will "check it out" for me? Subversion knows which files changed. There are no readonly locks there.
*Speed. TFS is dog-slow over a WAN, and it's really only usable if I VPN into my work computer, which makes my dev experience really slow overall.
*Lack of good command-line and explorer integration. IDE integration is really nice for the day-to-day Get-Latest, adding files, and checking in, but when you need to do things across many projects, it's nice to have good tools at your disposal. And before someone jumps down my throat claiming tf.exe works well... it's not really a cmd line tool. For example, checking in code shouldn't pop up a modal dialog.
...the list goes on. I think even with all of the integration, there are free alternatives that are far superior.
A: I joined an Open Source project over at CodePlex, recently. They use TFS for their source control and I have to say that it's absolutely magnificent. I'm incredibly impressed with it, so far. I'm a huge fan of the IDE integration and how easy it is to branch and tag your code. Adding a solution to source control is something like two clicks, if you've already got everything configured properly.
Now. Is it worth the hefty price tag? I don't think so. The benefit to working on projects at CodePlex is it lets me get the experience with TFS that I need, in the event that I have to use it somewhere later. If you want good IDE integration for your Source Control, go grab VisualSVN integration package. It's a much, much cheaper investment to get a lot of the same features (free on non-domain computers BTW).
A: We're a VS.NET shop, and we implemented:
*
*Bugzilla for issue tracking
*Apache Subversion as a source code repository back-end
*VisualSVN Server for managing SVN on the server
*TortoiseSVN (in Windows Explorer) and AhnkSVN or VisualSVN (in Visual Studio) on the client
*CruiseControl.NET for automated builds
Cost: $0
Benefits: Priceless
If you're a small team, or not ready to buy into the who TFS process, SVN and open source tools are the way to go.
A: TFS isn't just about Source Control. If you use the whole package that TFS offers, bug tracking, builds, reports, etc then TFS is a pretty solid choice (certainly better than Rational). TFS also integrates well with Active Directory.
Though if you are just talking about SCM, then I prefer SubVersion. I don't really like IDE integration. I also like SVN's convention of Trunk/Tags/Branches structure, and relative ease of switching between branches. Merging seemed easier in TFS though. Tortoise's UI beats TFS's hands down though, especially in regards to adding a file to a repo.
A: I'd say it really depends on your needs. TFS is very nice, I've used it extensively, but it's very much aimed at the enterprise level, if you don't need all of those features it might not be necessary. If you do need those features (especially branching, scalability, work item tracking, etc.) they are worth every penny. Keep in mind that TFS includes bug tracking, work item tracking and other features beyond source control. If you have multiple branches or if you find yourself struggling against some lack of feature or other in Subversion then it might be a good idea to switch. But barring a good reason to switch you should probably avoid the cost and productivity hit of switching source control systems.
A: Having used both extensively, I think Wedge was on the money in noting "TFS includes bug tracking, work item tracking and other features beyond source control".
However, I can honestly say that SVN and TFS seem pretty equal in regards to scalability, and if anything SVN's source control has the edge on TFS due to its inherent simplicity.
If you want work-item and bug tracking alongside your source control then you either go for TFS or you go with SVN and some other, possibly free, tools such as bugzilla. While TFS does integrate both source control and work-item tracking together I honestly think MS should have given it away free as an apology for abusing so many developers with VSS over the years.
A: I have used both SVN and TFS. The main advantage of using TFS is its tight integration with Visual Studio. Bug tracking and task tracking all go in one place, and the reports generated for these items help keep the stakeholders informed of the project status.
A: I am working on a project with 5 people and we recently switched from SVN to TFS. The entire process has been a nightmare. We have auto generated code from XMLSpy, and TFS does not recognize files modified outside of VS2008. The TFS Power Tools can scan your checkout and fix this problem but it is a pain to have to remember to use these tools. Another problem we constantly run into is the default merging tool in TFS. It is by far the worst merging tool I have ever used. One would think that TFS would be able to handle basic solution merges but so far that has not been the case.
The built-in user interface is very useful, but it also has flaws. If I check out from my Solution Explorer, sometimes files that have been added are not checked out. If I do it from the Team Source Control window, it works perfectly. Why is that? I look forward to TFS in VS2010 as I have heard great things about it, and SVN is far from perfect, but I would have expected some of these features to function a little more intuitively.
Adam
A: TFS is great for project management and tracking, however I feel the source control is not as good as SVN. Here are my beefs with TFS:
Check-in/Check-out model
This is a huge con for TFS source control. Unfortunately, VS automatically checks out items for you, even if you don't want to. I've been in a situation where someone checked out some files and then went on vacation. I was in charge of restructuring the directory structure, but was unable to because a bunch of files were checked out to that person. There is no way in the GUI to undo the checkout, which meant it had to be done one by one in the command line. Or I had to figure out how to write a power shell script for this.
VS is required to do everything
Sometimes I want to edit a text file and check it in. This requires me to startup VS 2010, which is a huge beast, just to edit a file and check it in. Something that took a few seconds with SVN now takes me a minute.
As some others pointed out, files are marked read-only if they are not checked out. If you make a file writable and edit it outside of VS, TFS won't recognize this, which makes editing something outside of VS annoying: it means firing up VS, checking out the file, editing it in another editor, and checking it in from VS.
Some operations that were easy in SVN are now a pain
*
*Maybe I haven't figured it out yet, but I found that rolling back a changeset was very tedious with TFS.
*Adding files to source control, which are not part of a solution, is a huge pain. The TFS source control explorer only shows which files are in source control, not which ones are not (maybe there's a setting somewhere for this, I don't know). With Tortoise SVN, I could simply press Commit on a folder and select which files to add.
A: As others have pointed out, TFS gives you a lot more features then SVN does in the form of project management and such. Having used both, and worked with very large companies in implementing TFS, here's my two cents.
1) If you are using TFS 2005, upgrade to TFS 2008. You'll thank me. There are a ton of improvements in TFS 2008 that make it workable.
2) If you live in Visual Studio and you want the IDE integration, go with TFS. I've used SVN integration and almost always drop back to using TortoiseSVN.
3) If you like the idea of accounts being integrated with Windows Authentication, go with TFS. The manageability from that end is nice. There may be hooks for SVN - I'm not positive, but if you like the GUI driven management, TFS is hard to beat.
4) If you need to track metrics or have easier ways of implementing things like check-in policies, go with TFS.
5) If you have people who won't implement it if it isn't MSFT, go with TFS.
6) If you do more than just .NET (Java work, Eclipse, etc) go with SVN. Yes there are very good products out there (like Teamprise) that work well with TFS. But unless the other languages are a small part of your shop, just stick with SVN.
Outside of that, the SCM features of both are about equivalent. They both do branching and merging, the both do atomic check-ins, they both support renames and moves. I think for people just getting started with the branching and merging concept, having the branches be visible in Source Control Explorer is nice.
TFS really isn't that expensive ($1200 maybe?). Compared to SVN it is, perhaps. The integration to reporting services and SharePoint is nice, but again, if you aren't using that, then it doesn't matter.
What I'd say is to download the 180-day trial of TFS and give it a go. Run a trial side-by-side. I think you'll be happy no matter which way you go.
A: I am currently leading the effort to evaluate TFS at my company against the Rational Suite which is what we currently use. So far TFS 2008 is pwning clearcase + clearquest. The dev environment integration is where it really shines.
A: My 10 cents:
TFS2005 was a joke - hard to install and even harder to maintain
TFS2008 was stable - easier to installer, simpler maintenance, and automated builds that work.
TFS2010 is EPIC! - installation is dog easy. Management is very easy; it's all a nicely laid out UI. Integrating it with VS2008 isn't so easy, since you can't create projects in VS2008; you have to use VS2010 (which is stupid). TFS2010 also allows you to change the SharePoint project location instead of having those awful subfolders of TFS2008. TFS2010 also has tools like a burndown chart that are really useful for project management. It's like TFS2010 is for the whole production team, including the clients! It still costs way too much though :(
A: Also keep in mind that TFS requires a lot more horsepower from the server hardware, and at minimum one Windows Server license, of course.
The best practice, which our company followed, is to use two servers: a front end (with integrated SharePoint) and a dedicated SQL server in the back end (we use an enterprise cluster). TFS can be installed on one machine, but should not be.
By comparison, our SVN server is installed on a virtual Linux server with 256 MB of RAM and one CPU, and is still several magnitudes faster when doing common tasks like checking everything out. The virtual hardware was the lowest vSphere could assign! The disk is fast, though (SAN).
I would suggest that TFS requires dedicated hardware of at least $5000, while an SVN server (on Linux) can run on any hardware that is obsolete for a current Windows-based OS.
A: As Ubiguchi points out TFS is not a version control product. Buying TFS with the intention of only using it for Version Control would clearly be a waste of money. TFS is an integrated suite of tools to automate all aspects of Application Lifecycle Management (and pretty much geared to "The Enterprise".
Also per Ben S's post - I don't understand your comment about locks. Locks aren't required in TFS at all. Administrators can configure TFS to work like VSS (features demanded by some "unwise" customers) to "Get-Latest on Checkout" which I believe also does a check-out lock.
But through "normal" use of TFS a "check-out" prompts a user for the lock type - and the default should be "none". A user CAN select a check-out (or a check-in lock) - but it is not required. If you don't want locks, don't use them.
TFS does track which users have check-outs on the server for various both performance reasons (make get-latest faster) and project management (I like to see what developers have files checked out and how long their check-outs are).
I'm not really familiar with SVN (I've never used it) - so I can't comment that "merging is worse with TFS" - and I haven't hit the merge bug Ben S reported - but I've had great success with branching and merging using TFS.
One use case I know TFS is still pretty weak at is for users who are regularly "offline". TFS is a "Server Product" that assumes the users are connected the majority of the time. The offline experience improved in the 2008 release (it was dismal in 2005) but still has a long way to go. If you have developers who need (or want) to often be disconnected from the network for long periods of time - you are likely better off with SVN.
Another feature to consider for SVN fans who are using TFS is the SVN Bridge, a CodePlex project which allows users to use TortoiseSVN to connect to TFS. A good friend and colleague of mine uses it extensively and loves it.
Also, the comment about a lack of command line tools surprises me - the command line tools are extensive (although many require a separate download of the TFS Power Tools).
I suspect Ben's comments are based on an eval of the 2005 release, which was clearly a "Microsoft V1.0" product. The product is currently in 2.1 with Version 3 coming in the near future.
A: TFS is heinous. At this point I version control locally using SVN (with Live Mesh for backups) because I have so many issues with TFS. The main problem is that TFS uses time stamps to record whether you have the latest version, and stores these time stamps on the server. You can delete your local copy, get latest from TFS, and it will say all files are up to date. It's a silly system that gives you no guarantee that you have the correct version of files. This results in numerous annoyances:
*
*TFS needs to be informed when any file is edited, so you need to be connected to the server at all times.
*TFS get confused if you edit files outside of the IDE. Further it sets all files to readonly in NTFS.
While TFS supports merging, it's really a checkin/checkout system. If you edit a file you will often find that it is locked to other developers. There are ways around it, but the system is so convoluted you will always run into the issue. For instance, our developers found that they can get around all files being set to readonly in NTFS by checking out an entire solution, which sets an exclusive lock on all files. I did this a few times because subversion has the same syntax for checkout, which does not acquire a lock.
Finally Team Explorer (the client) is a whopping 400 MB, TFS server requires SharePoint and two days to install. The subversion one click installer is roughly 30 MB and it will install the server for you in under a minute. TFS has many features, but its foundation is so shaky you will never use or care about them. TFS is expensive in terms of the license, and in the time developers will waste ranting on stackoverflow instead of writing code :P
A: In my opinion it depends on the situation and environment in which the project is done. If you have just a simple, small project, then SVN is great. As some have already written, VisualSVN integrates nicely into Visual Studio so that you don't have to do the check-in/check-out through the native file system.
TFS is great for version control, but even better if you really use all of its capabilities. In my eyes it really becomes worth it if you use - for instance - the work items as your integrated repository for handling customer bug reports and new feature requests, and for tracking the progress of your project by managing tasks and the corresponding estimated, used, and remaining time indications.
What is also really interesting is to use the feature of associating work items with source code checkins. See here for more infos about that.
A: We are a small team in the process of migrating from SVN to TFS2010. Our biggest reason to do so is the integration in Visual Studio and the WebAccess for bugtracking, that is now part of the TFS.
@Adam: Hopefully we will have a better experience. Can not tell yet...
A: I've used SVN in the past 3 years (coming from VSS earlier) and recently had to switch to TFS2010. The overall feeling is that it is buggier than SVN and except for the nice integration with the tasks/bugs I don't see it as having an edge against SVN. The speed seems to also be somewhat slower than with SVN.
If I were to choose a sourcecontrol now I would still go with SVN.
Regarding tools:
- AnkhSVN Visual Studio Plugin is as good as the TFS source control
- Tortoise is a lot better than the TFS counterpart
A: TFS is great, if you don't need non-developers to get to the PM stuff.
Our helpdesk needs to be involved in the process, and it just wasn't cutting it.
Also, the build management in TFS 2005 at least is atrocious, and it can't even build VS 2008 solutions. I really don't like that my source control choice affects my deployment choices; this is why my team is not an SVN shop.
A: If it is just based on source control, I'd go with SVN. The AnkhSVN free add-in for Visual Studio has been greatly improved in its new release. Also, you get the source code for SVN, and the documentation is great! They changed some arcane things in TFS 2010 source control, and without the source code it can be very daunting to troubleshoot. Plus, you are dependent on the MSDN team for pumping out the docs, and they do it on their own schedule and at their own depth.
That being said, TFS obviously offers much more than source control. It is an ALM tool. Combining it with work items, reporting, automated builds, gated check-in, automated testing, etc. can provide some very rich value that you can only get with connecting disparate tools with SVN. And of course, having the source for SVN is not a failsafe. I've gotten into scenarios with SVN where it would have still taken weeks to totally figure out what was going on.
So, I recommend you look at it from an ALM perspective and see if your company is going to use all of the TFS features or is going to with a best-of-breed strategy (e.g. JIRA).
A: TFS by a mile.
I inadvertently cause too many problems for myself with SVN's file-based approach.
Source control problems I've experienced:
TFS – 0 problems over 2 years
SVN – lost count...
Yes I know the price of TFS factors it out for most companies which is such a shame. MS might have a lot more marketshare (and profit) if they had a reasonable pricing model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "77"
} |
Q: LinqDataSource - Can you limit the amount of records returned? I'd like to use a LinqDataSource control on a page and limit the amount of records returned. I know if I use code behind I could do something like this:
IEnumerable<int> values = Enumerable.Range(0, 10);
IEnumerable<int> take3 = values.Take(3);
Does anyone know if something like this is possible with a LinqDataSource control?
[Update]
I'm going to use the LinqDataSource with the ListView control, not a GridView or Repeater. The LinqDataSource wizard does not provide the ability to limit the number of records return. The Advanced options only allow you to enabled deletes, inserts, and updates.
A: protected void DocsData_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
    e.Arguments.MaximumRows = 5;
}
A: You could base your Linq query on a stored proc that only returns x number of rows using a TOP statement. Remember just because you can do all your DB code in Linq doesn't mean you should. Plus, you can tell Linq to use the same return type for the stored proc as the normal table, so all your binding will still work, and the return results will be the same type
A: You can handle the Selecting event of the LinqDataSource:
protected void ldsLastEntries_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
    e.Arguments.MaximumRows = 10;
}
A: I had this same issue. The way I got round this was to use the Selecting event on the LinqDataSource and return the result manually.
e.g.
protected void lnqRecentOrder_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
    DataClassesDataContext dx = new DataClassesDataContext();
    e.Result = (from o in dx.Orders
                where o.CustomerID == Int32.Parse(Request.QueryString["CustomerID"])
                select o).Take(5);
}
A: I know that if you use a paging repeater or gridview with the linqdatasource it will automatically optimize the number of results returned, but I'm also pretty sure in the datasource wizard you can go to advanced options and limit it to
SELECT TOP 3 FROM
which should allow you to do what you need
A: Yes and No.
No, you cannot limit the results within the LinqDataSource control. Because Linq uses deferred execution, the expectation is that the presentation control will do the recordset limits.
Yes, you can do this with a ListView control. The trick is to use the DataPager control within the LayoutTemplate, like so:
<LayoutTemplate>
<div id="itemPlaceholder" runat="server" />
<asp:DataPager ID="DataPager1" runat="server" PageSize="3">
</asp:DataPager>
</LayoutTemplate>
Normally, you would include controls inside the DataPager like first, last, next, and previous. But if you just make it empty, then you will only see the three results that you desire.
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Territory Map Generation Is there a trivial, or at least moderately straight-forward way to generate territory maps (e.g. Risk)?
I have looked in the past and the best I could find were vague references to Voronoi diagrams.
[Example Voronoi diagram image]
These hold promise, but I guess I haven't seen any straightforward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object.
Another approach that holds promise is flood fill, but again I'm unsure on the best way to start with this approach.
Any advice would be much appreciated.
A: The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams and each can be converted into the other), and other similar data structures.
They talk about all the data structures you need but they don't give you the code necessary to implement it (which may be a good exercise). In terms of code, an Amazon search shows the book Computational Geometry in C, which presumably comes with the code (although since you're stuck in C, you might as well get the other one and implement it in whatever language you want). I also don't have any experience with this book, only the first.
Sorry to have only books to recommend! The only decent online resources I've seen on them are the two Wikipedia articles, which don't really tell you implementation details. This link may be helpful though.
A: Why not use a map of primitives (triangles, squares), distribute the starting points for the countries (the "capitals"), and then randomly expand the countries by adding a random adjacent primitive to each country?
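A rough C# sketch of that idea on a square grid (the grid size, country count, and random seed are arbitrary illustration choices, the grid cells stand in for whatever primitives you use, and capital collisions are ignored for brevity):
using System;
using System.Collections.Generic;

class TerritoryGrowth
{
    static void Main()
    {
        const int width = 20, height = 20, countries = 4;
        var rng = new Random(42);
        int[,] owner = new int[width, height];            // 0 = unclaimed, 1..countries = owner id
        var frontiers = new List<List<(int x, int y)>>(); // cells already owned by each country

        // Place one random "capital" per country.
        for (int c = 1; c <= countries; c++)
        {
            int x = rng.Next(width), y = rng.Next(height);
            owner[x, y] = c;
            frontiers.Add(new List<(int x, int y)> { (x, y) });
        }

        // Repeatedly let each country claim one random unclaimed neighbour of its territory.
        bool anyGrew = true;
        while (anyGrew)
        {
            anyGrew = false;
            for (int c = 0; c < countries; c++)
            {
                var options = new List<(int x, int y)>();
                foreach (var (fx, fy) in frontiers[c])
                    foreach (var (nx, ny) in new[] { (fx + 1, fy), (fx - 1, fy), (fx, fy + 1), (fx, fy - 1) })
                        if (nx >= 0 && nx < width && ny >= 0 && ny < height && owner[nx, ny] == 0)
                            options.Add((nx, ny));

                if (options.Count == 0) continue;
                var pick = options[rng.Next(options.Count)];
                owner[pick.x, pick.y] = c + 1;
                frontiers[c].Add(pick);
                anyGrew = true;
            }
        }

        // Dump the finished map; each digit is a territory owner.
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
                Console.Write(owner[x, y]);
            Console.WriteLine();
        }
    }
}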
A: CGAL is a C++ library that has data structures and algorithms used in Computational Geometry.
A: I'm actually dealing with exactly this kind of stuff for my company's video game. The most useful info I've found are at these two links:
Paul Bourke's page at UWA, with his 1989 paper on Delaunay and a series of implementation links.
A great explanation of the pseudocode and a visual of doing Delaunay at CodeGuru.com.
In terms of rendering these - most of the implementations I've found will need massaging to get what you'd want, but since using this for a game map would lead to a number of points plus lines between them, it could be a very simple matter to draw this out to screen.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Accessing a Dictionary.Keys Key through a numeric index I'm using a Dictionary<string, int> where the int is a count of the key.
Now, I need to access the last-inserted Key inside the Dictionary, but I do not know the name of it. The obvious attempt:
int LastCount = mydict[mydict.keys[mydict.keys.Count]];
does not work, because Dictionary.Keys does not implement a []-indexer.
I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a Stack<MyStruct>, but I wonder if there is another alternative, essentially a Dictionary that implements an []-indexer on the Keys?
A: Why don't you just extend the dictionary class to add in a last key inserted property. Something like the following maybe?
public class ExtendedDictionary : Dictionary<string, int>
{
private int lastKeyInserted = -1;
public int LastKeyInserted
{
get { return lastKeyInserted; }
set { lastKeyInserted = value; }
}
public void AddNew(string s, int i)
{
lastKeyInserted = i;
base.Add(s, i);
}
}
A: You could always do this:
string[] temp = new string[mydict.Count];
mydict.Keys.CopyTo(temp, 0);
int LastCount = mydict[temp[mydict.Count - 1]];
But I wouldn't recommend it. There's no guarantee that the last inserted key will be at the end of the array. The ordering for Keys on MSDN is unspecified, and subject to change. In my very brief test, it does seem to be in order of insertion, but you'd be better off building in proper bookkeeping like a stack--as you suggest (though I don't see the need of a struct based on your other statements)--or single variable cache if you just need to know the latest key.
A: You can use an OrderedDictionary.
Represents a collection of key/value
pairs that are accessible by the key
or index.
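A small sketch of what that looks like (the keys and values here are made up):
using System;
using System.Collections.Specialized;   // OrderedDictionary lives here

class OrderedDictionaryExample
{
    static void Main()
    {
        var dict = new OrderedDictionary();
        dict.Add("apples", 3);
        dict.Add("pears", 5);
        dict.Add("plums", 7);

        // Access by key, by index, and grab the last-inserted value.
        int byKey = (int)dict["pears"];            // 5
        int byIndex = (int)dict[0];                // 3 (the int indexer means "by position")
        int lastValue = (int)dict[dict.Count - 1]; // 7, the most recently added value

        Console.WriteLine(byKey + " " + byIndex + " " + lastValue);
    }
}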
A: I think you can do something like this; the syntax might be wrong, since I haven't used C# in a while.
To get the last item:
Dictionary<string, int>.KeyCollection keys = mydict.Keys;
string lastKey = keys.Last();
or use Max instead of Last to get the max value; I don't know which one fits your code better.
A: I agree with the second part of Patrick's answer. Even if in some tests it seems to keep insertion order, the documentation (and normal behavior for dictionaries and hashes) explicitly states the ordering is unspecified.
You're just asking for trouble depending on the ordering of the keys. Add your own bookkeeping (as Patrick said, just a single variable for the last added key) to be sure. Also, don't be tempted by all the methods such as Last and Max on the dictionary as those are probably in relation to the key comparator (I'm not sure about that).
A: In case you decide to use dangerous code that is subject to breakage, this extension function will fetch a key from a Dictionary<K,V> according to its internal indexing (which for Mono and .NET currently appears to be in the same order as you get by enumerating the Keys property).
It is much preferable to use Linq: dict.Keys.ElementAt(i), but that function will iterate O(N); the following is O(1) but with a reflection performance penalty.
using System;
using System.Collections.Generic;
using System.Reflection;
public static class Extensions
{
public static TKey KeyByIndex<TKey,TValue>(this Dictionary<TKey, TValue> dict, int idx)
{
Type type = typeof(Dictionary<TKey, TValue>);
FieldInfo info = type.GetField("entries", BindingFlags.NonPublic | BindingFlags.Instance);
if (info != null)
{
// .NET
Object element = ((Array)info.GetValue(dict)).GetValue(idx);
return (TKey)element.GetType().GetField("key", BindingFlags.Public | BindingFlags.Instance).GetValue(element);
}
// Mono:
info = type.GetField("keySlots", BindingFlags.NonPublic | BindingFlags.Instance);
return (TKey)((Array)info.GetValue(dict)).GetValue(idx);
}
};
A: One alternative would be a KeyedCollection if the key is embedded in the value.
Just create a basic implementation in a sealed class to use.
So to replace Dictionary<string, int> (which isn't a very good example, as there isn't a clear key for an int).
private sealed class IntDictionary : KeyedCollection<string, int>
{
protected override string GetKeyForItem(int item)
{
// The example works better when the value contains the key. It falls down a bit for a dictionary of ints.
return item.ToString();
}
}
KeyedCollection<string, int> intCollection = new ClassThatContainsSealedImplementation.IntDictionary();
intCollection.Add(7);
int valueByIndex = intCollection[0];
A: The way you worded the question leads me to believe that the int in the Dictionary contains the item's "position" on the Dictionary. Judging from the assertion that the keys aren't stored in the order that they're added, if this is correct, that would mean that keys.Count (or .Count - 1, if you're using zero-based) should still always be the number of the last-entered key?
If that's correct, is there any reason you can't instead use Dictionary<int, string> so that you can use mydict[ mydict.Keys.Count ]?
A: As @Falanwe points out in a comment, doing something like this is incorrect:
int LastCount = mydict.Keys.ElementAt(mydict.Count -1);
You should not depend on the order of keys in a Dictionary. If you need ordering, you should use an OrderedDictionary, as suggested in this answer. The other answers on this page are interesting as well.
A: I don't know if this would work because I'm pretty sure that the keys aren't stored in the order they are added, but you could cast the KeysCollection to a List and then get the last key in the list... but it would be worth having a look.
The only other thing I can think of is to store the keys in a lookup list and add the keys to the list before you add them to the dictionary... it's not pretty tho.
A: To expand on Daniels post and his comments regarding the key, since the key is embedded within the value anyway, you could resort to using a KeyValuePair<TKey, TValue> as the value. The main reasoning for this is that, in general, the Key isn't necessarily directly derivable from the value.
Then it'd look like this:
public sealed class CustomDictionary<TKey, TValue>
: KeyedCollection<TKey, KeyValuePair<TKey, TValue>>
{
protected override TKey GetKeyForItem(KeyValuePair<TKey, TValue> item)
{
return item.Key;
}
}
To use this as in the previous example, you'd do:
CustomDictionary<string, int> custDict = new CustomDictionary<string, int>();
custDict.Add(new KeyValuePair<string, int>("key", 7));
int valueByIndex = custDict[0].Value;
int valueByKey = custDict["key"].Value;
string keyByIndex = custDict[0].Key;
A: You can also use SortedList and its generic counterpart. These two classes, and the OrderedDictionary mentioned in Andrew Peters' answer, are dictionary classes in which items can be accessed by index (position) as well as by key. You can find out how to use these classes here: SortedList Class, SortedList Generic Class.
A: A dictionary may not be very intuitive for using index for reference but, you can have similar operations with an array of KeyValuePair:
ex.
KeyValuePair<string, string>[] filters;
A: A Dictionary is a Hash Table, so you have no idea the order of insertion!
If you want to know the last inserted key I would suggest extending the Dictionary to include a LastKeyInserted value.
E.g.:
public class MyDictionary<K, T> : IDictionary<K, T>
{
private IDictionary<K, T> _InnerDictionary;
public K LastInsertedKey { get; set; }
public MyDictionary()
{
_InnerDictionary = new Dictionary<K, T>();
}
#region Implementation of IDictionary
public void Add(KeyValuePair<K, T> item)
{
_InnerDictionary.Add(item);
LastInsertedKey = item.Key;
}
public void Add(K key, T value)
{
_InnerDictionary.Add(key, value);
LastInsertedKey = key;
}
.... rest of IDictionary methods
#endregion
}
You will run into problems however when you use .Remove() so to overcome this you will have to keep an ordered list of the keys inserted.
A: Visual Studio's UserVoice gives a link to generic OrderedDictionary implementation by dotmore.
But if you only need to get key/value pairs by index and don't need to get values by keys, you may use one simple trick. Declare some generic class (I called it ListArray) as follows:
class ListArray<T> : List<T[]> { }
You may also declare it with constructors:
class ListArray<T> : List<T[]>
{
public ListArray() : base() { }
public ListArray(int capacity) : base(capacity) { }
}
For example, you read some key/value pairs from a file and just want to store them in the order they were read so to get them later by index:
ListArray<string> settingsRead = new ListArray<string>();
using (var sr = new StreamReader(myFile))
{
string line;
while ((line = sr.ReadLine()) != null)
{
string[] keyValueStrings = line.Split(separator);
for (int i = 0; i < keyValueStrings.Length; i++)
keyValueStrings[i] = keyValueStrings[i].Trim();
settingsRead.Add(keyValueStrings);
}
}
// Later you get your key/value strings simply by index
string[] myKeyValueStrings = settingsRead[index];
As you may have noticed, you can have not necessarily just pairs of key/value in your ListArray. The item arrays may be of any length, like in jagged array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "167"
} |
Q: The Difference Between a DataGrid and a GridView in ASP.NET? I've been doing ASP.NET development for a little while now, and I've used both the GridView and the DataGrid controls before for various things, but I never could find a really good reason to use one or the other. I'd like to know:
What is the difference between these 2 ASP.NET controls? What are the advantages or disadvantages of both? Is one any faster? Newer? Easier to maintain?
The intellisense summary for the controls doesn't seem to describe any difference between the two. They both can view, edit, and sort data and automatically generate columns at runtime.
Edit: Visual Studio 2008 no longer lists DataGrid as an available control in the toolbox. It is still available (for legacy support I assume) if you type it in by hand though.
A: The DataGrid was originally in .NET 1.0. The GridView was introduced (and replaced the DataGrid) in .NET 2.0. They provide nearly identical functionality.
A: If you're working in Visual Studio 2008 / .NET 3.5, you probably shouldn't use either. Use the ListView - it gives you the features of the GridView combined with the styling flexibility of a repeater.
A: DataGrid was an ASP.NET 1.1 control, still supported. GridView arrived in 2.0, made certain tasks simpler added different databinding features:
This link has a comparison of DataGrid and GridView features -
https://msdn.microsoft.com/en-us/library/05yye6k9(v=vs.100).aspx
A: The GridView control is the successor to the DataGrid control. Like the DataGrid control, the GridView control was designed to display data in an HTML table. When bound to a data source, the DataGrid and GridView controls each display a row from a DataSource as a row in an output table.
Both the DataGrid and GridView controls are derived from the WebControl class. Although it has a similar object model to that of the DataGrid control, the GridView control also has a number of new features and advantages over the DataGrid control, which include:
*
*Richer design-time capabilities.
*Improved data source binding capabilities.
*Automatic handling of sorting, paging, updates, and deletes.
*Additional column types and design-time column operations.
*A Customized pager user interface (UI) with the PagerTemplate property.
Differences between the GridView control and the DataGrid control include:
*
*Different custom-paging support.
*Different event models.
Sorting, paging, and in-place editing of data requires additional coding when using the DataGrid control. The GridView control enables you to add sorting, paging, and editing capabilities without writing any code. Instead, you can automate these tasks, along with other common tasks such as data binding to a data source, by setting properties on the control.
A: The key difference is in the ViewState management IIRC. The DataGrid requires ViewState turned on in order to have edit and sort capabilities.
A: One key difference security wise is that DataGrid uses BoundColumn which does not HtmlEncode the bound data. There is no property to turn HtmlEncoding on or off either, so you need to do it in code somehow.
GridView uses BoundField, which does HtmlEncode by default on the bound data and it has a HtmlEncode property if you need to turn it off.
A: DataGrid
*
*DataGrid was introduced with Asp.Net 1.0.
*For sorting, we need to handle the SortCommand event and rebind the grid, and for paging we need to handle the PageIndexChanged event and rebind the grid.
*Need to write code for implementing Update and Delete operations.
*Does not support auto-format or style features.
*Performance is fast as compared to GridView.
GridView
*
*GridView was introduced with Asp.Net 2.0.
*Built-in support for paging and sorting.
*Built-in support for Update and Delete operations.
*Supports auto-format and style features.
*Performance is slow as compared to DataGrid.
Events and properties named with Item have changed to Row.
For example,
*
*ItemCommand - RowCommand
*ItemDataBound - RowDataBound
*e.Item.ItemType - e.Row.RowType
A: Some basic differences between GridView and DataGrid:
the GridView control also has a number of new features and advantages over the DataGrid control, which include:
· Richer design-time capabilities.
· Improved data source binding capabilities.
· Automatic handling of sorting, paging, updates, and deletes.
· Additional column types and design-time column operations.
· A Customized pager user interface (UI) with the PagerTemplate property.
Differences between the GridView control and the DataGrid control include:
· Different custom-paging support.
· Different event models.
A: One of the differences is the HTML output. A datagrid will output TD's for the header and a gridview will output TH's. This can cause unintuitive changes in the display.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: What to use for Messaging with C# So my company stores alot of data in a foxpro database and trying to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously for a snappier user experience. I started looking at ActiveMQ but don't know how well C# will hook with it. Wanting to hear what all of you guys think.
Edit: It is going to be a web application. Anything touching this FoxPro is kinda slow (probably because the person who set it up 10 years ago messed it all to hell; some of the table files are incredibly large). We replicate the FoxPro data to SQL nightly, and most of our data reads are OK being a day old, so we are focusing on the writes. Plus, the writes affect a critical part of the user experience (purchasing), so we store the data in SQL and then just send a message to have it put into FoxPro when it can be. I wish we could just get rid of the FoxPro database; unfortunately, the company doesn't want to get rid of a very old piece of software they bought that depends on it.
A: ActiveMQ works well with C# using the Spring.NET integrations and NMS. A post with some links to get you started in that direction is here. Also consider using MSMQ (The System.Messaging namespace) or a .NET based asynchronous messaging solution, with some options here.
A: MSMQ (Microsoft Message Queueing) may be a great choice. It is part of the OS and present as an optional component (can be installed via Add/Remove Programs / Windows Components), meaning it's free (as long you already paid for Windows, of course). MSMQ provides Win32/COM and System.Messaging APIs. More modern Windows Communication Foundation (aka Indigo) queued channels also use MSMQ.
Note that MSMQ is not supported on Home SKUs of Windows (XP Home and Vista Home)
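As a minimal sketch of what the System.Messaging API looks like (the queue path and message body are made up, and error handling is omitted):

using System;
using System.Messaging;

class FoxProQueueDemo
{
    const string QueuePath = @".\private$\foxproWrites";

    static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        // Producer: enqueue the work item and return to the user immediately.
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Send("purchase #12345", "Write purchase to FoxPro");
        }

        // Consumer (e.g. a Windows service): block until a message arrives.
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            using (Message message = queue.Receive())
            {
                string body = (string)message.Body;
                Console.WriteLine("Would write to FoxPro: " + body);
            }
        }
    }
}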
A: It's worth mentioning that the ActiveMQ open source project defines a C# API for messaging called NMS, which allows you to develop against a single C# / .NET API that can then use various messaging back ends such as
*
*ActiveMQ
*MSMQ
*TibCo's EMS
*any STOMP provider
*any JMS provider via StompConnect
A: You may want to look at MSMQ. It can be used by .NET and VFP, but you'll need to rewrite to use them. Here's an article that tells you how to use MSMQ from VFP. https://learn.microsoft.com/en-us/previous-versions/visualstudio/foxpro/ms917361(v=msdn.10)
A: Sorry if this isn't what you are asking for...
Have you considered some sort of cache behind the scenes that acts a bit like the "bucket system" when using asynchronous sockets in c/c++ using winsock? Basicly, it works by accepting requests, and sends an immediate response back to the web app, and when it finally gets around to finding your record, it updates it on the app via AJAX or any other technology of your choice. Since I'm not a C# programmer I can't provide any specific example. Hope this helps!
A: Does the Fox app use .CDX indexes? If so, you might be able to improve performance by adding indexes without needing to change any program code. If it uses .IDX indexes, though, the change would have to be done in the actual app.
A: As the problem is with writes, I would look more towards removing any unneeded indexes on the tables. As is common in RDBMS, every index on a FoxPro table slows down a write operation as the indexes need to be updated, and as you aren't reading directly from (or presumably directly querying) the table you shouldn't need very many indexes. You might also want to look at any triggers or field rules on the tables as they may be slowing down the write operation. Be sure your referential integrity is still preserved, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Why doesn't Java autoboxing extend to method invocations of methods of the autoboxed types? I want to convert a primitive to a string, and I tried:
myInt.toString();
This fails with the error:
int cannot be dereferenced
Now, I get that primitives are not reference types (ie, not an Object) and so cannot have methods. However, Java 5 introduced autoboxing and unboxing (a la C#... which I never liked in C#, but that's beside the point). So with autoboxing, I would expect the above to convert myInt to an Integer and then call toString() on that.
Furthermore, I believe C# allows such a call, unless I remember incorrectly. Is this just an unfortunate shortcoming of Java's autoboxing/unboxing specification, or is there a good reason for this?
A:
seems like a shortcoming of the
specification to me
There are more shortcomings and this is a subtle topic. Check this out:
public class MethodOverloading {

    public static void hello(Integer x) {
        System.out.println("Integer");
    }

    public static void hello(long x) {
        System.out.println("long");
    }

    public static void main(String[] args) {
        int i = 5;
        hello(i);
    }
}
Here "long" would be printed (haven't checked it myself), because the compiler choses widening over autoboxing. Be careful when using autoboxing or don't use it at all!
A: The valid syntax closest to your example is
((Integer) myInt).toString();
When the compiler finishes, that's equivalent to
Integer.valueOf(myInt).toString();
However, this doesn't perform as well as the conventional usage, String.valueOf(myInt), because, except in special cases, it creates a new Integer instance and then immediately throws it away, resulting in more unnecessary garbage. (A small range of integers is cached and accessed by an array lookup.) Perhaps the language designers wanted to discourage this usage for performance reasons.
Edit: I'd appreciate it if the downvoter(s) would comment about why this is not helpful.
A: Java autoboxing/unboxing doesn't go to the extent to allow you to dereference a primitive, so your compiler prevents it. Your compiler still knows myInt as a primitive. There's a paper about this issue at jcp.org.
Autoboxing is mainly useful during assignment or parameter passing -- allowing you to pass a primitive as an object (or vice versa), or assign a primitive to an object (or vice versa).
So unfortunately, you would have to do it like this: (kudos Patrick, I switched to your way)
Integer.toString(myInt);
A: Ditto on what Justin said, but you should do this instead:
Integer.toString(myInt);
It saves an allocation or two and is more readable.
A: As everyone has pointed out, autoboxing lets you simplify some code, but you cannot pretend that primitives are complex types.
Also interesting: "autoboxing is a compiler-level hack" in Java. Autoboxing is basically a strange kludge added onto Java. Check out this post for more details about how strange it is.
A: It would be helpful if Java defined certain static methods to operate on primitive types and built into the compiler some syntactic sugar so that
5.asInteger
would be equivalent to
some.magic.stuff.Integer.asInteger(5);
I don't think such a feature would cause incompatibility with any code that compiles under the current rules, and it would help reduce syntactic clutter in many cases. If Java were to autobox primitives that were dereferenced, people might assume that it was mapping the dereferencing syntax to static method calls (which is effectively what happens in .NET), and thus that operations written in that form were no more costly than would be the equivalent static method invocations. Adding a new language feature that would encourage people to write bad code (e.g. auto-boxing dereferenced primitives) doesn't seem like a good idea, though allowing dereferencing-style methods might be.
A: One other way to do it is to use:
String.valueOf(myInt);
This method is overloaded for every primitive type and Object. This way you don't even have to think about the type you're using. Implementations of the method will call the appropriate method of the given type for you, e.g. Integer.toString(myInt).
See http://java.sun.com/javase/6/docs/api/java/lang/String.html.
A: In C#, integers are neither reference types nor do they have to be boxed in order for ToString() to be called. They are considered objects in the Framework (as a ValueType, so they have value semantics), however. In the CLR, methods on primitives are called by "indirectly" loading them onto the stack (ldind).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: What is best practice for FTP from a SQL Server 2005 stored procedure? What is the best method for executing FTP commands from a SQL Server stored procedure? we currently use something like this:
EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1'
The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues.
A: If you're running SQL 2005 you could do this in a CLR integration assembly and use the FTP classes in the System.Net namespace to build a simple FTP client.
You'd benefit from being able to trap and handle exceptions and reduce the security risk of having to use xp_cmdshell.
Just some thoughts.
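As a rough sketch of what such a CLR procedure might look like - assuming CLR integration is enabled and the assembly is catalogued with EXTERNAL_ACCESS permission; the URI, credentials and file path passed in are placeholders:

using System.IO;
using System.Net;
using Microsoft.SqlServer.Server;

public static class FtpProcedures
{
    [SqlProcedure]
    public static void UploadFile(string ftpUri, string user, string password, string localPath)
    {
        // e.g. ftpUri = "ftp://172.1.1.1/out/report.txt"
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential(user, password);

        byte[] contents = File.ReadAllBytes(localPath);
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(contents, 0, contents.Length);
        }

        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        {
            // Unlike xp_cmdshell, a failed transfer surfaces as an exception,
            // and the FTP status is available for logging.
            SqlContext.Pipe.Send("FTP status: " + response.StatusDescription);
        }
    }
}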
A: Another possibility is to use DTS or Integration Services (DTS for SQL Server 7 or 2000, SSIS for 2005 or higher). Both are from Microsoft, included in the Sql Server installation (in Standard edition at least) and have an FTP task and are designed for import/export jobs from Sql Server.
A: If you need to do FTP from within the database, then I would go with a .NET assembly as Kevin suggested. That would provide the most control over the process, plus you would be able to log meaningful error messages to a table for reporting.
Another option would be to write a command line app that read the database for commands to run. You could then define a scheduled task to call that command line app every minute or whatever the polling period needed to be. That would be more secure than enabling CLR support in the database server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Programmatic SMS What is the best way to programmatically send an SMS text message?
Are there any free Web Service based SMS gateways?
I know that if I happen to know the user's carrier (Sprint, AT&T, etc), I can send an SMS by emailing an address based on phone number + carrier. I would like a good solution that does not require me to know the carrier. Barring that, is there an easy way to lookup carrier given a cell phone number?
I understand that there are some services independent of the major mobile carriers that offer a per-message fee-based service, with API. What I would like to know is how such organizations tap into the SMS networks. Do they have contracts with mobile carriers to resell their services?
A: Where I work we've been using http://www.clickatell.com for sending out SMS - it looks like it's about 6 or 7 cents a message. They just take HTTP POST requests to send out a message. I don't know if you'll be able to find any good free gateways. We used to send out emails, but found they were unreliable.
A: I've used clickatell in the past and found them very good also.
However, you could build your own to get messages VERY cheap. All you need is: a contract which gives loads of (or unlimited) messages; a Windows Mobile phone; and a bit of socket programming.
Write a web service (pass the number and the message) which makes a call to a program on the mobile which sends the message.
I know of at least one FTSE 100 company which went this route.
A: We got fed up with using 'free' sms gateways, very unreliable.
Now we use an SMS gateway device called OutboxSMS from Felltech Ltd. It sits on our network and hooks directly into our mobile phone provider using a wireless link from its built-in transmitter. We needed to buy a SIM card (we got a PAYG one with a huge bundle of messages), which is fitted to the OutboxSMS unit. We configured an email account for it on our mail server (MS Exchange), and configured the SMTP/POP3 account on the box.
We use OPManager, this sends alerts by email, which we direct to outboxsms, it parses the message and sends a text message to our ops guys phones when something goes wrong.
We also have some shell scripts which use sendmail to send an email to outboxsms, which again is converted to text messages.
A: I think this one deserves a new answer. There's a new player in town, it's called Nexmo and features highly competitive prices, even compared to Twilio.
https://www.nexmo.com/
A: I have been doing that with a Nokia phone connected to a Linux machine. I have a cron job and a script that would check a database table for new messages and use gnokii to send them. It works great if the number of SMS messages you are going to send isn't too big.
A: You could also get a GSM transmitter and issue AT commands that send SMS messages. I don't know why you would want to do it this way, but it's another option. This way you won't depend on someone else's service.
A: Use http://www.twilio.com/
They have a REST interface to send SMS's and even to establish phone calls or receive phone calls.
You even get 30$ credits to try it out.
Def. the cheapest solution you will find.
A: I don't know of any free SMS services; you usually buy bulk SMS messages and use an API to send them out.
Whitepages.com has an API that will allow developers to reverse lookup a phone number. It reports the carrier on mobile number, however a lot of the time it's some non-existent-anymore carrier like Powertel or something.
A: Supporting Angus, I can vouch for http://www.clickatell.com. It was used at a company I used to work at. It was a very easy solution to setup and use and worked great. You just need to anticipate how many messages you intend to send out and bulk order messages. They're pretty cheap, overall.
A: I have used TextMagic. They have reasonable rates and a great API and account management.
A: Sorry, after re-reading your question I realized this is not the answer you're looking for. However, this is what I did for my command line program. There's a website where, if you put in the telephone number, it gives you the carrier. So when I entered my number it screen-scraped the website, got the carrier, and if the carrier was in my list, I retrieved the email address for that carrier.
Most companies offer an email-to-SMS kind of thing. For example, myphonenumber@verizon.com or something (there's a whole list on Wikipedia).
I used that to create myself a little command line application in C# that sends out text messages. However, you don't really get a "reply", and the number is a pre-assigned one from the company.
I think if you want to go the free route, this is your best bet.
Here's the wikipedia link:
SMS gateway
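If you do go the carrier email-gateway route, the sending side is just ordinary SMTP. A minimal C# sketch (the SMTP host, addresses and gateway domain are illustrative only; real gateway addresses vary by carrier):

using System.Net.Mail;

class SmsViaEmailGateway
{
    static void Main()
    {
        SmtpClient client = new SmtpClient("smtp.example.com");   // your outgoing mail server
        MailMessage message = new MailMessage(
            "alerts@example.com",        // from
            "5551234567@vtext.com",      // to: number + carrier gateway (example only)
            "",                          // subject (many gateways ignore it)
            "Server X is down");         // body becomes the text message
        client.Send(message);
    }
}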
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: How Do You Determine The PID of the Parent of a Process I have a process in erlang that is supposed to do something immediately after spawn, then send the result back to the parent when it is finished. How do I figure out the PID of the process that spawned it?
A: @Eridius' answer is the preferred way to do it. Requiring a process to register a name may have unintended side-effects such as increasing the visibility of the process not to mention the hassle of coming up with unique names when you have lots of processes.
A: The best way is definitely to pass it as an argument to the function called to start the child process. If you are spawning funs, which generally is a Good Thing to do, be careful of doing:
spawn_link(fun () -> child(self()) end)
which will NOT do as you intended. (Hint: when is self() called? It is evaluated inside the fun, in the newly spawned process, so it returns the child's own pid rather than the parent's.)
Generally you should avoid registering a process, i.e. giving it a global name, unless you really want it to be globally known. Spawning a fun means that you don't have to export the spawned function as you should generally avoid exporting functions that aren't meant to be called from other modules.
A: You should pass self() to the child as one of the arguments to the entry function.
spawn_link(?MODULE, child, [self()]).
A: You can use the BIF register to give the spawning / parent process a name (an atom) then refer back to the registered name from other processes.
FUNC() ->
%% Do something
%% Then send message to parent
parent ! MESSAGE.
...
register(parent, self()),
spawn(MODULE, FUNC, [ARGS]).
See Getting Started With Erlang §3.3 and The Erlang Reference Manual §10.3.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Why should I practice Test Driven Development and how should I start? Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice?
A: There are a lot of benefits:
*
*You get immediate feedback on whether your code is working, so you can find bugs faster
*By seeing the test go from red to green, you know that you have both a working regression test, and working code
*You gain confidence to refactor existing code, which means you can clean up code without worrying what it might break
*At the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid
The best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article.
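To make the red-to-green idea concrete, here is a tiny sketch in NUnit (the Calculator class is invented for the example): write the test first, watch it fail, then write just enough code to make it pass.

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(4, calc.Add(2, 2));   // red until Add is implemented
    }
}

public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;                          // simplest thing that makes the test green
    }
}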
A: The benefits part has recently been covered, as for where to start....on a small enterprisey system where there aren't too many unknowns so the risks are low. If you don't already know a testing framework (like NUnit), start by learning that. Otherwise start by writing your first test :)
A: Benefits
*
*You figure out how to compartmentalize your code
*You figure out exactly what you want your code to do
*You know how it is supposed to act and, down the road, whether refactoring breaks anything
*Gets you in the habit of making sure your code always knows what it is supposed to do
Getting Started
Just do it. Write a test case for what you want to do, and then write code that should pass the test. If you pass your test, great, you can move on to writing cases where your code will always fail (2+2 should not equal 5, for example).
Once all of your tests pass, write your actual business logic to do whatever you want to do.
If you are starting from scratch make sure you find a good testing suite that is easy to use. I like PHP so PHPUnit or SimpleTest work well. Almost all of the popular languages have some xUnit testing suite available to help build and automate testing.
A: In my opinion, the single greatest thing is that it clearly allows you to see if your code does what it is supposed to. This may seem obvious, but it is super easy to run astray of your original goals, as I have found out in the past :p
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: Grid Hosting for Windows Are there any good grid-hosting companies out there that offer .NET stacks? Something like MediaTemple - which won't host the world's fastest websites, but for the price is far better than "shared hosting". I've used Rackspace's Mosso, but it sucked - it never felt like a normal .NET stack (caching was odd, site recompilation was odd).
A: Try gogrid.com; they seem to have a very nice following in cloud computing circles.
A: I've heard good things about Mosso .
http://www.mosso.com/
A: Can you talk a bit more about the not-so-normal .NET experience? I was thinking of going with Mosso...
A: Grid Hosting: one price, an entire cloud hosting server
link text
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What is the best way to create a sparse array in C++? I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation.
In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array).
A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possible to store all values in memory, I need to store only the few non-zero elements. This could be several million entries.
Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime.
I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system?
A: Eigen is a C++ linear algebra library that has an implementation of a sparse matrix. It even supports matrix operations and solvers (LU factorization etc) that are optimized for sparse matrices.
A: Complete list of solutions can be found in the wikipedia. For convenience, I have quoted relevant sections as follows.
https://en.wikipedia.org/wiki/Sparse_matrix#Dictionary_of_keys_.28DOK.29
Dictionary of keys (DOK)
DOK consists of a dictionary that maps (row, column)-pairs to the
value of the elements. Elements that are missing from the dictionary
are taken to be zero. The format is good for incrementally
constructing a sparse matrix in random order, but poor for iterating
over non-zero values in lexicographical order. One typically
constructs a matrix in this format and then converts to another more
efficient format for processing.[1]
List of lists (LIL)
LIL stores one list per row, with each entry containing the column
index and the value. Typically, these entries are kept sorted by
column index for faster lookup. This is another format good for
incremental matrix construction.[2]
Coordinate list (COO)
COO stores a list of (row, column, value) tuples. Ideally, the entries
are sorted (by row index, then column index) to improve random access
times. This is another format which is good for incremental matrix
construction.[3]
Compressed sparse row (CSR, CRS or Yale format)
The compressed sparse row (CSR) or compressed row storage (CRS) format
represents a matrix M by three (one-dimensional) arrays, that
respectively contain nonzero values, the extents of rows, and column
indices. It is similar to COO, but compresses the row indices, hence
the name. This format allows fast row access and matrix-vector
multiplications (Mx).
A: Small detail in the index comparison. You need to do a lexicographical compare, otherwise:
a= (1, 2, 1); b= (2, 1, 2);
(a<b) == (b<a) is true, but b!=a
Edit: So the comparison should probably be:
return lhs.x<rhs.x
? true
: lhs.x==rhs.x
? lhs.y<rhs.y
? true
: lhs.y==rhs.y
? lhs.z<rhs.z
: false
: false
A: For C++, a map works well. Several million objects won't be a problem. 10 million items took about 4.4 seconds and about 57 meg on my computer.
My test application is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <map>
class triple {
public:
int x;
int y;
int z;
bool operator<(const triple &other) const {
if (x < other.x) return true;
if (other.x < x) return false;
if (y < other.y) return true;
if (other.y < y) return false;
return z < other.z;
}
};
int main(int, char**)
{
std::map<triple,int> data;
triple point;
int i;
for (i = 0; i < 10000000; ++i) {
point.x = rand();
point.y = rand();
point.z = rand();
//printf("%d %d %d %d\n", i, point.x, point.y, point.z);
data[point] = i;
}
return 0;
}
Now to dynamically choose the number of variables, the easiest solution is to represent index as a string, and then use string as a key for the map. For instance, an item located at [23][55] can be represented via "23,55" string. We can also extend this solution for higher dimensions; such as for three dimensions an arbitrary index will look like "34,45,56". A simple implementation of this technique is as follows:
std::map<std::string, int> data;
char ix[100];
sprintf(ix, "%d,%d", x, y); // 2 vars
data[ix] = i;
sprintf(ix, "%d,%d,%d", x, y, z); // 3 vars
data[ix] = i;
A: Hash tables have fast insertion and lookup. You could write a simple hash function, since you know you'd be dealing with only integer pairs as the keys.
A: The accepted answer recommends using strings to represent multi-dimensional indices.
However, constructing strings is needlessly wasteful for this. If the size isn’t known at compile time (and thus std::tuple doesn’t work), std::vector works well as an index, both with hash maps and ordered trees. For std::map, this is almost trivial:
#include <vector>
#include <map>
using index_type = std::vector<int>;
template <typename T>
using sparse_array = std::map<index_type, T>;
For std::unordered_map (or similar hash table-based dictionaries) it’s slightly more work, since std::vector does not specialise std::hash:
#include <vector>
#include <unordered_map>
#include <numeric>
using index_type = std::vector<int>;
struct index_hash {
std::size_t operator()(index_type const& i) const noexcept {
// Like boost::hash_combine; there might be some caveats, see
// <https://stackoverflow.com/a/50978188/1968>
auto const hash_combine = [](auto seed, auto x) {
return std::hash<int>()(x) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
};
return std::accumulate(i.begin() + 1, i.end(), i[0], hash_combine);
}
};
template <typename T>
using sparse_array = std::unordered_map<index_type, T, index_hash>;
Either way, the usage is the same:
int main() {
using i = index_type;
auto x = sparse_array<int>();
x[i{1, 2, 3}] = 42;
x[i{4, 3, 2}] = 23;
std::cout << x[i{1, 2, 3}] + x[i{4, 3, 2}] << '\n'; // 65
}
A: Boost has a templated implementation of BLAS called uBLAS that contains a sparse matrix.
https://www.boost.org/doc/libs/release/libs/numeric/ublas/doc/index.htm
A: The best way to implement sparse matrices is not to implement them - at least not on your own. I would suggest BLAS (which I think is a part of LAPACK), which can handle really huge matrices.
A: Since only the values at [a][b][c]...[w][x][y][z] are of consequence, we only store the indices themselves, not the value 1, which is just about everywhere - always the same and not worth hashing. Noting that the curse of dimensionality is present, I suggest going with some established tool such as NIST or Boost, or at least reading their sources, to avoid needless blunders.
If the work needs to capture the temporal dependence distributions and parametric tendencies of unknown data sets, then a Map or B-Tree with a uni-valued root is probably not practical. We can store only the indices themselves, hashed if ordering (sensibility for presentation) can be subordinated to reducing the time domain at run-time, for all the 1 values. Since non-zero values other than one are few, an obvious candidate for those is whatever data structure you can find readily and understand. If the data set is truly vast-universe sized, I suggest some sort of sliding window that manages file/disk/persistent I/O yourself, moving portions of the data into scope as need be (writing code that you can understand). If you are under commitment to provide an actual solution to a working group, failing to do so leaves you at the mercy of consumer-grade operating systems that have the sole goal of taking your lunch away from you.
A: Here is a relatively simple implementation that should provide a reasonable fast lookup (using a hash table) as well as fast iteration over non-zero elements in a row/column.
// Copyright 2014 Leo Osvald
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef UTIL_IMMUTABLE_SPARSE_MATRIX_HPP_
#define UTIL_IMMUTABLE_SPARSE_MATRIX_HPP_
#include <algorithm>
#include <limits>
#include <map>
#include <type_traits>
#include <unordered_map>
#include <utility>
#include <vector>
// A simple time-efficient implementation of an immutable sparse matrix
// Provides efficient iteration of non-zero elements by rows/cols,
// e.g. to iterate over a range [row_from, row_to) x [col_from, col_to):
// for (int row = row_from; row < row_to; ++row) {
// for (auto col_range = sm.nonzero_col_range(row, col_from, col_to);
// col_range.first != col_range.second; ++col_range.first) {
// int col = *col_range.first;
// // use sm(row, col)
// ...
// }
template<typename T = double, class Coord = int>
class SparseMatrix {
struct PointHasher;
typedef std::map< Coord, std::vector<Coord> > NonZeroList;
typedef std::pair<Coord, Coord> Point;
public:
typedef T ValueType;
typedef Coord CoordType;
typedef typename NonZeroList::mapped_type::const_iterator CoordIter;
typedef std::pair<CoordIter, CoordIter> CoordIterRange;
SparseMatrix() = default;
// Reads a matrix stored in MatrixMarket-like format, i.e.:
// <num_rows> <num_cols> <num_entries>
// <row_1> <col_1> <val_1>
// ...
// Note: the header (lines starting with '%' are ignored).
template<class InputStream, size_t max_line_length = 1024>
void Init(InputStream& is) {
rows_.clear(), cols_.clear();
values_.clear();
// skip the header (lines beginning with '%', if any)
decltype(is.tellg()) offset = 0;
for (char buf[max_line_length + 1];
is.getline(buf, sizeof(buf)) && buf[0] == '%'; )
offset = is.tellg();
is.seekg(offset);
size_t n;
is >> row_count_ >> col_count_ >> n;
values_.reserve(n);
while (n--) {
Coord row, col;
typename std::remove_cv<T>::type val;
is >> row >> col >> val;
values_[Point(--row, --col)] = val;
rows_[col].push_back(row);
cols_[row].push_back(col);
}
SortAndShrink(rows_);
SortAndShrink(cols_);
}
const T& operator()(const Coord& row, const Coord& col) const {
static const T kZero = T();
auto it = values_.find(Point(row, col));
if (it != values_.end())
return it->second;
return kZero;
}
CoordIterRange
nonzero_col_range(Coord row, Coord col_from, Coord col_to) const {
CoordIterRange r;
GetRange(cols_, row, col_from, col_to, &r);
return r;
}
CoordIterRange
nonzero_row_range(Coord col, Coord row_from, Coord row_to) const {
CoordIterRange r;
GetRange(rows_, col, row_from, row_to, &r);
return r;
}
Coord row_count() const { return row_count_; }
Coord col_count() const { return col_count_; }
size_t nonzero_count() const { return values_.size(); }
size_t element_count() const { return size_t(row_count_) * col_count_; }
private:
typedef std::unordered_map<Point,
typename std::remove_cv<T>::type,
PointHasher> ValueMap;
struct PointHasher {
size_t operator()(const Point& p) const {
return p.first << (std::numeric_limits<Coord>::digits >> 1) ^ p.second;
}
};
static void SortAndShrink(NonZeroList& list) {
for (auto& it : list) {
auto& indices = it.second;
indices.shrink_to_fit();
std::sort(indices.begin(), indices.end());
}
// insert a sentinel vector to handle the case of all zeroes
if (list.empty())
list.emplace(Coord(), std::vector<Coord>(Coord()));
}
static void GetRange(const NonZeroList& list, Coord i, Coord from, Coord to,
CoordIterRange* r) {
auto lr = list.equal_range(i);
if (lr.first == lr.second) {
r->first = r->second = list.begin()->second.end();
return;
}
auto begin = lr.first->second.begin(), end = lr.first->second.end();
r->first = lower_bound(begin, end, from);
r->second = lower_bound(r->first, end, to);
}
ValueMap values_;
NonZeroList rows_, cols_;
Coord row_count_, col_count_;
};
#endif /* UTIL_IMMUTABLE_SPARSE_MATRIX_HPP_ */
For simplicity, it's immutable, but you can make it mutable; be sure to change std::vector to std::set if you want reasonably efficient "insertions" (changing a zero to a non-zero).
A: I would suggest doing something like:
typedef std::tuple<int, int, int> coord_t;
typedef boost::hash<coord_t> coord_hash_t;
typedef std::unordered_map<coord_t, int, coord_hash_t> sparse_array_t;
sparse_array_t the_data;
the_data[ { x, y, z } ] = 1; /* list-initialization is cool */
for( const auto& element : the_data ) {
    int xx, yy, zz, val;
    std::tie( xx, yy, zz ) = element.first;
    val = element.second;
    /* ... */
}
To help keep your data sparse, you might want to write a subclass of unordered_map, whose iterators automatically skip over (and erase) any items with a value of 0.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: What is Object Mocking and when do I need it? Many people use Mock Objects when they are writing unit tests. What is a Mock Object? Why would I ever need one? Do I need a Mock Object Framework?
A:
Do I need a Mock Object Framework?
Certainly not. Sometimes, writing mocks by hand can be quite tedious. But for simple things, it's not bad at all. Applying the principle of Last Responsible Moment to mocking frameworks, you should only switch from hand-written mocks to a framework when you've proven to yourself that hand-writing mocks is more trouble than it's worth.
If you're just getting starting with mocking, jumping straight into a framework is going to at least double your learning curve (can you double a curve?). Mocking frameworks will make much more sense when you've spent a few projects writing mocks by hand.
A: Object Mocking is a way to create a "virtual" or mocked object from an interface, abstract class, or class with virtual methods. It allows you to sort of wrap one of these in your own definition for testing purposes. It is useful for making an object that is relied on by a certain code block you are testing.
A popular one that I like to use is called Moq, but there are many others like RhinoMock and numerous ones that I don't know about.
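As a small sketch of what that looks like with Moq (the IPriceService interface and OrderCalculator class are invented for the example):

using Moq;
using NUnit.Framework;

// A collaborator we don't want to hit for real in a unit test.
public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class OrderCalculator
{
    private readonly IPriceService prices;
    public OrderCalculator(IPriceService prices) { this.prices = prices; }

    public decimal TotalFor(string sku, int quantity)
    {
        return prices.GetPrice(sku) * quantity;
    }
}

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void TotalFor_MultipliesPriceByQuantity()
    {
        Mock<IPriceService> priceService = new Mock<IPriceService>();
        priceService.Setup(s => s.GetPrice("ABC123")).Returns(9.99m);

        OrderCalculator calculator = new OrderCalculator(priceService.Object);

        Assert.AreEqual(29.97m, calculator.TotalFor("ABC123", 3));
        priceService.Verify(s => s.GetPrice("ABC123"), Times.Once());
    }
}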
A: It allows you to test how one part of your project interacts with the rest, without building the entire thing and potentially missing a vital part.
EDIT: Great example from wikipedia: It allows you to test out code beforehand, like a car designer uses a crash test dummy to test the behavior of a car during an accident.
A: Object Mocking is used to keep dependencies out of your unit test.
Sometimes you'll have a test like "SelectPerson" which will select a person from the database and return a Person object.
To do this, you would normally need a dependency on the database, however with object mocking you can simulate the interaction with the database with a mock framework, so it might return a dataset which looks like one returned from the database and you can then test your code to ensure that it handles translating a dataset to a person object, rather than using it to test that a connection to the database exists.
A: Another use is it will let you test against other parts of your system that aren't built yet. For example, if your class depends on some other class that is part of a feature that someone else is working on, you can just ask for a mostly complete interface, program to the interface and just mock the details as you expect them to work. Then, make sure your assumptions about the interface were correct (either while you are developing, or once the feature is complete).
A: Several people have already answered the 'what', but here are a couple of quick 'whys' that I can think of:
*
*Performance
Because unit tests should be fast, testing a component that
interacts with a network, a database, or other time-intensive
resource does not need to pay the penalty if it's done using mock
objects. The savings add up quickly.
*Collaboration
If you are writing a nicely encapsulated piece of
code that needs to interact with someone else's code (that hasn't
been written yet, or is being developed in parallel - a common
scenario), you can exercise your code with mock objects once an
interface has been agreed upon. Otherwise your code may not begin to
be tested until the other component is finished.
A: A mock object lets you test against just what you are writing, and abstract details such as accessing a resource (disk, a network service, etc). The mock then lets you pretend to be that external resource, or class or whatever.
You don't really need a mock object framework; just extend the class of the functionality you don't want to worry about in your test and make sure the class you are testing can use your mock instead of the real thing (pass it in via a constructor or setter or something).
Practice will show when mocks are helpful and when they aren't.
EDIT: Mocking resources is especially important so you don't have to rely on them to exist during the test, and you can mock the details of how they exist and what they respond (such as simulating a FileNotFoundException, or a webservice that is missing, or various possible return values of a webservice)... all without the slow access times involved (mocking will prove MUCH faster than accessing such resources in the test).
A: Whether or not you a mocking framework is useful depends in part on the language of the code you're writing. With a static language, you need to put in extra effort in order to trick the compiler into accepting your mock objects as a replacement for the real thing. In a dynamically-typed language such as Python, Ruby or Javascript, you can generally just attach the methods onto arbitrary object or class and pass that as the parameter -- so a framework would add much less value.
A: 2 recommended mocking frameworks for .net Unit testing are Typemock Isolator and Rhino Mock.
In the following link you can see an explanation from Typemock as to why you need a mocking framework for Unit Testing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: ASP.Net 2.0 Application without Business Logic Layer? Is it "acceptable" to have an ASP.Net 2.0 application without the BLL (Business Logic Layer) as the following?
*
*SQL Server Data Storage & Stored Procedures
*Data Link Layer (Strongly Typed Table Adapters) connecting to Stored Procs
*Presentation Layer ASPX Pages with Code behind and ObjectDataSource for connection straight to the DLL
Is a BLL always preferable, even if business logic is entirely validatable in the presentation's code behind? What are the potential drawbacks for not using a BLL?
A: It's acceptable as long as you understand the consequences. The main reason you'd have a BLL is to re-use that logic elsewhere throughout your application.
If you have all that validation logic in the presentation code, you're really making it difficult to re-use elsewhere within your application.
A: Like everything else it is environmental and depends on the use of the system. The question you need to ask your self is:
*
*Will this be actively developed
*Is this going to be used over the course of many years and expanded on
*Is the expansion of the application unknown and thus infinite
Really it comes down to laziness. How much time do you want to spend reworking the system from the UI? Because having no business layer means duplication of rules in your UI across possibly many, many pages.
Then again if this is a proof of concept or short demo or class project. Take the easy way out.
A: Acceptable? Depends who you ask and what your requirements are. Is this app an internal one-off used by you and a few other people? Maybe this is good enough. If it's meant to be a production ready enterprise application that will grow and be maintained over the years, then you probably want to invest more effort up-front to build a maintainable app.
Separation of Concerns is a key design technique for building maintainable apps. By mixing presentation, business, and data access logic all together, you can end up with a very fragile difficult to change application architecture.
A: It depends. If your business logic is in your click events and page loads, it is NOT acceptable.
It appears that your business logic is somewhere within the DAL (e.g., stored procedures and such); as long as you are consistent, it's fine. As long as you are very, very sure that your clients will always be using SQL Server, then this approach is not a problem.
I know a colleague who has all his business logic in stored procedures, so that his views are mostly thin clients to database back ends: he has been immensely successful with the product that he sells. But that's only because he's very consistent with it.
A: If the application is a general one, then the business logic layer can be used in complete other applications too. Like, I normally use my CMS related BLL classes in other applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: High availability Is there anyway to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster.
Specifically I am using the TCP/IP binding for speed, but on the rare occurrence that the machine is not available I would like to redirect traffic to the failover server. Not too bothered about losing messages. I'd just prefer not to write the code to handle re-routing.
A: Without trying to sound too vague but I think Windows Network Load Balancing (NLB) should handle this for you.
A: You need to use a layer 4 load balancer in front of the two endpoints. Prob best to stick with a dedicated piece of hardware.
A: Haven't done it yet with WCF but plan to have a local DNS entry pointing to our Network Load Balancing (NLB) virtual IP address which will direct all traffic to one of our servers hosting services within IIS. I have used NLB for this exact scenario in the past for web sites and see no reason why it will not work well with WCF.
The beauty of it is that you can take servers in and out of the virtual cluster at will and NLB takes care of all the ugly re-directing to an available node. It also comes with a great price tag: $FREE with your Windows Server license.
A: We've had good luck with BigIP as a solution, though it's not cheap or easy to set up.
One nice feature is it allows you to set up your SSL certificate (and backdoor to the CA) at the load balancer's common endpoint. Then you can use protocols to transfer the requests back to the WCF servers so the entire transmission is encrypted.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Where is a good place to start programming GUIs for windows? I have experience writing console and network client/server applications in C and C++, but I know next to nothing about using the win32 visual API, MFC, Qt, wxWidgets, etc. Where is a good place to start, and what method should I specialize in, so as to be future ready and robust?
A: I don't know if I'd call it a good place to start, but if you want to be future ready, take a look at the windows presentation foundation which is built entirely for the purpose of programming GUI's for windows
A: This is a rather broad question, as programming GUI applications in Windows can be done in so many ways.
There are two main parts to developing any GUI app: the language and the API/framework. Considering you're interested in learning to build Windows GUI apps, the language isn't really a point of focus for you. Hence, you should pick a language you already know and work with a framework or API that can be harnessed by your chosen language.
If you want to use C you're pretty much restricted to dealing with the Win32 API yourself, in which case reading Petzold or Richter would be great places to start. The Win32 API can be quite daunting, but it's well worth the effort to learn (imho). There are plenty of tutorials on Win32 on the web, and there's always MSDN, with a complete reference/guide to the Win32 API. Make sure you cover not just the API, but other areas such as resources/dialogs as they are building blocks for your Win32 application.
If you want to use C++ you have all of the options that you have when using C plus a few others. I'd recommend going with the Win32 API directly, and then moving on to a known framework such as MFC, Qt, wxWindows or GTK so that you can spend less time working with boilerplate code and instead focus on writing your application logic. The last 3 options I just listed have the added benefit of being cross-platform, so you don't have to worry too much about platform-specific issues. Given that you said you want to work with Windows, I'll assume you're keen to focus on that rather than cross-platform -- so go with MFC, but spend some time with the Win32 API first to get familiar with some of the concepts.
When dealing with MFC and the Win32 API, it's a good idea to try and get a solid understanding of the terminology prior to writing code. For example, you need to understand what the message pump is, and how it works. You need to know about concepts such as "owner-drawn controls", and subclassing. When you understand these things (and more), you'll find it easier to work with MFC because it uses similar terminology in its class interfaces (eg. you need to know what "translate messages" means before you can understand how and when to use PreTranslateMessage).
You could also use Managed C++ to write .NET GUI applications, but I've read in a few places that Managed C++ wasn't really intended to be used in this manner. Instead it should be used as a gateway between native/unmanaged code and managed code. If you're using .NET it's best to use a .NET language such as VB.NET or C# to build your GUIs.
So if you are going to use .NET, you currently have the choice of the WinForms library, or WPF. I personally feel that you'd be wasting time learning to build WinForms applications given that WPF is designed to replace it. Over time WPF will become more prevelant and Winforms will most likely die off. WPF has a much richer API set, and doesn't suffer from many of the limitations that Winforms does. If you do choose this route, however, you'll no doubt have to learn XAML, which is a markup language that drives WPF applications. This technology is coming of age, and there are many great places to learn about it. First, there are sites such as LearnWPF, and DrWPF which have some really great articles. Secondly, there are plenty of quality books on the topic.
So, to sum up, once you've picked your language and tech, the path is actually quite easy. Just pick up a book or two, read some blogs, get into some code samples.. and most importantly ... write code. Keep writing, keep making mistakes, and keep learning from them.
As a final note...
In other words, Silverlight. If you don't want to go the MS route you might give Adobe's Flash/Flex a look-see. Both Silverlight and Flash/Flex build RIAs, which I think is where we are headed. The days of Office-like apps are numbered
I don't agree at all. Silverlight is not the same as WPF. Silverlight is web-specific, and only has a subset of WPF's features. Given that the question asks for Windows GUI apps, Flash/Flex Rich Internet Apps are not really a fitting suggestion. I also don't agree that the days of Rich Client Applications (such as office) are numbered at all.
I hope that helps. Good luck :)
A: My first experience writing simple GUI applications for Windows was with C# and Visual Studio. The GUI-building interface is a simple drag and drop deal that generates skeleton methods based on potential user actions. I only did fairly basic programming with this, but I imagine it would be an excellent place to start to learn the basics and extend into the more advanced capabilities as you go.
A: There are plenty of online Win32 tutorials:
http://www.zeusedit.com/forum/viewtopic.php?t=1218
There are plenty of compilers to choose from:
http://www.zeusedit.com/forum/viewtopic.php?t=238
I would also recommend getting the Borland Win32 SDK documentation in WinHelp file format:
http://www.zeusedit.com/forum/viewtopic.php?t=7
It only covers the bare basics of the Win32, but when starting, this can be helpful as it is less daunting and less bloated than the MSDN.
A: I'd never go down the Silverlight, Flash/Flex or any similar route. It does look nice, but the main problem is that the code of the engine that runs it is completely closed-box and controlled by a single company. Take, for example 64bit versions of both of those. If some new platform emerges, you won't be able to migrate your existing code to it.
A: For business apps, Windows Forms is very mature. It provides a gentle path from auto-generating a lot for you into allowing fine-grained control and rolling your own. There are tons of high-quality third party controls and a large body of examples, docs, etc out there. It's hard to run into a problem that someone else hasn't solved. I highly recommend acquiring some background Win32 knowledge (e.g. Petzold) as the WinForms framework lives on top of it.
I have no WPF experience, but from the sample apps I've seen it looks like a good choice for apps whose interfaces would benefit from more graphical metaphors. So if you're doing a banking app, probably not worth the extra design overhead. But if you're doing, say, a warehouse management app it could be improved by dropping pretty boxes into pretty bins.
@StephenCox: wrong answer to the wrong question. OP is asking about desktop client apps, and moreover, WPF != Silverlight.
A: For a simple starting point to get your head around the "event-driven" nature basically all frameworks are created around look at FLTK.
Here are some quick starting videos Link
For professional use I'd recommend Qt, expensive but often worth it in commercial situations.
A: Since you are already familiar with C and C++ I would recommend learning how to write a simple Windows GUI app using Charles Petzold's book. It will give you the fundamental understanding of how Windows works. It's good to understand that most everything that you see is a window (a button is a window for example) and that these windows respond to messages. I wouldn't spend a lot of time on this, though, and you don't necessarily need to do this first if you are going to choose WPF. I just think it's good to have a basic understanding of this.
There was a good podcast recently on .NET Rocks called "Kate Gregory Develops in C++ for Vista!". On it, she recommends that someone starting out now should not use/learn MFC (even though it has been recently updated).
As far as getting ready for the future you need to learn WPF, but it isn't complete yet, so depending on the kinds of client side apps you want to create, you will probably need to learn WinForms. The majority of people aren't using WPF yet, so it's a good time to start learning. I think you will find it easier using C# to learn it instead of doing managed code with C++.
A: Get your basics right first. Best tutorial I've found is: http://winprog.org/tutorial/start.html
After that, although the homepage is hatefully distasteful, the tutorial pages are good in content and aesthetics: http://www.tenouk.com/cplusmfcdotnet.html
Then of course there's MSDN.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: What is the best way to do unit testing for ASP.NET 2.0 web pages? Any suggestions? Using visual studio in C#.
Are there any specific tools to use or methods to approach this?
Update:
Sorry, I should have been a little more specific. I am using ASP.Net 2.0 and was looking more for a tool like jUnit for Java. I took a look at NUnit and NUnitAsp and that looks very promising. And I didn't even know that Visual Studio Pro has a testing suite, so I'll look at all of these options (I've just started using Visual Studio/Asp.net/C# this summer).
A: These frameworks are useful for integration testing, but they can't provide unit testing, that is, testing the View isolated from persistence, business logic, whatever.
For unit testing Asp.Net Webforms, as well as MVC, you can use Ivonna. For example, you can mock your database access and verify that the mocked records are displayed in the datagrid. Or you can mock the membership provider and test the logged in scenario without having to navigate to the login page and entering your credentials, as with integration testing.
A: There was a screencast series a year or so ago on Polymorphic Podcast that did a pretty good intro walkthrough of an MVP implementation in ASP.NET. Implemented this way, unit tests fall into place much more naturally.
http://polymorphicpodcast.com/shows/mv-patterns/
A: Boy, that's a pretty general question. I'll do my best, but be prepared to see me miss by a mile.
Assumptions
*
*You are using ASP.NET, not plain ASP
*You don't really want to test your web pages, but the logic behind them. Unit testing the actual .ASPX pages is rather painful, but there are frameworks out there to do it. NUnitAsp is one.
The first thing to do is to organize (or plan) your code so that it can be tested. The two most popular design patterns for this at the time seem to be MVP and MVC. Both separate the logic of the application away from the view so that you can test the logic without the view (web pages) getting in your way.
Either MVP or MVC will be effective. MVC has the advantage of having a Microsoft framework almost ready to go.
Once you've selected a framework pattern that encourages testability, you need to use a unit testing tool. NUnit is a good starting point. Visual Studio Professional has a testing suite built it, but NUnit + TestDrive.NET also works in the IDE.
That's sort of a shotgun blast of information. I hope some of it hits. The Pragmatic Bookshelf has a good book covering the topic.
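To make the MVP idea concrete, here is a rough sketch of a presenter tested without any ASPX page at all (the interfaces and classes are invented for the example; NUnit syntax):

using NUnit.Framework;

public interface ICustomerView
{
    string CustomerName { get; set; }
}

public class CustomerPresenter
{
    private readonly ICustomerView view;
    public CustomerPresenter(ICustomerView view) { this.view = view; }

    public void Load(string name)
    {
        // Presentation logic lives here, not in the code-behind.
        view.CustomerName = name.ToUpperInvariant();
    }
}

// A trivial fake view - no web server, no browser, no ASPX needed.
public class FakeCustomerView : ICustomerView
{
    public string CustomerName { get; set; }
}

[TestFixture]
public class CustomerPresenterTests
{
    [Test]
    public void Load_UppercasesTheName()
    {
        FakeCustomerView view = new FakeCustomerView();
        new CustomerPresenter(view).Load("acme ltd");
        Assert.AreEqual("ACME LTD", view.CustomerName);
    }
}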
A: WatiN is the best that I've found. It integrates into Visual Studio unit testing or nunit & you can do pretty much anything you need in the browser (click links, submit forms, look for text/images, etc.), plus it's written in .net so you don't need to have ruby installed (as you do for watir, which is an awesome tool none the less)
A: NUnit is the way to go for you.
You could find some links here :
*
*http://nunitasp.sourceforge.net/tutorial/
*NUnit Unit Testing of ASP.NET Pages, Base Classes, Controls and other widgetry using Cassini by Scott Hanselman
A: Take a look at http://selenium.openqa.org/ it offers a good automated way to build unit tests hooking into the browser. there is a nice firefox plugin for recording tests and can utilize almost any unit testing framework. We had a presentation/demo at our local user group meeting last month and it looked awesome.
A: Your best bet is separating the model logic from presentation and thoroughly unit testing the model with NUnit or similar. Testing the users interaction with the web page can be fiddly.
If you actually do want to unit test the users interaction with the web page some of the afformentioned tools such as waitn seem good, an addition to that which I've heard of is Selenium
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: How to include PHP files that require an absolute path? I have a directory structure like the following;
script.php
inc/include1.php
inc/include2.php
objects/object1.php
objects/object2.php
soap/soap.php
Now, I use those objects in both script.php and /soap/soap.php. I could move them, but I want the directory structure like that for a specific reason. When executing script.php the include path is inc/include.php, and when executing /soap/soap.php it's ../inc. Absolute paths work (/mnt/webdev/[project name]/inc/include1.php...), but it's an ugly solution if I ever want to move the directory to a different location.
So is there a way to use relative paths, or a way to programmatically generate the "/mnt/webdev/[project name]/"?
A: have a look at http://au.php.net/reserved.variables
I think the variable you are looking for is: $_SERVER["DOCUMENT_ROOT"]
A: Another way to handle this that removes any need for includes at all is to use the autoload feature. Including everything your script needs "Just in Case" can impede performance. If your includes are all class or interface definitions, and you want to load them only when needed, you can overload the __autoload() function with your own code to find the appropriate class file and load it only when it's called. Here is the example from the manual:
function __autoload($class_name) {
require_once $class_name . '.php';
}
$obj = new MyClass1();
$obj2 = new MyClass2();
As long as you set your include_path variables accordingly, you never need to include a class file again.
A: You could define a constant with the path to the root directory of your project, and then put that at the beginning of the path.
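A minimal sketch of that approach (the constant name and the bootstrap file location are just assumptions for illustration):
<?php
// bootstrap.php at the project root, included first by every entry point
define('PROJECT_ROOT', dirname(__FILE__));

// later, from anywhere in the project
require_once PROJECT_ROOT . '/inc/include1.php';
?>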
A: You can use relative paths. Try __FILE__. This is a PHP constant which always returns the path/filename of the script it is in. So, in soap.php, you could do:
include dirname(__FILE__).'/../inc/include.php';
The full path and filename of the file. If used inside an include, the name of the included file is returned. Since PHP 4.0.2, __FILE__ always contains an absolute path with symlinks resolved, whereas in older versions it contained a relative path under some circumstances.
(source)
Another solution would be to set an include path in your httpd.conf or an .htaccess file.
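For example, with mod_php an .htaccess directive along these lines does the trick (the path shown reuses the project directory from the question and is just a placeholder):
php_value include_path ".:/mnt/webdev/[project name]/inc"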
A: Another option, related to Kevin's, is use __FILE__, but instead replace the php file name from within it:
<?php
$docRoot = str_replace($_SERVER['SCRIPT_NAME'], '', __FILE__);
require_once($docRoot . '/lib/include.php');
?>
I've been using this for a while. The only problem is that sometimes you don't have $_SERVER['SCRIPT_NAME'], but there is usually another, similar variable available.
A: I found this to work very well!
function findRoot() {
return(substr($_SERVER["SCRIPT_FILENAME"], 0, (stripos($_SERVER["SCRIPT_FILENAME"], $_SERVER["SCRIPT_NAME"])+1)));
}
Use:
<?php
function findRoot() {
return(substr($_SERVER["SCRIPT_FILENAME"], 0, (stripos($_SERVER["SCRIPT_FILENAME"], $_SERVER["SCRIPT_NAME"])+1)));
}
include(findRoot() . 'Post.php');
$posts = getPosts(findRoot() . 'posts_content');
include(findRoot() . 'includes/head.php');
for ($i=(sizeof($posts)-1); 0 <= $i; $i--) {
$posts[$i]->displayArticle();
}
include(findRoot() . 'includes/footer.php');
?>
A: This should work
$root = realpath($_SERVER["DOCUMENT_ROOT"]);
include "$root/inc/include1.php";
Edit: added improvement by aussieviking
A: I think the best way is to put your includes in your PHP include path. There are various ways to do this depending on your setup.
Then you can simply refer to
require_once 'inc1.php';
from inside any file regardless of where it is whether in your includes or in your web accessible files, or any level of nested subdirectories.
This allows you to have your include files outside the web server root, which is a best practice.
e.g.
site directory
html (web root)
your web-accessible files
includes
your include files
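With a layout like that, one way to point PHP at the includes directory from code is something like this (a rough sketch; adjust the path to wherever the includes directory actually lives):
<?php
set_include_path(get_include_path() . PATH_SEPARATOR . '/path/to/site/includes');
require_once 'inc1.php';
?>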
Also, check out __autoload for lazy loading of class files
http://www.google.com/search?q=setting+php+include+path
http://www.google.com/search?q=__autoload
A: require(str_repeat('../',(substr_count(getenv('SCRIPT_URL'),'/')-1))."/path/to/file.php");
I use this line of code. It goes back to the "top" of the site tree, then goes to the file desired.
For example, let's say I have this file tree:
domain.com/aaa/index.php
domain.com/bbb/ccc/ddd/index.php
domain.com/_resources/functions.php
I can include the functions.php file from wherever I am, just by copy-pasting
require(str_repeat('../',(substr_count(getenv('SCRIPT_URL'),'/')-1))."/_resources/functions.php");
If you need to use this code many times, you may create a function that returns the str_repeat('../',(substr_count(getenv('SCRIPT_URL'),'/')-1)) part. Then just insert this function in the first file you include. I have an "initialize.php" file that I include at the very top of each PHP page and which contains this function. The next time I have to include files, I in fact just use the function (named path_back):
require(path_back()."/_resources/another_php_file.php");
A:
@Flubba, does this allow me to have folders inside my include directory? Flat include directories give me nightmares, as the whole objects directory should be in the inc directory.
Oh yes, absolutely. So for example, we use a single layer of subfolders, generally:
require_once('library/string.class.php')
You need to be careful with relying on the include path too much in really high traffic sites, because php has to hunt through the current directory and then all the directories on the include path in order to see if your file is there and this can slow things up if you're getting hammered.
So for example if you're doing MVC, you'd put the path to your application directory in the include path and then refer to things in the form
'model/user.class'
'controllers/front.php'
or whatever.
But generally speaking, it just lets you work with really short paths in your PHP that will work from anywhere and it's a lot easier to read than all that realpath document root malarkey.
The benefit of those script-based alternatives others have suggested is they work anywhere, even on shared boxes; setting the include path requires a little more thought and effort but as I mentioned lets you start using __autoload which just the coolest.
A: If you are going to include a specific path in most of the files in your application, create a global constant for your root folder.
define("APPLICATION_PATH", realpath(dirname(__FILE__) . '/../app'));
or
define("APPLICATION_PATH", realpath(DIR(__FILE__) . '/../app'));
Now this constant "APPLICATION_PATH" can be used to include all the files, instead of calling realpath() every time you include a new file.
EX:
include(APPLICATION_PATH . "/config/config.ini");
Hope it helps ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "99"
} |
Q: How do I retrieve my MySQL username and password? I lost my MySQL username and password. How do I retrieve it?
A: Do it without downtime
Run the following command in the terminal to connect to the DBMS (you need root access):
sudo mysql -u root -p
Then update the password of the target user (in my example the username is mousavi and its password must be 123456):
UPDATE mysql.user SET authentication_string=PASSWORD('123456') WHERE user='mousavi';
At this point you need to do a flush to apply the changes:
FLUSH PRIVILEGES;
Done! You did it without stopping or restarting the MySQL service.
A: While you can't directly recover a MySQL password without bruteforcing, there might be another way - if you've used MySQL Workbench to connect to the database, and have saved the credentials to the "vault", you're golden.
On Windows, the credentials are stored in %APPDATA%\MySQL\Workbench\workbench_user_data.dat - encrypted with CryptProtectData (without any additional entropy). Decrypting is easy peasy:
#include <windows.h>
#include <wincrypt.h>
#include <vector>
#include <stdexcept>

// Link against Crypt32.lib for CryptUnprotectData
std::vector<unsigned char> decrypt(BYTE *input, size_t length) {
    DATA_BLOB inblob { static_cast<DWORD>(length), input };
    DATA_BLOB outblob;
    if (!CryptUnprotectData(&inblob, NULL, NULL, NULL, NULL, CRYPTPROTECT_UI_FORBIDDEN, &outblob)) {
        throw std::runtime_error("Couldn't decrypt");
    }
    // Copy exactly the decrypted bytes, then free the buffer allocated by the API
    std::vector<unsigned char> output(outblob.pbData, outblob.pbData + outblob.cbData);
    LocalFree(outblob.pbData);
    return output;
}
Or you can check out this DonationCoder thread for source + executable of a quick-and-dirty implementation.
A: Log in to MySQL from the Windows cmd using an existing user:
mysql -u username -p
Enter password:****
Then run the following command:
mysql> SELECT * FROM mysql.user;
After that, copy the encrypted password for the corresponding user; there are several online password decryption applications available on the web. Use one of those to recover the password, and use it the next time you log in.
Or update the user's password using the following command:
mysql> UPDATE mysql.user SET Password=PASSWORD('[password]') WHERE User='[username]';
Then login using the new password and user.
A: If you have root access to the server where mysql is running you should stop the mysql server using this command
sudo service mysql stop
Now start mysql using this command
sudo /usr/sbin/mysqld --skip-grant-tables --skip-networking &
Now you can login to mysql using
sudo mysql
FLUSH PRIVILEGES;
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');
Full instructions can be found here http://www.techmatterz.com/recover-mysql-root-password/
A: Unfortunately your user password is irretrievable. It has been hashed with a one-way hash, which is irreversible if you don't know the original password. I recommend going with Xenph Yan's suggestion above and just creating a new one.
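For reference, creating a replacement account looks roughly like this (the username, password, and database name are placeholders):
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'new_password';
GRANT ALL PRIVILEGES ON mydatabase.* TO 'newuser'@'localhost';
FLUSH PRIVILEGES;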
You can also use the following procedure from the manual for resetting the password for any MySQL root accounts on Windows:
*
*Log on to your system as Administrator.
*Stop the MySQL server if it is running. For a server that is running as a Windows service, go to
the Services manager:
Start Menu -> Control Panel -> Administrative Tools -> Services
Then find the MySQL service in the list, and stop it. If your server is
not running as a service, you may need to use the Task Manager to force it to stop.
*Create a text file and place the following statements in it. Replace the password with the password that you want to use.
UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
FLUSH PRIVILEGES;
The UPDATE and FLUSH statements each must be written on a single line. The UPDATE statement resets the password for all existing root accounts, and the FLUSH statement tells the server to reload the grant tables into memory.
*Save the file. For this example, the file will be named C:\mysql-init.txt.
*Open a console window to get to the command prompt:
Start Menu -> Run -> cmd
*Start the MySQL server with the special --init-file option:
C:\> C:\mysql\bin\mysqld-nt --init-file=C:\mysql-init.txt
If you installed MySQL to a location other than C:\mysql, adjust the command accordingly.
The server executes the contents of the file named by the --init-file option at startup, changing each root account password.
You can also add the --console option to the command if you want server output to appear in the console window rather than in a log file.
If you installed MySQL using the MySQL Installation Wizard, you may need to specify a --defaults-file option:
C:\> "C:\Program Files\MySQL\MySQL Server 5.0\bin\mysqld-nt.exe" --defaults-file="C:\Program Files\MySQL\MySQL Server 5.0\my.ini" --init-file=C:\mysql-init.txt
The appropriate --defaults-file setting can be found using the Services Manager:
Start Menu -> Control Panel -> Administrative Tools -> Services
Find the MySQL service in the list, right-click on it, and choose the Properties option. The Path to executable field contains the --defaults-file setting.
*After the server has started successfully, delete C:\mysql-init.txt.
*Stop the MySQL server, then restart it in normal mode again. If you run the server as a service, start it from the Windows Services window. If you start the server manually, use whatever command you normally use.
You should now be able to connect to MySQL as root using the new password.
A: An improvement to the most useful answer here:
1] No need to restart the mysql server
2] Security concern for a MySQL server connected to a network
There is no need to restart the MySQL server.
use FLUSH PRIVILEGES; after the update mysql.user statement for password change.
The FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.
The --skip-grant-tables option enables anyone to connect without a password and with all privileges. Because this is insecure, you might want to
use --skip-grant-tables in conjunction with --skip-networking to prevent remote clients from connecting.
from: reference: resetting-permissions-generic
A:
Stop the MySQL process.
Start the MySQL process with the --skip-grant-tables option.
Start the MySQL console client with the -u root option.
List all the users;
SELECT * FROM mysql.user;
Reset password;
UPDATE mysql.user SET Password=PASSWORD('[password]') WHERE User='[username]';
But DO NOT FORGET to
Stop the MySQL process
Start the MySQL Process normally (i.e. without the --skip-grant-tables option)
when you are finished. Otherwise, your database's security could be compromised.
A: After MySQL 5.7.6 and MariaDB 10.1.20 (currently in 2022) you can:
Update a mysql user password having access to root user
ALTER USER 'some_user_name'@'localhost' IDENTIFIED BY 'a_super_secure_password';
Update mysql root user
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password by 'mynewpassword';
List all users
select user from mysql.user;
A: If you happen to have ODBC set up, you can get the password from the ODBC config file. This is in /etc/odbc.ini on Linux and in the Software/ODBC folder in the registry on Windows (there are several - it may take some hunting).
A: Save the file. For this example, the file will be named C:\mysql-init.txt.
It asks for administrative permissions when saving the file.
A: Although a strict, logical, computer-science-ish interpretation of the OP's question would be to require both "How do I retrieve my MySQL username" and "password" - I thought it might be useful to someone to also address the OR interpretation. In other words ...
1) How do I retrieve my MySQL username?
OR
2) password
This latter condition seems to have been amply addressed already so I won't bother with it. The following is a solution for the case "How do I retrieve my MySQL username" alone. HIH.
To find your mysql username run the following commands from the mysql shell ...
SELECT User FROM mysql.user;
It will print a table of all MySQL users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "185"
} |
Q: Drop all tables whose names begin with a certain string How can I drop all tables whose names begin with a given string?
I think this can be done with some dynamic SQL and the INFORMATION_SCHEMA tables.
A: EXEC sp_MSforeachtable 'if PARSENAME("?",1) like ''%CertainString%'' DROP TABLE ?'
Edit:
sp_MSforeachtable is undocumented and hence not suitable for production, because its behavior may vary depending on the SQL Server version.
A: Here is my solution:
SELECT CONCAT('DROP TABLE `', TABLE_NAME,'`;')
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'TABLE_PREFIX_GOES_HERE%';
And of course you need to replace TABLE_PREFIX_GOES_HERE with your prefix.
A: I saw this post when I was looking for a MySQL statement to drop all WordPress tables. Based on @Xenph Yan's answer, here is what I did eventually:
SELECT CONCAT( 'DROP TABLE `', TABLE_NAME, '`;' ) AS query
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'wp_%'
This will give you the set of drop queries for all tables beginning with wp_.
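If you have command-line access and want to run the generated statements in one pass, one option (credentials and database name are placeholders) is to save the output to a file and feed it back to mysql:
mysql -u user -p -N -e "SELECT CONCAT('DROP TABLE \`', TABLE_NAME, '\`;') FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE 'wp_%'" mydatabase > drop_wp_tables.sql
mysql -u user -p mydatabase < drop_wp_tables.sql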
A: CREATE PROCEDURE usp_GenerateDROP
@Pattern AS varchar(255)
,@PrintQuery AS bit
,@ExecQuery AS bit
AS
BEGIN
DECLARE @sql AS varchar(max)
SELECT @sql = COALESCE(@sql, '') + 'DROP TABLE [' + TABLE_NAME + ']' + CHAR(13) + CHAR(10)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE @Pattern
IF @PrintQuery = 1 PRINT @sql
IF @ExecQuery = 1 EXEC (@sql)
END
A: Xenph Yan's answer was far cleaner than mine but here is mine all the same.
DECLARE @startStr AS Varchar (20)
SET @startStr = 'tableName'
DECLARE @startStrLen AS int
SELECT @startStrLen = LEN(@startStr)
SELECT 'DROP TABLE ' + name FROM sysobjects
WHERE type = 'U' AND LEFT(name, @startStrLen) = @startStr
Just change tableName to the characters that you want to search with.
A: This worked for me.
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += '
DROP TABLE '
+ QUOTENAME(s.name)
+ '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
INNER JOIN sys.schemas AS s
ON t.[schema_id] = s.[schema_id]
WHERE t.name LIKE 'something%';
PRINT @sql;
-- EXEC sp_executesql @sql;
A: This will get you the tables in foreign key order and avoid dropping some of the tables created by SQL Server. The t.Ordinal value will slice the tables into dependency layers.
WITH TablesCTE(SchemaName, TableName, TableID, Ordinal) AS
(
SELECT OBJECT_SCHEMA_NAME(so.object_id) AS SchemaName,
OBJECT_NAME(so.object_id) AS TableName,
so.object_id AS TableID,
0 AS Ordinal
FROM sys.objects AS so
WHERE so.type = 'U'
AND so.is_ms_Shipped = 0
AND OBJECT_NAME(so.object_id)
LIKE 'MyPrefix%'
UNION ALL
SELECT OBJECT_SCHEMA_NAME(so.object_id) AS SchemaName,
OBJECT_NAME(so.object_id) AS TableName,
so.object_id AS TableID,
tt.Ordinal + 1 AS Ordinal
FROM sys.objects AS so
INNER JOIN sys.foreign_keys AS f
ON f.parent_object_id = so.object_id
AND f.parent_object_id != f.referenced_object_id
INNER JOIN TablesCTE AS tt
ON f.referenced_object_id = tt.TableID
WHERE so.type = 'U'
AND so.is_ms_Shipped = 0
AND OBJECT_NAME(so.object_id)
LIKE 'MyPrefix%'
)
SELECT DISTINCT t.Ordinal, t.SchemaName, t.TableName, t.TableID
FROM TablesCTE AS t
INNER JOIN
(
SELECT
itt.SchemaName AS SchemaName,
itt.TableName AS TableName,
itt.TableID AS TableID,
Max(itt.Ordinal) AS Ordinal
FROM TablesCTE AS itt
GROUP BY itt.SchemaName, itt.TableName, itt.TableID
) AS tt
ON t.TableID = tt.TableID
AND t.Ordinal = tt.Ordinal
ORDER BY t.Ordinal DESC, t.TableName ASC
A: You may need to modify the query to include the owner if there's more than one in the database.
DECLARE @cmd varchar(4000)
DECLARE cmds CURSOR FOR
SELECT 'drop table [' + Table_Name + ']'
FROM INFORMATION_SCHEMA.TABLES
WHERE Table_Name LIKE 'prefix%'
OPEN cmds
WHILE 1 = 1
BEGIN
FETCH cmds INTO @cmd
IF @@fetch_status != 0 BREAK
EXEC(@cmd)
END
CLOSE cmds;
DEALLOCATE cmds
This is cleaner than using a two-step approach of generate script plus run. But one advantage of the script generation is that it gives you the chance to review the entirety of what's going to be run before it's actually run.
I know that if I were going to do this against a production database, I'd be as careful as possible.
Edit: Code sample fixed.
A: SELECT 'DROP TABLE "' + TABLE_NAME + '"'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '[prefix]%'
This will generate a script.
Adding clause to check existence of table before deleting:
SELECT 'IF OBJECT_ID(''' +TABLE_NAME + ''') IS NOT NULL BEGIN DROP TABLE [' + TABLE_NAME + '] END;'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '[prefix]%'
A: On Oracle XE this works:
SELECT 'DROP TABLE "' || TABLE_NAME || '";'
FROM USER_TABLES
WHERE TABLE_NAME LIKE 'YOURTABLEPREFIX%'
Or if you want to remove the constraints and free up space as well, use this:
SELECT 'DROP TABLE "' || TABLE_NAME || '" cascade constraints PURGE;'
FROM USER_TABLES
WHERE TABLE_NAME LIKE 'YOURTABLEPREFIX%'
Which will generate a bunch of DROP TABLE cascade constraints PURGE statements...
For VIEWS use this:
SELECT 'DROP VIEW "' || VIEW_NAME || '";'
FROM USER_VIEWS
WHERE VIEW_NAME LIKE 'YOURVIEWPREFIX%'
A: select 'DROP TABLE ' + name from sysobjects
where type = 'U' and sysobjects.name like '%test%'
-- Test is the table name
A: SELECT 'if object_id(''' + TABLE_NAME + ''') is not null begin drop table "' + TABLE_NAME + '" end;'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '[prefix]%'
A: I had to do a slight variation on Xenph Yan's answer, I suspect because I had tables not in the default schema.
SELECT 'DROP TABLE Databasename.schema.' + TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'strmatch%'
A: In case of temporary tables, you might want to try
SELECT 'DROP TABLE "' + t.name + '"'
FROM tempdb.sys.tables t
WHERE t.name LIKE '[prefix]%'
A: I would like to post my proposed solution, which actually DROPs (not just generates and selects the drop commands) all tables matching a wildcard (e.g. "table_20210114") that are older than a particular number of days.
DECLARE
@drop_command NVARCHAR(MAX) = '',
@system_time date,
@table_date nvarchar(8),
@older_than int = 7
Set @system_time = (select getdate() - @older_than)
Set @table_date = (SELECT CONVERT(char(8), @system_time, 112))
SELECT @drop_command += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME([Name]) + ';'
FROM <your_database_name>.sys.tables
WHERE [Name] LIKE 'table_%' AND RIGHT([Name],8) < @table_date
SELECT @drop_command
EXEC sp_executesql @drop_command
A: If your query returns more than one line, you can collect the results and merge them into a query.
declare @Tables as nvarchar(max) = '[schemaName].['
select @Tables =@Tables + TABLE_NAME +'],[schemaName].['
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE='BASE TABLE'
AND TABLE_SCHEMA = 'schemaName'
AND TABLE_NAME like '%whateverYourQueryIs%'
select @Tables = Left(@Tables,LEN(@Tables)-13) --trying to remove the last ",[schemaName].[" part, so you need to change this 13 to the actual length
--print @Tables
declare @Query as nvarchar(max) = 'Drop table ' +@Tables
--print @Query
exec sp_executeSQL @Query
A: Try the following code:
declare @TableLst table(TblNames nvarchar(500))
insert into @TableLst (TblNames)
SELECT 'DROP TABLE [' + Table_Name + ']'
FROM INFORMATION_SCHEMA.TABLES
WHERE Table_Name LIKE 'yourFilter%'
WHILE ((select COUNT(*) as CntTables from @TableLst) > 0)
BEGIN
declare @ForExecCms nvarchar(500) = (select top(1) TblNames from @TableLst)
EXEC(@ForExecCms)
delete from @TableLst where TblNames = @ForExecCms
END
This SQL script is executed without using a cursor.
A: If you need to delete tables that are linked by foreign keys:
USE [CentralIntake]
GO
DECLARE @name VARCHAR(200);
DECLARE @DropForeignKeyProcedure varchar(4000);
DECLARE @DropTableProcedure varchar(4000);
/*TEST*/ SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE Table_Name LIKE '%_unused'
DECLARE tb_cursor CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES WHERE Table_Name LIKE '%_unused';
OPEN tb_cursor
FETCH NEXT FROM tb_cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
/*TEST*/ SELECT 'ALTER TABLE [' + OBJECT_SCHEMA_NAME(parent_object_id) + '].[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT [' + name + ']' FROM sys.foreign_keys WHERE referenced_object_id = object_id(@name)
DECLARE fk_cursor CURSOR FOR
(SELECT 'ALTER TABLE [' + OBJECT_SCHEMA_NAME(parent_object_id) + '].[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT [' + name + ']'
FROM sys.foreign_keys
WHERE referenced_object_id = object_id(@name));
OPEN fk_cursor
FETCH NEXT FROM fk_cursor INTO @DropForeignKeyProcedure
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC (@DropForeignKeyProcedure);
FETCH NEXT FROM fk_cursor INTO @DropForeignKeyProcedure
END
CLOSE fk_cursor
DEALLOCATE fk_cursor
SET @DropTableProcedure = (SELECT 'DROP TABLE [' + TABLE_CATALOG + '].[' + TABLE_SCHEMA + '].[' + @name + ']'
FROM INFORMATION_SCHEMA.TABLES
where TABLE_NAME = @name)
EXEC(@DropTableProcedure)
FETCH NEXT FROM tb_cursor INTO @name
END
CLOSE tb_cursor
DEALLOCATE tb_cursor
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "175"
} |
Q: Where can I get the Windows Workflow "wca.exe" application? I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest .NET 3.5 SDK, but I just downloaded and fully installed, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe?
A: Should be part of the .NET 3 SDK (and later versions as well). If you've already installed this, the path might look something like
C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\wca.exe
More info on Guy Burstein's blog.
A: On my machine, with Visual Studio 2008 installed, it's in
C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I update Ruby Gems from behind a Proxy (ISA-NTLM) The firewall I'm behind is running Microsoft ISA server in NTLM-only mode. Has anyone had success getting their Ruby gems to install/update via the Ruby SSPI gem or another method?
... or am I just being lazy?
Note: rubysspi-1.2.4 does not work.
This also works for "igem", part of the IronRuby project
A: I tried all the above solutions, however none of them worked. If you're on Linux/macOS I highly suggest using tsocks over an SSH tunnel. What you need in order to get this setup working is a machine where you can log in via SSH, and in addition to that a program called tsocks installed.
The idea here is to create a dynamic tunnel via SSH (a socks5 proxy). We then configure tsocks to use this tunnel and to start our applications, in this case:
tsocks gem install ...
or to account for rails 3.0:
tsocks bundle install
A more detailed guide can be found under:
http://blog.byscripts.info/2011/04/bypass-a-proxy-with-ssh-tunnel-and-tsocks-under-ubuntu/
Despite being written for Ubuntu, the procedure should be applicable to all Unix-based machines. An alternative to tsocks for Windows is FreeCap (http://www.freecap.ru/eng/). A viable SSH client on Windows is PuTTY.
A: Posts abound regarding this topic, and to help others save hours of trying different solutions, here is the final result of my hours of tinkering.
The three solutions around the internet at the moment are:
rubysspi
apserver
cntlm
rubysspi only works from a Windows machine, AFAIK, as it relies on the Win32Api library. So if you are on a Windows box trying to run through a proxy, this is the solution for you. If you are on a Linux distro, you're out of luck.
apserver seems to be a dead project. The links listed in the posts I've seen lead to a 404 page on SourceForge, and a search for "apserver" on SourceForge returns nothing.
The sourceforge link for cntlm that I've seen redirects to http://cntlm.awk.cz/, but that times out. A search on sourceforge turns up this link, which does work: http://sourceforge.net/projects/cntlm/
After downloading and configuring cntlm I have managed to install a gem through the proxy, so this seems to be the best solution for Linux distros.
A: A workaround is to install http://web.archive.org/web/20060913093359/http://apserver.sourceforge.net:80/ on your local machine, configure it and run gems through this proxy.
*
*Install: Just download apserver 097 (and not the experimental 098!) and unpack.
*Configure: Edit the server.cfg file and put the values for your MS proxy in PARENT_PROXY and PARENT_PROXY_PORT. Enter the values for DOMAIN and USER. Leave PASSWORD blank (nothing after the colon) – you will be prompted when launching it.
*Run apserver: cd aps097; python main.py
*Run Gems: gem install --http-proxy http://localhost:5865/ library
A: I've been using cntlm (http://cntlm.sourceforge.net/) at work. Configuration is very similar to ntlmaps.
*
*gem install --http-proxy http://localhost:3128 _name_of_gem_
Works great, and also allows me to connect my Ubuntu box to the ISA proxy.
Check out http://cntlm.wiki.sourceforge.net/ for more information
A: I am working behind a proxy and just installed SASS by downloading directly from http://rubygems.org.
I then ran sudo gem install [path/to/downloaded/gem/file]. I cannot say this will work for all gems, but it may help some people.
A: I tried some of these solutions, and none of them worked. I finally found a solution that works for me:
gem install -p http://proxy_ip:proxy_port rails
using the -p parameter to pass the proxy. I'm using Gem version 1.9.1.
A: If you are on a *nix system, use this:
export http_proxy=http://${proxy.host}:${port}
export https_proxy=http://${proxy.host}:${port}
and then try:
gem install ${gem_name}
A: This worked for me in a Windows box:
set HTTP_PROXY=http://server:port
set HTTP_PROXY_USER=username
set HTTP_PROXY_PASS=userpassword
set HTTPS_PROXY=http://server:port
set HTTPS_PROXY_USER=username
set HTTPS_PROXY_PASS=userpassword
I have a batch file with these lines that I use to set environment values when I need it.
The trick, in my case, was the HTTPS_PROXY settings. Without them, I always got a 407 proxy authentication error.
A: For the Windows OS, I used Fiddler to work around the issue.
*
*Install/Run Fiddler from www.fiddler2.com
*Run gem:
$ gem install --http-proxy http://localhost:8888 $gem_name
A: I wasn't able to get mine working from the command-line switch but I have been able to do it just by setting my HTTP_PROXY environment variable. (Note that case seems to be important). I have a batch file that has a line like this in it:
SET HTTP_PROXY=http://%USER%:%PASSWORD%@%SERVER%:%PORT%
I set the four referenced variables before I get to this line obviously. As an example if my username is "wolfbyte", my password is "secret" and my proxy is called "pigsy" and operates on port 8080:
SET HTTP_PROXY=http://wolfbyte:secret@pigsy:8080
You might want to be careful how you manage that because it stores your password in plain text in the machine's session but I don't think it should be too much of an issue.
A: Create a .gemrc file (either in /etc/gemrc or ~/.gemrc or for example with chef gem in /opt/chef/embedded/etc/gemrc) containing:
http_proxy: http://proxy:3128
Then you can gem install as usual.
A: rubysspi-1.3.1 worked for me on Windows 7, using the instructions from this page:
http://www.stuartellis.eu/articles/installing-ruby/
A: This solved my problem perfectly:
gem install -p http://proxy_ip:proxy_port compass
You might need to add your user name and password to it:
gem install -p http://[username]:[password]@proxy_ip:proxy_port compass
A: This totally worked:
gem install --http-proxy http://COMPANY.PROXY.ADDRESS $gem_name
A: If you are having problems getting authenticated through your proxy, be sure to set the environment variables in exactly the format below:
set HTTP_PROXY=some.proxy.com
set HTTP_PROXY_USER=user
set HTTP_PROXY_PASS=password
The user:password@ syntax doesn't seem to work and there are also some badly named environment variables floating around on Stack Overflow and various forum posts.
Also be aware that it can take a while for your gems to start downloading. At first I thought it wasn't working but with a bit of patience they started downloading as expected.
A: Quick answer : Add proxy configuration with parameter for both install/update
gem install --http-proxy http://host:port/ package_name
gem update --http-proxy http://host:port/ package_name
A: Rather than editing batch files (which you may have to do for other Ruby gems, e.g. Bundler), it's probably better to do this once, and do it properly.
On Windows, behind my corporate proxy, all I had to do was add the HTTP_PROXY environment variable to my system.
*
*Start -> right click Computer -> Properties
*Choose "Advanced System Settings"
*Click Advanced -> Environment Variables
*Create a new System variable named "HTTP_PROXY", and set the Value to your proxy server
*Reboot or log out and back in again
Depending on your authentication requirements, the HTTP_PROXY value can be as simple as:
http://proxy-server-name
Or more complex as others have pointed out
http://username:password@proxy-server-name:port-number
A: If you want to use SOCKS5 proxy, you may try rubygems-socksproxy https://github.com/gussan/rubygems-socksproxy.
It works for me on OSX 10.9.3.
A: If behind a proxy, you can navigate to Ruby downloads and click on Download, which will download the specified update (or gem) to a desired location.
Next, via the Ruby command line, navigate to the downloaded location by using: pushd [directory]
eg : pushd D:\Setups
then run the following command: gem install [update name] --local
eg: gem install rubygems-update --local.
Tested on Windows 7 with Ruby update version 2.4.1.
To check use following command : ruby -v
A: For anyone tunnelling with SSH: you can create a version of the gem command that uses a SOCKS proxy:
*
*Install socksify with gem install socksify (you'll need to be able to do this step without proxy, at least)
*Copy your existing gem exe
cp $(command which gem) /usr/local/bin/proxy_gem
*Open it in your favourite editor and add this at the top (after the shebang)
if ENV['SOCKS_PROXY']
  require 'socksify'
  host, port = ENV['SOCKS_PROXY'].split(':')
  TCPSocket.socks_server = host || 'localhost'
  TCPSocket.socks_port = port ? port.to_i : 1080
end
*Set up your tunnel
ssh -D 8123 -f -C -q -N user@proxy
*Run your gem command with proxy_gem
SOCKS_PROXY=localhost:8123 proxy_gem push mygem
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "239"
} |
Q: How to easily consume a web service from PHP Is there available any tool for PHP which can be used to generate code for consuming a web service based on its WSDL? Something comparable to clicking "Add Web Reference" in Visual Studio or the Eclipse plugin which does the same thing for Java.
A: In PHP 5 you can use SoapClient on the WSDL to call the web service functions. For example:
$client = new SoapClient("some.wsdl");
and $client is now an object which has class methods as defined in some.wsdl. So if there was a method called getTime in the WSDL then you would just call:
$result = $client->getTime();
And the result of that would (obviously) be in the $result variable. You can use the __getFunctions method to return a list of all the available methods.
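For example, a quick way to dump everything the WSDL exposes (just an illustrative snippet, reusing the $client from above):
var_dump($client->__getFunctions());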
A: I've had great success with wsdl2php. It will automatically create wrapper classes for all objects and methods used in your web service.
A: This article explains how you can use PHP SoapClient to call an API web service.
A: Say you were provided the following:
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/" xmlns:int="http://thesite.com/">
<x:Header/>
<x:Body>
<int:authenticateLogin>
<int:LoginId>12345</int:LoginId>
</int:authenticateLogin>
</x:Body>
</x:Envelope>
and
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<authenticateLoginResponse xmlns="http://thesite.com/">
<authenticateLoginResult>
<RequestStatus>true</RequestStatus>
<UserName>003p0000006XKX3AAO</UserName>
<BearerToken>Abcdef1234567890</BearerToken>
</authenticateLoginResult>
</authenticateLoginResponse>
</s:Body>
</s:Envelope>
Let's say that accessing http://thesite.com/ said that the WSDL address is:
http://thesite.com/PortalIntegratorService.svc?wsdl
$client = new SoapClient('http://thesite.com/PortalIntegratorService.svc?wsdl');
$result = $client->authenticateLogin(array('LoginId' => 12345));
if (!empty($result->authenticateLoginResult->RequestStatus)
&& !empty($result->authenticateLoginResult->UserName)) {
echo 'The username is: '.$result->authenticateLoginResult->UserName;
}
As you can see, the items specified in the XML are used in the PHP code though the LoginId value can be changed.
A: I have used NuSOAP in the past. I liked it because it is just a set of PHP files that you can include. There is nothing to install on the web server and no config options to change. It has WSDL support as well which is a bonus.
A: Well, those features are specific to a tool that you are using for development in those languages.
You wouldn't have those tools if (for example) you were using notepad to write code. So, maybe you should ask the question for the tool you are using.
For PHP: http://webservices.xml.com/pub/a/ws/2004/03/24/phpws.html
A: Hi, I got this from this site: http://forums.asp.net/t/887892.aspx?Consume+an+ASP+NET+Web+Service+with+PHP
The web service has method Add which takes two params:
<?php
$client = new SoapClient("http://localhost/csharp/web_service.asmx?wsdl");
print_r( $client->Add(array("a" => "5", "b" =>"2")));
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: CSV string handling Typical way of creating a CSV string (pseudocode):
*
*Create a CSV container object (like a StringBuilder in C#).
*Loop through the strings you want to add appending a comma after each one.
*After the loop, remove that last superfluous comma.
Code sample:
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
sb.Remove(sb.Length - 1, 1);
//sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString();
}
I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence?
I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas?
A: Your code is not really compliant with the full CSV format. If you are just generating CSV from data that has no commas, leading/trailing spaces, tabs, newlines or quotes, it should be fine. However, in most real-world data-exchange scenarios, you do need the full implementation.
For generating proper CSV, you can use this:
public static String EncodeCsvLine(params String[] fields)
{
StringBuilder line = new StringBuilder();
for (int i = 0; i < fields.Length; i++)
{
if (i > 0)
{
line.Append(DelimiterChar);
}
String csvField = EncodeCsvField(fields[i]);
line.Append(csvField);
}
return line.ToString();
}
static String EncodeCsvField(String field)
{
StringBuilder sb = new StringBuilder();
sb.Append(field);
// Some fields with special characters must be embedded in double quotes
bool embedInQuotes = false;
// Embed in quotes to preserve leading/trailing whitespace
if (sb.Length > 0 &&
(sb[0] == ' ' ||
sb[0] == '\t' ||
sb[sb.Length-1] == ' ' ||
sb[sb.Length-1] == '\t' ))
{
embedInQuotes = true;
}
for (int i = 0; i < sb.Length; i++)
{
// Embed in quotes to preserve: commas, line-breaks etc.
if (sb[i] == DelimiterChar ||
sb[i]=='\r' ||
sb[i]=='\n' ||
sb[i] == '"')
{
embedInQuotes = true;
break;
}
}
// If the field itself has quotes, they must each be represented
// by a pair of consecutive quotes.
sb.Replace("\"", "\"\"");
String rv = sb.ToString();
if (embedInQuotes)
{
rv = "\"" + rv + "\"";
}
return rv;
}
Might not be world's most efficient code, but it has been tested. Real world sucks compared to quick sample code :)
A: Don't forget our old friend "for". It's not as nice-looking as foreach but it has the advantage of being able to start at the second element.
public string ReturnAsCSV(ContactList contactList)
{
if (contactList == null || contactList.Count == 0)
return string.Empty;
StringBuilder sb = new StringBuilder(contactList[0].Name);
for (int i = 1; i < contactList.Count; i++)
{
sb.Append(",");
sb.Append(contactList[i].Name);
}
return sb.ToString();
}
You could also wrap the second Append in an "if" that tests whether the Name property contains a double-quote or a comma, and if so, escape them appropriately.
A: Why not use one of the open source CSV libraries out there?
I know it sounds like overkill for something that appears so simple, but as you can tell by the comments and code snippets, there's more than meets the eye. In addition to handling full CSV compliance, you'll eventually want to handle both reading and writing CSVs... and you may want file manipulation.
I've used Open CSV on one of my projects before (but there are plenty of others to choose from). It certainly made my life easier. ;)
A: You could instead add the comma as the first thing inside your foreach.
if (sb.Length > 0) sb.Append(",");
A: You could also make an array of c.Name data and use String.Join method to create your line.
public string ReturnAsCSV(ContactList contactList)
{
List<String> tmpList = new List<string>();
foreach (Contact c in contactList)
{
tmpList.Add(c.Name);
}
return String.Join(",", tmpList.ToArray());
}
This might not be as performant as the StringBuilder approach, but it definitely looks cleaner.
Also, you might want to consider using .CurrentCulture.TextInfo.ListSeparator instead of a hard-coded comma -- If your output is going to be imported into other applications, you might have problems with it. ListSeparator may be different across different cultures, and MS Excel at the very least, honors this setting. So:
return String.Join(
System.Globalization.CultureInfo.CurrentCulture.TextInfo.ListSeparator,
tmpList.ToArray());
A: You could use LINQ to Objects:
string [] strings = contactList.Select(c => c.Name).ToArray();
string csv = string.Join(",", strings);
Obviously that could all be done in one line, but it's a bit clearer on two.
A:
I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence?
You're prematurely optimizing, the performance hit would be negligible.
A: Just a thought, but remember to handle commas and quotation marks (") in the field values, otherwise your CSV file may break the consumer's reader.
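A minimal sketch of that kind of escaping (illustrative only; the longer answer above covers the full rules):
static string EscapeCsvField(string value)
{
    // Quote the field if it contains a comma, a quote or a line break,
    // doubling any embedded quotes along the way
    if (value.Contains(",") || value.Contains("\"") || value.Contains("\n") || value.Contains("\r"))
    {
        return "\"" + value.Replace("\"", "\"\"") + "\"";
    }
    return value;
}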
A: I've used this method before. The Length property of StringBuilder is NOT readonly, so reducing it by one truncates the last character. But you have to make sure your length is not zero to start with (which would happen if your list is empty) because setting the length to less than zero is an error.
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
if (sb.Length > 0)
sb.Length -= 1;
return sb.ToString();
}
A: I wrote a small class for this in case someone else finds it useful...
public class clsCSVBuilder
{
protected int _CurrentIndex = -1;
protected List<string> _Headers = new List<string>();
protected List<List<string>> _Records = new List<List<string>>();
protected const string SEPERATOR = ",";
public clsCSVBuilder() { }
public void CreateRow()
{
_Records.Add(new List<string>());
_CurrentIndex++;
}
protected string _EscapeString(string str)
{
return string.Format("\"{0}\"", str.Replace("\"", "\"\"")
.Replace("\r\n", " ")
.Replace("\n", " ")
.Replace("\r", " "));
}
protected void _AddRawString(string item)
{
_Records[_CurrentIndex].Add(item);
}
public void AddHeader(string name)
{
_Headers.Add(_EscapeString(name));
}
public void AddRowItem(string item)
{
_AddRawString(_EscapeString(item));
}
public void AddRowItem(int item)
{
_AddRawString(item.ToString());
}
public void AddRowItem(double item)
{
_AddRawString(item.ToString());
}
public void AddRowItem(DateTime date)
{
AddRowItem(date.ToShortDateString());
}
public static string GenerateTempCSVPath()
{
return Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString().ToLower().Replace("-", "") + ".csv");
}
protected string _GenerateCSV()
{
StringBuilder sb = new StringBuilder();
if (_Headers.Count > 0)
{
sb.AppendLine(string.Join(SEPERATOR, _Headers.ToArray()));
}
foreach (List<string> row in _Records)
{
sb.AppendLine(string.Join(SEPERATOR, row.ToArray()));
}
return sb.ToString();
}
public void SaveAs(string path)
{
using (StreamWriter sw = new StreamWriter(path))
{
sw.Write(_GenerateCSV());
}
}
}
A: How about tracking whether you are on the first item, and only adding a comma before the item if it is not the first one?
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
bool isFirst = true;
foreach (Contact c in contactList) {
if (!isFirst) {
// Only add comma before item if it is not the first item
sb.Append(",");
} else {
isFirst = false;
}
sb.Append(c.Name);
}
return sb.ToString();
}
A: How about some trimming?
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
return sb.ToString().Trim(',');
}
A: I use CSVHelper - it's a great open-source library that lets you generate compliant CSV streams one element at a time or custom-map your classes:
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
using (StringWriter stringWriter = new StringWriter(sb))
{
using (var csvWriter = new CsvHelper.CsvWriter(stringWriter))
{
csvWriter.Configuration.HasHeaderRecord = false;
foreach (Contact c in contactList)
{
csvWriter.WriteField(c.Name);
}
}
}
return sb.ToString();
}
Or, if you map your classes, then something like this: csvWriter.WriteRecords<ContactList>(contactList);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Can I configure Visual Studio NOT to change StartUp Project every time I open a file from one of the projects? Let's say that there is a solution that contains two projects (Project1 and Project2).
Project1 is set as a StartUp Project (its name is displayed in a bold font). I double-click some file in Project2 to open it. The file opens, but something else happens too - Project2 gets set as a StartUp Project.
I tried to find an option in configuration to change it, but I found none.
Can this feature (though it's more like a bug to me) be disabled?
A: Check your Visual Studio options for the following check box:
Projects and Solutions - Build and Run - For new solutions use the currently selected project as the startup project.
Uncheck that and see if the behavior changes.
A: The way to select a startup project is described in Sara Ford's blog "Visual Studio Tip of the Day" (highly recommended). She has a post there about setting up StartUp projects. Essentially there are 2 ways, the easiest one being right-clicking on the desired project, and choosing "Set As StartUp Project". That prevents other projects from becoming the StartUp project, even if you click on one their files.
A: I ran into a bug where the project in bold would not be the startup project despite it being selected in the solution properties as the "single startup project".
One work around for this bug was un-checking deploy, from the Configuration Manager, for the non-bold project that was being incorrectly used as the startup project. The configuration manager is found by right clicking the solution in the Solution Explorer.
A: If ReSharper is in use:
In my case I used ReSharper, and I can change this with Ctrl + Shift + Alt + R.
There you can set, delete, and manage the start settings. Before that I had only set the startup project via Visual Studio, and ReSharper was installed too!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Domain Specific Language resources I was just listening to some older .Net Rocks! episodes, and I found #329 on DSLs to be interesting. My problem is that I can't find any good online resources for people trying to learn this technology. I get the basics of creating new designers, but the MS docs on the T4 engine used by the DSL tools and then how to integrate the templates with the DSL models are lacking.
Does anyone know of some good introductory resources for the MS DSL tools?
A: The architects of the DSL Tools team wrote a book, Domain-Specific Development with Visual Studio DSL Tools. The book's website has some other links and resources.
A: If you are interested in DSLs, Jeff Moser has written some great articles about them (and the 'meta' frame of mind you need) here, here, and here on his blog.
A: Martin Fowler is currently writing a book on DSL. Here is a presentation he gave on the topic.
A: For me the best source of T4 examples was this blog.
A: Since you're looking to the MS-world, you may want to look at F#. It offers the ability to extend its syntax to write domain specific languages (see this link, page 16 for sample code).
A: I found the following page with a number of webcasts very useful:
http://msdn.microsoft.com/en-us/vsx/cc677256.aspx
A: A fantastic option for DSLs is Boo. I've been using it for things like setting up my IoC container, defining routes, and validation rules. Ayende Rahien is writing a fantastic book on the subject for Manning called Building Domain Specific Languages in Boo
A: *
*PODCAST: DSL Related Discussions in SE-Radio
A: Martin Fowler is writing a book on DSLs. You can read his work so far here http://www.martinfowler.com/dslwip/
I also went to a good presentation by Jay Fields (His slides are here).
A: I would recommend http://msdn.microsoft.com/en-us/vsx/cc677256.aspx for DSL Tools as a starter.
Also, check out the concept of MDSD (Model Driven Development).
An expert on that topic (and DSL's) is Markus Voelter: http://www.voelter.de/
I believe there are so many similarities between MDSD, Software Product Lines and DSLs in general that this 'new' way of doing things needs to clean up its concepts.
That's one of the reasons why it's hard to find good information about the topic.
On another note, acm.org has an extensive digital library of research articles, articles from various conferences (such as OOPSLA), where you can find much information about DSL's, language designs, SPL, MDSD, and so forth.
A: Here's a few more websites that I find useful:
*
*Advanced code generation patterns
*DslFactoryUtilities
*DslTools
A: For the Visual Studio DSL Tools (tooling to add graphical DSLs to Visual Studio), there's an introductory hands on lab here: http://code.msdn.microsoft.com/Visualization-and-Modeling-313535db
The homepage for the tooling with links to other samples is here: http://archive.msdn.microsoft.com/vsvmsdk
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to know when to send a 304 Not Modified response I'm writing a resource handling method where I control access to various files, and I'd like to be able to make use of the browser's cache. My question is two-fold:
*
*Which are the definitive HTTP headers that I need to check in order to know for sure whether I should send a 304 response, and what am I looking for when I do check them?
*Additionally, are there any headers that I need to send when I initially send the file (like 'Last-Modified') as a 200 response?
Some pseudo-code would probably be the most useful answer.
What about the cache-control header? Can the various possible values of that affect what you send to the client (namely max-age) or should only if-modified-since be obeyed?
A: Here's how I implemented it. The code has been working for a bit more than a year and with multiple browsers, so I think it's pretty reliable. This is based on RFC 2616 and by observing what and when the various browsers were sending.
Here's the pseudocode:
server_etag = gen_etag_for_this_file(myfile)
etag_from_browser = get_header("Etag")
if etag_from_browser does not exist:
etag_from_browser = get_header("If-None-Match")
if the browser has quoted the etag:
strip the quotes (e.g. "foo" --> foo)
set server_etag into http header
if etag_from_browser matches server_etag
send 304 return code to browser
Here's a snippet of my server logic that handles this.
/* the client should set either Etag or If-None-Match */
/* some clients quote the parm, strip quotes if so */
mketag(etag, &sb);
etagin = apr_table_get(r->headers_in, "Etag");
if (etagin == NULL)
etagin = apr_table_get(r->headers_in, "If-None-Match");
if (etag != NULL && etag[0] == '"') {
int sl;
sl = strlen(etag);
memmove(etag, etag+1, sl+1);
etag[sl-2] = 0;
logit(2,"etag=:%s:",etag);
}
...
apr_table_add(r->headers_out, "ETag", etag);
...
if (etagin != NULL && strcmp(etagin, etag) == 0) {
/* if the etag matches, we return a 304 */
rc = HTTP_NOT_MODIFIED;
}
If you want some help with etag generation post another question and I'll dig out some code that does that as well. HTH!
A: A 304 Not Modified response can result from a GET or HEAD request with either an If-Modified-Since ("IMS") or an If-None-Match ("INM") header.
In order to decide what to do when you receive these headers, imagine that you are handling the GET request without these conditional headers. Determine what the values of your ETag and Last-Modified headers would be in that response and use them to make the decision. Hopefully you have built your system such that determining this is less costly than constructing the complete response.
If there is an INM and the value of that header is the same as the value you would place in the ETag, then respond with 304.
If there is an IMS and the date value in that header is later than the one you would place in the Last-Modified, then respond with 304.
Else, proceed as though the request did not contain those headers.
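As a rough C# sketch of that decision (purely illustrative; the etag/lastModified values and the raw header strings come from wherever your handler gets them):
// Decide whether a conditional GET can be answered with 304 Not Modified.
// etag / lastModified are the values you would send on a normal 200 response;
// ifNoneMatch / ifModifiedSince are the raw request header values (or null).
static bool IsNotModified(string etag, DateTime lastModified,
                          string ifNoneMatch, string ifModifiedSince)
{
    if (ifNoneMatch != null)
    {
        // Compare the client's ETag with ours, ignoring surrounding quotes
        return ifNoneMatch.Trim('"') == etag.Trim('"');
    }

    DateTime since = DateTime.MinValue;
    if (ifModifiedSince != null && DateTime.TryParse(ifModifiedSince, out since))
    {
        // Not modified if our Last-Modified is no later than the client's date
        return lastModified <= since;
    }

    return false; // no conditional headers, so build the full 200 response
}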
For a least-effort approach to part 2 of your question, figure out which of the (Expires, ETag, and Last-Modified) headers you can easily and correctly produce in your Web application.
For suggested reading material:
http://www.w3.org/Protocols/rfc2616/rfc2616.html
http://www.mnot.net/cache_docs/
A: You should send a 304 if the client has explicitly stated that it may already have the page in its cache. This is called a conditional GET, which should include the if-modified-since header in the request.
Basically, this request header contains a date from which the client claims to have a cached copy. You should check if content has changed after this date and send a 304 if it hasn't.
See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.25 for the related section in the RFC.
A: We are also handling cached, but secured, resources. If you send / generate an ETAg header (which RFC 2616 section 13.3 recommends you SHOULD), then the client MUST use it in a conditional request (typically in an If-None-Match - HTTP_IF_NONE_MATCH - header). If you send a Last-Modified header (again you SHOULD), then you should check the If-Modified-Since - HTTP_IF_MODIFIED_SINCE - header. If you send both, then the client SHOULD send both, but it MUST send the ETag. Also note that validtion is just defined as checking the conditional headers for strict equality against the ones you would send out. Also, only a strong validator (such as an ETag) will be used for ranged requests (where only part of a resource is requested).
In practice, since the resources we are protecting are fairly static, and a one second lag time is acceptable, we are doing the following:
*
* Check to see if the user is authorized to access the requested resource
If they are not, redirect them or send a 4xx response as appropriate. We will generate 404 responses to requests that look like hack attempts or blatant tries to perform a security end run.
* Compare the If-Modified-Since header to the Last-Modified header we would send (see below) for strict equality
If they match, send a 304 Not Modified response and exit page processing
* Create a Last-Modified header using the modification time of the requested resource
Look up the HTTP Date format in RFC 2616
* Send out the header and resource content along with an appropriate Content-Type
We decided to eschew the ETag header since it is overkill for our purposes. I suppose we could also just use the date time stamp as an ETag. If we move to a true ETag system, we would probably store computed hashes for the resources and use those as ETags.
If your resources are dynamically generated, from say database content, then ETags may be better for your needs, since they are just text to be populated as you see fit.
A: regarding cache-control:
You shouldn't have to worry about the cache-control when serving out, other than setting it to a reasonable value. It's basically telling the browser and other downstream entities (such as a proxy) the maximum time that should elapse before timing out the cache.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: MAPI and managed code experiences? Using MAPI functions from within managed code is officially unsupported. Apparently, MAPI uses its own memory management and it crashes and burns within managed code (see here and here)
All I want to do is launch the default e-mail client with subject, body, AND one or more attachments.
So I've been looking into MAPISendDocuments and it seems to work. But I haven't been able to gather courage to actually use the function in production code.
Has anybody used this function a lot? Do you have any horror stories?
PS. No, I won't shellExecute Outlook.exe with command line arguments for attachments.
PPS. Attachment support is a requirement , so Mailto: solutions do not cut it for me.
A: Have a separate helper EXE that takes command-line params (or pipe to its StandardInput) that does what is required and call that from your main app. This keeps the MAPI stuff outside of your main app's process space. OK, you're still mixing MAPI and .NET but in a very short-lived process. The assumption is that MAPI and the CLR start causing issues with longer-running processes.
We use Dmitry Streblechenko's superb Redemption Data Objects library which allows us to write such "shim" code in JScript and invoke that, which keeps the CLR and MAPI worlds in separate processes, but in a supported fashion.
@Chris Fournier re. writing an unmanaged DLL. This won't work because the issue is mixing MAPI and managed code in the same process.
A: Calling process.Start on the Mailto: protocol (as shown below) will give you basic functionality but not attachments.
Process.Start("mailto:name@domain.com?subject=TestCode&Body=Test Text");
You can take this approach with attachment paths, but that only works with some old versions of Outlook, such as Outlook 98. I assume this is due to the potential security risk.
If anyone does use outlook.exe, it will give security warnings under Outlook 2003 (and 2007, dependent on settings).
A: MAPISendDocuments is deprecated and might be removed.
You should use MAPISendMail instead.
See Simple MAPI
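For reference, a minimal Simple MAPI interop sketch is shown below. Treat it as an illustration to check against MAPI.h rather than production code; attachments additionally need MapiFileDesc structures marshalled into unmanaged memory, which is exactly the fiddly part these answers warn about:

using System;
using System.Runtime.InteropServices;

// Field layout mirrors the MapiMessage structure from MAPI.h.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public class MapiMessage
{
    public int    reserved;
    public string subject;
    public string noteText;
    public string messageType;
    public string dateReceived;
    public string conversationID;
    public int    flags;
    public IntPtr originator;   // lpMapiRecipDesc
    public int    recipCount;
    public IntPtr recips;       // array of MapiRecipDesc
    public int    fileCount;
    public IntPtr files;        // array of MapiFileDesc (needed for attachments)
}

public static class SimpleMapi
{
    private const int MAPI_LOGON_UI = 0x00000001;
    private const int MAPI_DIALOG   = 0x00000008;

    [DllImport("MAPI32.DLL", CharSet = CharSet.Ansi)]
    private static extern int MAPISendMail(IntPtr session, IntPtr uiParam,
                                           MapiMessage message, int flags, int reserved);

    // Opens the default mail client's compose window; returns the Simple MAPI error code.
    public static int Compose(string subject, string body)
    {
        MapiMessage msg = new MapiMessage { subject = subject, noteText = body };
        return MAPISendMail(IntPtr.Zero, IntPtr.Zero, msg, MAPI_LOGON_UI | MAPI_DIALOG, 0);
    }
}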
A: You should be able to make an unmanaged DLL that performs the operations you want using MAPI, and then invoke that DLL from your managed code. I wouldn't write a straight MAPI wrapper, but something that performs all of the functionality you require of MAPI contained in that unmanaged DLL. That would probably be the safest way to use MAPI from managed code.
A: You could also use Outlook Redemption, which is supported from managed code; I'm not immediately sure if it has a simple MAPISendDocuments replacement, but Dmitry's helpful if you have questions.
As for "crashes and burns", here's another quote from an MS support guy, here
It's the sort of thing that'll mostly work. It'll work while you're writing it. Then it'll work while you're testing it. It'll work while your customer is evaluating it. Then as soon as the customer deploys it - BAM! That's when it'll decide to start having problems. And Microsoft ain't gonna help you with it, since we told you not to do it in the first place. :)
A: I have done this using the MAPISendMail function and several internal classes to wrap some of the other MAPI-related structures. As long as this is the only use, it is possible, although not trivial, to do safely, as it requires very close attention to the various unmanaged data types, memory allocation/deallocation, and the GC. While it still isn't supported, I am using this in production code (although it hasn't shipped yet).
When I asked Matt Stehle about this, the response I received was:
I really don't know of a much better way to do this, and any issues you ran into here would probably be reproducible in a supported scenario (i.e. VB6 or unmanaged C++). Just know that if you ever ran into a scenario where an issue was caused specifically by this function being called from .NET, we wouldn't have any other recommendation for you than to not use .NET.
Not exactly a blessing on using it, but also not saying there are any other options to actually do this from managed code.
A: The following code doesn't use MAPI as such, but it does open the "Compose Mail" window with arbitrary attachments.
(actually, it's entirely untested but I dug it up in an application that I believe to have worked)
// Requires a reference to the Outlook interop assembly (Microsoft.Office.Interop.Outlook).
using Outlook = Microsoft.Office.Interop.Outlook;
...
Outlook.Application outlook = new Outlook.Application();
Outlook.MailItem mail = (Outlook.MailItem) outlook.CreateItem(Outlook.OlItemType.olMailItem);
mail.BodyFormat = Outlook.OlBodyFormat.olFormatHTML;   // body below is HTML
mail.HTMLBody = "stuff";
mail.Subject = "more stuff";
string file = @"C:\path\to\attachment.txt";   // Attachments.Add wants a path, not the file's bytes
mail.Attachments.Add(file, Outlook.OlAttachmentType.olByValue, 1, file);
mail.Display(false);
A: For someone experienced with MAPI, it would take them less time to crank out the code to do exactly what you want from unmanaged code (read: plain C++) than typing this post and reading the response (no offense).
You're lucky the functionality you need is limited. All you need is a simple C++ utility that takes the params you need on the command line and issues the right MAPI calls. Then you call this utility from your managed code just as you'd execute any other process.
HTH
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is there any trick that allows you to use Management Studio's (ver. 2008) IntelliSense feature with earlier versions of SQL Server? New version of Management Studio (i.e. the one that ships with SQL Server 2008) finally has a Transact-SQL IntelliSense feature. However, out-of-the-box it only works with SQL Server 2008 instances.
Is there some workaround for this?
A: There's no known trick 'in the wild' for getting around this, other than using CTP-6 of SQL Server 2008 (in favour of the RTM).
The reasons for removing backward compatibility (and a lot more discussion besides) are provided in the relevant feedback item on Microsoft Connect.
Edit: sorry, I don't know where this CTP is available, if at all.
A: Has anyone tried either patching SSMS not to check the version (perhaps try looking at the binary differences between CTP 6 and RTM?), or patching SS 2005 to pretend to be 2008?
Unclean, I know, but I don't see any other way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Using Xming X Window Server over a VPN I have the Xming X Window Server installed on a laptop running Windows XP to connect to some UNIX development servers.
It works fine when I connect directly to the company network in the office. However, it does not work when I connect to the network remotely over a VPN.
When I start Xming while connected remotely, none of my terminal windows are displayed.
I think it may have something to do with the DISPLAY environment variable not being set correctly to the IP address of the laptop when it is connected.
I've noticed that when I do an ipconfig whilst connected remotely that my laptop has two IP addresses, the one assigned to it from the company network and the local IP address I've set up for it on my "local network" from my modem/router.
Are there some configuration changes I need to make in Xming to support its use through the VPN?
A: Chances are it's either X authentication, the X server binding to an interface, or your DISPLAY variable. I don't use Xming myself but there are some general phenomenon to check for. One test you can do to manually verify the DISPLAY variable is correct is:
*
*Start your VPN. Run ipconfig to be sure you have the two IP addresses you mentioned (your local IP and your VPN IP).
*Start Xming. Run 'netstat -n' to see how it's binding to the interface. You should see something that either says localIP:6000 or VPNIP:6000. It may not be 6000 but chances are it will be something like that. If there's no VPNIP:6000 it may be binding only to your localIP or even 127.0.0.1. That will probably not work over the VPN. Check if there are some Xming settings to make it bind to other or all interfaces.
*If you see VPNIP:6000 or something similar, take note of what it says and remote shell into your UNIX host (hopefully something like ssh, if not whatever you have to get a text terminal).
*On the UNIX terminal type 'echo $DISPLAY'. If there is nothing displayed try 'export DISPLAY=VPNIP:0.0' where VPNIP is your VPN IP address and 0.0 is the port you saw in step 3 minus 6000 with .0 at the end (i.e. 6000 = 0.0, 6010 = 10.0).
*On the UNIX host run something like 'xclock' or 'xterm' to see if it runs. The error message should be informative. It will tell you that it either couldn't connect to the host (a connectivity problem) or authentication failed (you'll need to coordinate Xauth on your host and local machine or Xhosts on your local machine).
Opening Xhosts (with + for all hosts or something similar) isn't too bad if you have a locally protected network and you're going over a VPN. Hopefully this will get you started tracking down the problem. Another option that is often useful as it works over a VPN or simple ssh connectivity is ssh tunneling or X11 forwarding over ssh. This simulates connectivity to the X server on your local box by redirecting a port on your UNIX host to the local port on your X server box. Your display will typically be something like localhost:10.0 for the local 6010 port.
X can be ornery to set up but it usually works great once you get the hang of it.
A: Thanks for the help @Stephen and @Greg Castle, using it I've managed to resolve my problem.
To provide a basic guide for others (from scratch):
Using Xwindows on a Windows PC to connect to a UNIX server over a VPN
What you need to start with:
*
*The Putty Telnet/SSH client, download putty.exe (for free) from:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
*The Xming X server, download Xming (for free) from:
http://sourceforge.net/project/showfiles.php?group_id=156984
What to do:
*
*Install both of the above on your Windows PC
*From the Windows start menu select: Programs -> Xming -> Xming
*Run the Putty.exe program in the location you downloaded it to
*In the PuTTY configuration screen do the following:
*
*Set the IP address to be the IP address of your UNIX server
*Select the SSH Protocol radio-button
*Click the SSH : Tunnels category in the left hand pane of the configuration screen
*Click the Enable X11 forwarding check-box
*Click the Open button
*Logon as usual to your UNIX server
*Check the directory containing the X windows utilities are in your path, e.g. /usr/X/bin on Solaris
*Run your X Windows commands in your putty window and they will spawn new windows on your desktop
A: I got Xming and PuTTY working with Cisco VPN by replacing the PuTTY configuration in Connection > SSH > X11 > X display location, localhost:0.0, with VPNIP:0.0. VPNIP can be seen in the VPN statistics client address information by left-clicking on the VPN client lock icon and choose Statistics....
I didn't muck with the DISPLAY environment variable on the remote host. But, like others, I modified sshd_config on the remote host, adding these lines:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
AddressFamily inet
A: I got the same issue with Xming and PuTTY on a Windows 10 machine and found the solution here. I overcame the problem just by adding tunnels to the session in PuTTY. But first you need to check:
*
*sshd_config under /etc/ssh (in rhel7).
*Enable X11 forwarding at left navigation pane Connections > SSH > X11
*iptables under /etc/sysconfig/ (in RHEL 7). If ports are blocked and you have permission, open the ports from 6000. I added the line below before the first reject line to open ports 6000 to 6003. It may need to be more specific in your case.
-A INPUT -m state --state NEW -m tcp -p tcp -m multiport --dports 5901:5903,6000:6003 -j ACCEPT
Then;
*
*Go to Connections > SSH > Tunnels in PuTTY and add a tunnel with Source Port=6000, Destination=127.0.0.1:6000 and check Remote radio button. Then click the Add button.
*After your SSH connection is established, set your DISPLAY variable manually with the command below:
export DISPLAY=127.0.0.1:0.0
More Information;
If you set the DISPLAY variable to 127.0.0.1:1.0, it will communicate over port 6001. In that case, you need to add another tunnel for port 6001.
A: I had nothing but problems with Xming. When I could get it to work it was extremely slow (this is over a VPN). IMO X is not designed to run over slow connections; it's too chatty. And by slow connection I mean anything less than a LAN connection.
My solution was to use x11vnc. It lets you access your existing X11 session through VNC. I just ssh into my box through the VPN and launch:
$ x11vnc -display :0
That way I can access everything I had opened during the day. Then when I don't I just exit (Ctrl-C) in the terminal to close x11vnc.
A: I haven't had the exact problem, but I think you need to look at xhost and make sure that the VPN remote is allowed to send data to the X server.
This link might help:
http://www.straightrunning.com/XmingNotes/trouble.php
A: You may have better luck doing X11 Forwarding through SSH rather than fiddling with your DISPLAY variable directly. X11 Forwarding with SSH is secure and uses the existing SSH connection to tunnel, so working through a VPN should be no problem.
Fortunately this is fairly straightforward with Xming. If you open your connection from within Xming (e.g. the plink option) I believe it sets up X11 forwarding by default. If you connect using another SSH client (e.g. PuTTY) then you simply need to enable X11 forwarding (e.g. 'ssh -X user@host'). In PuTTY the option is under Connection -> SSH -> X11 -> click on 'Enable X11 Forwarding'.
Make sure Xming is running in the background on your laptop and do the standard X test, 'xclock'. If you get a message like 'X connection to localhost:19.0 broken (explicit kill or server shutdown).' then Xming is most likely not running.
Also, make sure you're not explicitly setting your DISPLAY variable in any startup scripts; SSH will set up an alias (something like localhost:10 or in the example above localhost:19) for the X11 tunnel and automatically set DISPLAY to that value. Overwriting DISPLAY will obviously mean you will no longer be pointing to the correct X11 tunnel. The flip side of this is that other terminals that don't have SSH X11 Forwarding set can use the same DISPLAY value and take advantage of the tunnel.
I tend to prefer the PuTTY option but several of my coworkers use plink from within Xming.
A: putty + XMing - I had to set the DISPLAY environment variable manually to get things running (alongside with checking "Enable X11 forwarding" in putty - Connection/SSH/X11)
export DISPLAY=0:10.0
(it was set to "localhost:10.0", which did not work)
A: You have to add the Linux machine's DNS name(s) and IP address to the C:\Program Files\xming\X0.hosts file. File should contain:
LinuxBox.mydomain.com
LinuxBox
192.168.1.25
This is the right answer: https://www.slackwiki.com/X_Windows:_Remote_X_to_Windows_with_Xming
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: SQL Server 2005 and 2008 on same developer machine? Has anyone tried installing SQL Server 2008 Developer on a machine that already has 2005 Developer installed?
I am unsure if I should do this, and I need to keep 2005 on this machine for the foreseeable future in order to test our application easily. Since I sometimes need to take backup files of databases and make available for other people in the company I cannot just replace 2005 with 2008 as I suspect (but do not know) that the databases aren't 100% backwards compatible.
What kind of issues would arise? Do I need to install the new version with an instance name, will that work? Can I use a different port number to distinguish them?
I found this entry on technet: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3496209&SiteID=17
It doesn't say more than just yes you can do this and I kinda suspected that this was doable anyway, but I need to know if there are anything I need to know before I start installing.
Anyone?
A: If you have Visual Studio 2008 installed you will get a validation error and you cannot install SQL server 2008 until you install Visual Studio 2008 SP1. If you don't have Visual Studio 2008 installed it should not be a problem. So if you do have Visual Studio 2008 wait till August 11th since that is the day that Visual Studio 2008 SP1 will ship
A: Yes this is possible. You will have to create a named instance not used by another version of SQL Server as per the previous answer and version 3.5 of .Net installed. Works great!!
Here the list of prerequisites:
*
*.NET Framework 3.5 SP1
*Windows Installer 4.5
*Windows PowerShell 1.0
A: I believe that this is perfectly possible. I am currently running both SQL Server 2000 and SQL Server 2005 on my development server while I transfer applications over.
The only thing you will have to do is create a new instance which isn't already being used by SQL Server 2005.
As with anything new, there will probably be some bugs, however, it should generally "just work".
A: My experience is that after having SQL Server 2005 and 2008 on the same machine, SSIS 2005 does not work properly... especially with the Script Task, Data Flow, and Sequence Container.
A: You could run just SQL 2008 as the single instance and then attach/create databases with a compatibility level of 2005? The problem with that is that it's a theory. I'm not 100% positive that if you create a database on 2008, with a compatibility level of 2005, and then detach it, a SQL 2005 instance is capable of attaching it.
I think it's a good enough chance to try, though. But I agree with the previous answers; the multiple-instance option will work fine.
A: Unfortunately, it seems SQL Server 2008 Client Tools requires Visual Studio 2008 SP1, and I'm loath to install a beta of this on my main development machine.
I'll wait until SP1 is RTM before I move on.
Edit: Yes, I do have Visual Studio 2008 on this machine, but I'd like to avoid beta installations of debugger applications. They tend to dig themselves too deep in for my taste.
A: I have tried it, with a negative result. The 2k8 installation breaks with a mysterious error message. The installation log looks fine, but it will not work. After this, the 2k5 installation was buggy too.
The 2k8 installation was left half-finished, so it already appears in Control Panel / Add or Remove Programs, but uninstallation is not possible.
So my conclusion: don't do it on a production server / workstation. If you need both versions, use a virtual machine instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: HTTP: Generating ETag Header How do I generate an ETag HTTP header for a resource file?
A: From http://developer.yahoo.com/performance/rules.html#etags:
By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.
...
If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether.
A: How to generate the default apache etag in bash
for file in *; do printf "%x-%x-%x\t$file\n" `stat -c%i $file` `stat -c%s $file` $((`stat -c%Y $file`*1000000)) ; done
Even when I was looking for something exactly like the ETag (the browser asks for a file only if it has changed on the server), it never worked, and I ended up using a GET trick (adding a timestamp as a GET argument to the JS files).
A: As long as it changes whenever the resource representation changes, how you produce it is completely up to you.
You should try to produce it in a way that additionally:
*
*doesn't require you to re-compute it on each conditional GET, and
*doesn't change if the resource content hasn't changed
Using hashes of content can cause you to fail at #1 if you don't store the computed hashes along with the files.
Using inode numbers can cause you to fail at #2 if you rearrange your filesystem or you serve content from multiple servers.
One mechanism that can work is to use something entirely content dependent such as a SHA-1 hash or a version string, computed and stored once whenever your resource content changes.
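A small C# sketch of that idea follows: hash the bytes once when the resource changes, store the result, and hand it back as the ETag on subsequent requests. It assumes a file-backed resource; the class and method names are made up:

using System;
using System.IO;
using System.Security.Cryptography;

public static class ETagHelper
{
    // Strong ETag derived from the file's content; cache the result and only
    // recompute it when the file actually changes.
    public static string ComputeETag(string path)
    {
        using (SHA1 sha1 = SHA1.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            byte[] hash = sha1.ComputeHash(stream);
            return "\"" + BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant() + "\"";
        }
    }
}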
A: An etag is an arbitrary string that the server sends to the client that the client will send back to the server the next time the file is requested.
The etag should be computable on the server based on the file. Sort of like a checksum, but you might not want to checksum every file sending it out.
server client
<------------- request file foo
file foo etag: "xyz" -------->
<------------- request file foo
etag: "xyz" (what the server just sent)
(the etag is the same, so the server can send a 304)
I built up a string in the format "datestamp-file size-file inode number". So, if a file is changed on the server after it has been served out to the client, the newly regenerated etag won't match if the client re-requests it.
char *mketag(char *s, struct stat *sb)
{
    /* datestamp - file size - inode; cast to long to match the format specifiers */
    sprintf(s, "%ld-%ld-%ld", (long) sb->st_mtime, (long) sb->st_size, (long) sb->st_ino);
    return s;
}
A: I've been using Adler-32 as an HTML link shortener. I'm not sure whether this is a good idea, but so far I haven't noticed any duplicates. It may work as an ETag generator. And it should be faster than hashing with a cryptographic scheme like SHA, but I haven't verified this. The code I use is:
shortlink = str(hex(zlib.adler32(link)+(2**32-1)/2))[2:-1]
A: I would recommend not using them and going for last-modified headers instead.
Askapache has a useful article on this. (as they do pretty much everything it seems!)
http://www.askapache.com/htaccess/apache-speed-etags.html
A: The code example from Mark Harrison is similar to what is used in Apache 2.2. But such an algorithm causes problems for load balancing when you have two servers with the same file but different inodes. That's why in Apache 2.4 the developers simplified the ETag scheme and removed the inode part. Also, to make the ETag shorter, it is usually encoded in hex:
#include <stdio.h>
#include <inttypes.h>
#include <sys/types.h>
#include <sys/stat.h>

char *mketag(char *s, struct stat *sb)
{
    sprintf(s, "\"%" PRIx64 "-%" PRIx64 "\"", (uint64_t) sb->st_mtime, (uint64_t) sb->st_size);
    return s;
}
or for Java
etag = '"' + Long.toHexString(lastModified) + '-' +
Long.toHexString(contentLength) + '"';
for C#
// Generate ETag from file's size and last modification time as unix timestamp in seconds from 1970
public static string MakeEtag(long lastMod, long size)
{
string etag = '"' + lastMod.ToString("x") + '-' + size.ToString("x") + '"';
return etag;
}
public static void Main(string[] args)
{
long lastMod = 1578315296;
long size = 1047;
string etag = MakeEtag(lastMod, size);
Console.WriteLine("ETag: " + etag);
//=> ETag: "5e132e20-417"
}
The function returns an ETag compatible with Nginx. See the comparison of ETags from different servers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Simple MOLAP solution To analyze lots of text logs I did some hackery that looks like this:
*
*Locally import logs into Access
*Reprocess a cube linked to the previous MDB in Analysis Services 2000 (yes, it is 2K)
*Use Excel to visualize the cube (it is not big - up to millions of raw entries)
My hackery is a success and more people are demanding access to my tool. As you can see, I need more automation and easier deployment.
Do you know of some tools/libraries that would give me the same thing but with easier deployment?
Kind of an embedded OLAP service?
Edit: I heard of Mondrian, but we don't do much with Java. Have you seen something similar done for .NET/Win32? Commercial is also OK.
A: You could also try the other free open source OLAP server, PALO from Jedox (www.palo.net)
A: I'm not completely familiar with the implications of Step 2 in your approach above, but if you're looking for a more robust OLAP solution, it might be worth your while to check out Mondrian, the open-source OLAP / Analysis services module of Pentaho.
A: I don't think that Mondrian is better than SSAS, but I do know that it's free and you can distribute it independently. It uses XMLA, and its cube definition XML file is almost the same as SSAS's.
A: SQL 2K is fine. Stick the cube on a server everyone can see on the network, and tell them to use Excel to connect to it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Http Auth in a Firefox 3 bookmarklet I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account.
I tested it from the command line like:
wget -O - --no-check-certificate \
"https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test"
This works great.
I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with:
javascript:void(
open('https://seconduser:password@api.del.icio.us/v1/posts/add?url='
+encodeURIComponent(location.href)
+'&description='+encodeURIComponent(document.title),
'delicious','toolbar=no,width=500,height=250'
)
);
But all that happens is that I get this from del.icio.us:
<?xml version="1.0" standalone="yes"?>
<result code="access denied" />
<!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 -->
If I then go to the address bar and press enter, it changes to:
<?xml version='1.0' standalone='yes'?>
<result code="done" />
<!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 -->
Any ideas how to get it to work directly from the bookmarks?
A: Can you sniff the traffic to find out what's actually being sent? Is auth data being sent but incorrect or presented in a form the server doesn't like, or is it never being sent by Firefox at all?
A: @travis Looks very nice! I will sure take a look into it. I can think of several places I can use that
I never got round to sniffing the traffic but found out that a PHP site on my own server with HTTP auth worked fine, so I figured it was something with del.icio.us. I then created a PHP page that does a wget of the del.icio.us API, and everything works fine :)
A: Does calling the method twice work?
Seems to me that your authentication is being approved after the content arrives, so then a second attempt now works because you have the correct cookies.
A: I'd recommend checking out the iMacros addon for Firefox. I use it to login to a local web server and after logging in, navigate directly to a certain page. The code I have looks like this, but it allows you to record your own macros:
VERSION BUILD=6000814 RECORDER=FX
TAB T=1
URL GOTO=http://10.20.2.4/login
TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:introduce ATTR=NAME:initials CONTENT=username-goes-here
SET !ENCRYPTION NO
TAG POS=1 TYPE=INPUT:PASSWORD FORM=NAME:introduce ATTR=NAME:password CONTENT=password-goes-here
TAG POS=1 TYPE=INPUT:SUBMIT FORM=NAME:introduce ATTR=NAME:Submit&&VALUE:Go
URL GOTO=http://10.20.2.4/timecard
I middle click on it and it opens a new tab and runs the macro taking me directly to the page I want, logged in with the account I specified.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What was the <xmp> tag used for? Does anyone remember the XMP tag?
What was it used for and why was it deprecated?
A: XMP and PRE differ. Content within PRE tags is formatted as follows:
*
*Content is shown with a fixed font,
*All whitespace is preserved, and
*Each line break begins a new line.
If you want to include special characters such as <, > and & within PRE tags, they must be escaped so that they are not subject to special interpretation by the browser.
In contrast, content within XMP tags does not need to be escaped.
The only character sequence that cannot be included within XMP tags is the XMP end tag (</XMP>).
XMP is still supported by the browsers I have tested. You can try it with xmp.html. View the source to see the tags.
A: <xmp> is used with strapdown.js for formatting markdown notation. The name strapdown combines the terms bootstrap and markdown.
<!DOCTYPE html>
<html>
<title>Example</title>
<xmp theme="united">
## Example
- note one
- note two
- note three
</xmp>
<script src="http://strapdownjs.com/v/0.2/strapdown.js"></script>
</html>
A: I still use the xmp tag for debugging var_dump(); in PHP. I just can't remember to use the pre tag for some reason.
I think it doesn't really matter because if you really want to output text, you should use textarea with the readonly attribute.
A: A quick Google search on W3C reveals that XMP was introduced for displaying preformatted text in HTML 3.2 and earlier. When W3C deprecated the XMP tag, it suggested using the PRE tag as a preferred alternative.
Update: http://www.w3.org/TR/REC-html32#xmp, http://www.w3.org/MarkUp/html-spec/html-spec_5.html#SEC5.5.2.1
A: See http://www.w3.org/Bugs/Public/show_bug.cgi?id=12235
For HTML5. it was, according to the HTML5 editor (comments 11 and 12), a very close call either way.
A: I used <textarea>, which puts the html code into a neat box and clearly defines the code as different from the text before or after.
<textarea><b>boldtext</b></textarea>
A: XMP does some things that PRE does not support. I still depend on XMP, there is no substitute.
A: Still works to show raw html - if you use it in script, break the start tag.
var stuff='<xmp'+'>this is shown as is<br/>hello</xmp>';
document.getElementById("x").innerHTML=stuff;
<div id="x"></div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
} |
Q: DataTable Loop Performance Comparison Which of the following has the best performance?
I have seen method two implemented in JavaScript with huge performance gains, however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1.
The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration; it can simply access the int c.
Method 1
for (int i = 0; i < DataTable.Rows.Count; i++) {
// Do Something
}
Method 2
for (int i = 0, c = DataTable.Rows.Count; i < c; i++) {
// Do Something
}
A: No, it can't do that, since there is no way to express that a value is constant over time.
For the compiler to be able to do that, the code returning the value would have to guarantee that the value is constant and won't change for the duration of the loop.
But, in this case, you're free to add new rows to the data table as part of your loop, and thus it's up to you to make that guarantee, in the way you have done it.
So in short, the compiler will not do that optimization if the end-index is anything other than a variable.
In the case of a variable, where the compiler can just look at the loop-code and see that this particular variable is not changed, it might do that and load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty.
Conclusion: If you know, or is willing to accept, that the end loop index is constant for the duration of the loop, place it into a variable.
Edit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property were a very expensive operation, you'd see a more noticeable difference.
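If you want to measure it yourself, here is a rough sketch (not a rigorous benchmark; as noted, the difference will likely be lost in the noise):

using System;
using System.Data;
using System.Diagnostics;

class LoopTiming
{
    static void Main()
    {
        DataTable table = BuildTable(200000);

        Stopwatch sw = Stopwatch.StartNew();
        int count1 = 0;
        for (int i = 0; i < table.Rows.Count; i++)          // Method 1: property read every pass
            count1++;
        sw.Stop();
        Console.WriteLine("Method 1: {0} ticks", sw.ElapsedTicks);

        sw = Stopwatch.StartNew();
        int count2 = 0;
        for (int i = 0, c = table.Rows.Count; i < c; i++)   // Method 2: count hoisted into a local
            count2++;
        sw.Stop();
        Console.WriteLine("Method 2: {0} ticks", sw.ElapsedTicks);
    }

    // Builds a throwaway table just to have something to loop over.
    static DataTable BuildTable(int rows)
    {
        DataTable t = new DataTable();
        t.Columns.Add("n", typeof(int));
        for (int i = 0; i < rows; i++)
            t.Rows.Add(i);
        return t;
    }
}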
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to set up Quality of Service? I'm talking about http://en.wikipedia.org/wiki/Quality_of_service. With streaming Stack Overflow podcasts and downloading the latest updates to Ubuntu, I would like to have QoS working so I can use Stack Overflow without my HTTP connections timing out or taking forever.
I'm using an iConnect 624 ADSL modem which has QoS built in, but I can't seem to get it to work. Is it even possible to control the downstream (i.e. from the ISP to your modem)?
A: I don't know if this will help you, but I've never been a fan of using the ISP-provided box directly. Personally I use a Linksys WRT54GL, with DD-WRT, in the DMZ behind my ISP-provided box.
DD-wrt has excellent QoS management.
Sorry I can't be more help with your existing hardware.
A: You just need the tc command to handle the QoS on Linux boxen. However I wouldn't expect that much from it because of the results I obtained and detailed here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Video Compression: What is discrete cosine transform? I've implemented an image/video transformation technique called discrete cosine transform. This technique is used in MPEG video encoding. I based my algorithm on the ideas presented at the following URL:
http://vsr.informatik.tu-chemnitz.de/~jan/MPEG/HTML/mpeg_tech.html
Now I can transform an 8x8 section of a black and white image, such as:
0140 0124 0124 0132 0130 0139 0102 0088
0140 0123 0126 0132 0134 0134 0088 0117
0143 0126 0126 0133 0134 0138 0081 0082
0148 0126 0128 0136 0137 0134 0079 0130
0147 0128 0126 0137 0138 0145 0132 0144
0147 0131 0123 0138 0137 0140 0145 0137
0142 0135 0122 0137 0140 0138 0143 0112
0140 0138 0125 0137 0140 0140 0148 0143
Into an image with all the important information at the top left. The transformed block looks like this:
1041 0039 -023 0044 0027 0000 0021 -019
-050 0044 -029 0000 0009 -014 0032 -010
0000 0000 0000 0000 -018 0010 -017 0000
0014 -019 0010 0000 0000 0016 -012 0000
0010 -010 0000 0000 0000 0000 0000 0000
-016 0021 -014 0010 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 -010 0013 -014 0010 0000 0000
Now, I need to know how can I take advantage of this transformation? I'd like to detect other 8x8 blocks in the same image ( or another image ) that represent a good match.
Also, what does this transformation give me? Why is the information stored in the top left of the converted image important?
A: I learned everything I know about the DCT from The Data Compression Book. In addition to being a great introduction to the field of data compression, it has a chapter near the end on lossy image compression which introduces JPEG and the DCT.
A: The concepts underlying these kinds of transformations are more easily seen by first looking at a one dimensional case. The image here shows a square wave along with several of the first terms of an infinite series. Looking at it, note that if the functions for the terms are added together, they begin to approximate the shape of the square wave. The more terms you add up, the better the approximation. But, to get from an approximation to the exact signal, you have to sum an infinite number of terms. The reason for this is that the square wave is discontinuous. If you think of a square wave as a function of time, it goes from -1 to 1 in zero time. To represent such a thing requires an infinite series. Take another look at the plot of the series terms. The first is red, the second yellow. Successive terms have more "up and down" transitions. These are from the increasing frequency of each term. Sticking with the square wave as a function of time, and each series term a function of frequency there are two equivalent representations: a function of time and a function of frequency (1/time).
In the real world, there are no square waves. Nothing happens in zero time. Audio signals, for example, occupy the range 20 Hz to 20 kHz, where Hz is 1/time. Such things can be represented with finite series.
For images, the mathematics are the same, but two things are different. First, it's two dimensional. Second, the notion of time makes no sense. In the 1D sense, the square wave is merely a function that gives some numerical value for an argument that we said was time. A (static) image is a function that gives a numerical value for every pair of row and column indices. In other words, the image is a function of a 2D space, that being a rectangular region. A function like that can be represented in terms of its spatial frequency. To understand what spatial frequency is, consider an 8-bit grey level image and a pair of adjacent pixels. The most abrupt transition that can occur in the image is going from 0 (say black) to 255 (say white) over the distance of 1 pixel. This corresponds directly with the highest frequency (last) term of a series representation.
A two dimensional Fourier (or Cosine) transformation of the image results in an array of values the same size as the image, representing the same information not as a function of space, but as a function of 1/space. The information is ordered from lowest to highest frequency along the diagonal from the origin to the highest row and column indices. An example is here.
For image compression, you can transform an image, discard some number of higher frequency terms and inverse transform the remaining ones back to an image, which has less detail than the original. Although it transforms back to an image of the same size (with the removed terms replaced by zero), in the frequency domain, it occupies less space.
Another way to look at it is reducing an image to a smaller size. If, for example, you try to reduce the size of an image by throwing away three of every four pixels in a row, and three of every four rows, you'll have an array 1/16 the size, but the image will look terrible. In most cases, this is accomplished with a 2D interpolator, which produces new pixels by averaging rectangular groups of the larger image's pixels. In so doing, the interpolation has an effect similar to throwing away series terms in the frequency domain, only it's much faster to compute.
To do more things, I'm going to refer to a Fourier transformation as an example. Any good discussion of the topic will illustrate how the Fourier and Cosine transformation are related. The Fourier transformation of an image can't be viewed directly as such, because it's made of complex numbers. It's already segregated into two kinds of information, the Real and Imaginary parts of the numbers. Typically, you'll see images or plots of these. But it's more meaningful (usually) to separate the complex numbers into their magnitude and phase angle. This is simply taking a complex number on the complex plane and switching to polar coordinates.
For the audio signal, think of the combined sine and cosine functions taking an additional quantity in their arguments to shift the function back and forth (as a part of the signal representation). For an image, the phase information describes how each term of the series is shifted with respect to the other terms in frequency space. In images, edges are (hopefully) so distinct that they are well characterized by the lowest frequency terms in the frequency domain. This happens not because they are abrupt transitions, but because they have e.g. a lot of black area adjacent to a lot of lighter area. Consider a one dimensional slice of an edge. The grey level is zero, then transitions up and stays there. Visualize the sine wave that would be the first approximation term, where it crosses the signal transition's midpoint at sin(0). The phase angle of this term corresponds to a displacement in the image space. A great illustration of this is available here. If you are trying to find shapes and can make a reference shape, this is one way to recognize them.
A: The result of a DCT is a transformation of the original source into the frequency domain. The top left entry stores the "amplitude" the "base" frequency and frequency increases both along the horizontal and vertical axes. The outcome of the DCT is usually a collection of amplitudes at the more usual lower frequencies (the top left quadrant) and less entries at the higher frequencies. As lassevk mentioned, it is usual to just zero out these higher frequencies as they typically constitute very minor parts of the source. However, this does result in loss of information. To complete the compression it is usual to use a lossless compression over the DCT'd source. This is where the compression comes in as all those runs of zeros get packed down to almost nothing.
One possible advantage of using the DCT to find similar regions is that you can do a first-pass match on low frequency values (top-left corner). This reduces the number of values you need to match against. If you find matches on the low frequency values, you can move on to comparing the higher frequencies.
Hope this helps
A: If I remember correctly, this matrix allows you to save the data to a file with compression.
If you read further down, you'll find the zig-zag pattern of data to read from that final matrix. The most important data are in the top left corner, and least important in the bottom right corner. As such, if you stop writing at some point and just consider the rest as 0's, even though they aren't, you'll get a lossy approximation of the image.
The number of values you throw away increases compression at the cost of image fidelity.
But I'm sure someone else can give you a better explanation.
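As a small illustration of that zig-zag read order, here is a C# sketch that walks an 8x8 coefficient block diagonally and zeroes everything after the first keep coefficients - a crude version of the lossy truncation described above (treat it as an assumption-laden sketch, not MPEG's actual quantization):

using System;

public static class Dct
{
    // Visits an 8x8 block in JPEG-style zig-zag order (anti-diagonals where row + col = d)
    // and zeroes every coefficient after the first 'keep' ones.
    public static void ZigZagTruncate(int[,] block, int keep)
    {
        int seen = 0;
        for (int d = 0; d <= 14; d++)
        {
            int lo = Math.Max(0, d - 7);
            int hi = Math.Min(7, d);
            for (int k = lo; k <= hi; k++)
            {
                // even diagonals are read upward (toward row 0), odd diagonals downward
                int row = (d % 2 == 0) ? hi - (k - lo) : k;
                int col = d - row;
                if (seen++ >= keep)
                    block[row, col] = 0;
            }
        }
    }
}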
A: I'd recommend picking up a copy of Digital Video Compression - it's a really good overview of compression algorithms for images and video.
A: Anthony Cramp's answer looked good to me. As he mentions, the DCT transforms the data into the frequency domain. The DCT is heavily used in video compression, as the human visual system is much less sensitive to high-frequency changes; therefore, zeroing out the higher frequency values results in a smaller file, with little effect on a human's perception of the video quality.
In terms of using the DCT to compare images, I guess the only real benefit is if you cut away the higher frequency data and therefore have a smaller set of data to search/match. Something like Haar wavelets may give better image matching results.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How can I create Prototype Methods (like JavaScript) in C#.Net? How is it possible to make prototype methods in C#.Net?
In JavaScript, I can do the following to create a trim method for the string object:
String.prototype.trim = function() {
return this.replace(/^\s+|\s+$/g,"");
}
How can I go about doing this in C#.Net?
A: It sounds like you're talking about C#'s Extension Methods. You add functionality to existing classes by inserting the "this" keyword before the first parameter. The method has to be a static method in a static class. Strings in .NET already have a "Trim" method, so I'll use another example.
public static class MyStringExtensions
{
public static bool ContainsMabster(this string s)
{
return s.Contains("Mabster");
}
}
So now every string has a tremendously useful ContainsMabster method, which I can use like this:
if ("Why hello there, Mabster!".ContainsMabster()) { /* ... */ }
Note that you can also add extension methods to interfaces (e.g. IList), which means that any class implementing that interface will also pick up that new method.
Any extra parameters you declare in the extension method (after the first "this" parameter) are treated as normal parameters.
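For example, here is a small sketch of an extension method declared against an interface, with an ordinary extra parameter (AddIfMissing is just an invented name):

using System.Collections.Generic;

public static class ListExtensions
{
    // Any IList<T> implementation (List<T>, BindingList<T>, ...) picks this up.
    public static void AddIfMissing<T>(this IList<T> list, T item)
    {
        if (!list.Contains(item))
            list.Add(item);
    }
}

// Usage:
//   var names = new List<string> { "Mabster" };
//   names.AddIfMissing("Matt");   // the extra argument is passed normally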
A: You can't dynamically add methods to existing objects or classes in .NET, except by changing the source for that class.
You can, however, in C# 3.0, use extension methods, which look like new methods, but are compile-time magic.
To do this for your code:
public static class StringExtensions
{
public static String trim(this String s)
{
return s.Trim();
}
}
To use it:
String s = " Test ";
s = s.trim();
This looks like a new method, but will compile the exact same way as this code:
String s = " Test ";
s = StringExtensions.trim(s);
What exactly are you trying to accomplish? Perhaps there are better ways of doing what you want?
A: You need to create an extension method, which requires .NET 3.5. The method needs to be static, in a static class. The first parameter of the method needs to be prefixed with "this" in the signature.
public static string MyMethod(this string input)
{
    // do things
    return input;   // placeholder return so the example compiles
}
You can then call it like
"asdfas".MyMethod();
A: Using the 3.5 compiler you can use an Extension Method:
public static class StringExtensions
{
    public static void Trim(this string s)
    {
        // implementation
    }
}
You can use this on a CLR 2.0 targeted project (3.5 compiler) by including this hack:
namespace System.Runtime.CompilerServices
{
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
public sealed class ExtensionAttribute : Attribute
{
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: CSharpCodeProvider Compilation Performance Is CompileAssemblyFromDom faster than CompileAssemblyFromSource?
It should be as it presumably bypasses the compiler front-end.
A: CompileAssemblyFromDom compiles to a .cs file which is then run through the normal C# compiler.
Example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.CSharp;
using System.CodeDom;
using System.IO;
using System.CodeDom.Compiler;
using System.Reflection;
namespace CodeDomQuestion
{
class Program
{
private static void Main(string[] args)
{
Program p = new Program();
p.dotest("C:\\fs.exe");
}
public void dotest(string outputname)
{
CSharpCodeProvider cscProvider = new CSharpCodeProvider();
CompilerParameters cp = new CompilerParameters();
cp.MainClass = null;
cp.GenerateExecutable = true;
cp.OutputAssembly = outputname;
CodeNamespace ns = new CodeNamespace("StackOverflowd");
CodeTypeDeclaration type = new CodeTypeDeclaration();
type.IsClass = true;
type.Name = "MainClass";
type.TypeAttributes = TypeAttributes.Public;
ns.Types.Add(type);
CodeMemberMethod cmm = new CodeMemberMethod();
cmm.Attributes = MemberAttributes.Static;
cmm.Name = "Main";
// intentionally invalid C# - it forces compile errors so we can see where they point
cmm.Statements.Add(new CodeSnippetExpression("System.Console.WriteLine('f'zxcvv)"));
type.Members.Add(cmm);
CodeCompileUnit ccu = new CodeCompileUnit();
ccu.Namespaces.Add(ns);
CompilerResults results = cscProvider.CompileAssemblyFromDom(cp, ccu);
foreach (CompilerError err in results.Errors)
Console.WriteLine(err.ErrorText + " - " + err.FileName + ":" + err.Line);
Console.WriteLine();
}
}
}
which shows errors in a (now nonexistent) temp file:
) expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17
; expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17
Invalid expression term ')' - c:\Documents and Settings\jacob\Local Settings\Tem p\x59n9yb-.0.cs:17
So I guess the answer is "no"
A: I tried finding the ultimate compiler call earlier and gave up. There are too many layers of interfaces and virtual classes for my patience.
I don't think the source reader part of the compiler ends up with a DOM tree, but intuitively I would agree with you. The work necessary to transform the DOM to IL should be much less than reading C# source code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: What is a good Mercurial usage pattern for this setup? We've got two developers on the same closed (ugh, stupid gov) network, another developer a couple of minutes' drive down the road, and a fourth developer half-way across the country. E-mail, FTP, and removable media are all possible methods of transfer for the people not on the same network.
I am one of the two closed network developers, consider us the "master" location.
What is the best Mercurial setup/pattern for the group? What is the best way to transmit changes to/from the remote developers? As I am in charge, I figured that I would have to keep at least one master repo plus another local repo in which I can develop. Every other person should just need a clone of the master. Is this right? I guess this also makes me responsible for the merging?
As you can see, I'm still trying to wrap my head around distributed version control. I don't think there is any other way to do this with the connectivity situation.
A: Patches are a simple and versatile solution.
For moving around larger groups of changes (especially binary changes and merges), Mercurial offers binary bundles. A bundle is basically the binary stuff that is sent on the network when you do hg push, but here it is captured in a file.
Let's imagine I have gotten a clone somehow (by flash drive, DVD, etc.). Call it upstream. I then make a second clone, call it devel. I do all my development in devel and make lots of commits, merges, etc. Since Mercurial is distributed I can do all this offline.
To see which changesets are missing in upstream I do
% hg outgoing ../upstream
When I have something to send, I can use
% hg bundle changes.hg ../upstream
to get a binary compressed file which contain the changesets including all their meta data. I can then burn this file on a CD and send it by mail...
The recipient of the bundle can do
% hg incoming changes.hg
to see the changeset list and
% hg pull changes.hg
to unpack and add the changesets to his repository. He will then most likely have to merge -- this is exactly as if he had pulled directly from your repository over HTTP or SSH.
Note, the upstream repository is only used as a convenient way to remember which changesets are already found in the upstream repository. You can also just jot down the changeset ID and use hg bundle --base when bundling to specify the base (common) changeset. See hg help bundle or look in the wiki.
A: The users outside the network can make patches and use email to send the updates to the main repo, or to someone like yourself to merge them. The other internal people can have local copies, like yourself, and do merges -- but if you are handling these out-of-network patches, it might be better for one person to deal with them so nobody gets confused. That's something you'd have to consider yourself, though.
Syncing the other way, you'd create a patch and then email it or get a flash drive to the remote developers to patch their systems. You're going to need some good communication in the team; I am thankful I'm not in your shoes.
Those are my only suggestions --well, the obvious, get them a VPN connection! I'd love to hear how it goes, what plans stabilize into a weekly groove, et cetera.
A: Correct. The only way anything makes it onto the closed network is via flash drive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: SQL Case Expression Syntax? What is the complete and correct syntax for the SQL Case expression?
A: Sybase has the same case syntax as SQL Server:
Description
Supports conditional SQL expressions; can be used anywhere a value expression can be used.
Syntax
case
when search_condition then expression
[when search_condition then expression]...
[else expression]
end
Case and values syntax
case expression
when expression then expression
[when expression then expression]...
[else expression]
end
Parameters
case
begins the case expression.
when
precedes the search condition or the expression to be compared.
search_condition
is used to set conditions for the results that are selected. Search conditions for case expressions are similar to the search conditions in a where clause. Search conditions are detailed in the Transact-SQL User’s Guide.
then
precedes the expression that specifies a result value of case.
expression
is a column name, a constant, a function, a subquery, or any combination of column names, constants, and functions connected by arithmetic or bitwise operators. For more information about expressions, see “Expressions” in.
Example
select disaster,
case
when disaster = "earthquake"
then "stand in doorway"
when disaster = "nuclear apocalypse"
then "hide in basement"
when monster = "zombie apocalypse"
then "hide with Chuck Norris"
else "ask mom"
end
from endoftheworld
A: The complete syntax depends on the database engine you're working with:
For SQL Server:
CASE case-expression
WHEN when-expression-1 THEN value-1
[ WHEN when-expression-n THEN value-n ... ]
[ ELSE else-value ]
END
or:
CASE
WHEN boolean-when-expression-1 THEN value-1
[ WHEN boolean-when-expression-n THEN value-n ... ]
[ ELSE else-value ]
END
expressions, etc:
case-expression - something that produces a value
when-expression-x - something that is compared against the case-expression
value-1 - the result of the CASE statement if:
the when-expression == case-expression
OR the boolean-when-expression == TRUE
boolean-when-exp.. - something that produces a TRUE/FALSE answer
Link: CASE (Transact-SQL)
Also note that the ordering of the WHEN statements is important. You can easily write multiple WHEN clauses that overlap, and the first one that matches is used.
Note: If no ELSE clause is specified, and no matching WHEN-condition is found, the value of the CASE expression will be NULL.
A: I dug up the Oracle page for the same and it looks like this is the same syntax, just described slightly differently.
Link: Oracle/PLSQL: Case Statement
A: Oracle syntax from the 11g Documentation:
CASE { simple_case_expression | searched_case_expression }
[ else_clause ]
END
simple_case_expression
expr { WHEN comparison_expr THEN return_expr }...
searched_case_expression
{ WHEN condition THEN return_expr }...
else_clause
ELSE else_expr
A: Considering you tagged multiple products, I'd say the full correct syntax would be the one found in the ISO/ANSI SQL-92 standard:
<case expression> ::=
<case abbreviation>
| <case specification>
<case abbreviation> ::=
NULLIF <left paren> <value expression> <comma>
<value expression> <right paren>
| COALESCE <left paren> <value expression>
{ <comma> <value expression> }... <right paren>
<case specification> ::=
<simple case>
| <searched case>
<simple case> ::=
CASE <case operand>
<simple when clause>...
[ <else clause> ]
END
<searched case> ::=
CASE
<searched when clause>...
[ <else clause> ]
END
<simple when clause> ::= WHEN <when operand> THEN <result>
<searched when clause> ::= WHEN <search condition> THEN <result>
<else clause> ::= ELSE <result>
<case operand> ::= <value expression>
<when operand> ::= <value expression>
<result> ::= <result expression> | NULL
<result expression> ::= <value expression>
Syntax Rules
1) NULLIF (V1, V2) is equivalent to the following <case specification>:
CASE WHEN V1=V2 THEN NULL ELSE V1 END
2) COALESCE (V1, V2) is equivalent to the following <case specification>:
CASE WHEN V1 IS NOT NULL THEN V1 ELSE V2 END
3) COALESCE (V1, V2, . . . ,n ), for n >= 3, is equivalent to the
following <case specification>:
CASE WHEN V1 IS NOT NULL THEN V1 ELSE COALESCE (V2, . . . ,n )
END
4) If a <case specification> specifies a <simple case>, then let CO
be the <case operand>:
a) The data type of each <when operand> WO shall be comparable
with the data type of the <case operand>.
b) The <case specification> is equivalent to a <searched case>
in which each <searched when clause> specifies a <search
condition> of the form "CO=WO".
5) At least one <result> in a <case specification> shall specify a
<result expression>.
6) If an <else clause> is not specified, then ELSE NULL is im-
plicit.
7) The data type of a <case specification> is determined by ap-
plying Subclause 9.3, "Set operation result data types", to the
data types of all <result expression>s in the <case specifica-
tion>.
Access Rules
None.
General Rules
1) Case:
a) If a <result> specifies NULL, then its value is the null
value.
b) If a <result> specifies a <value expression>, then its value
is the value of that <value expression>.
2) Case:
a) If the <search condition> of some <searched when clause> in
a <case specification> is true, then the value of the <case
specification> is the value of the <result> of the first
(leftmost) <searched when clause> whose <search condition> is
true, cast as the data type of the <case specification>.
b) If no <search condition> in a <case specification> is true,
then the value of the <case expression> is the value of the
<result> of the explicit or implicit <else clause>, cast as
the data type of the <case specification>.
A: Here are the CASE statement examples from the PostgreSQL docs (Postgres follows the SQL standard here):
SELECT a,
CASE WHEN a=1 THEN 'one'
WHEN a=2 THEN 'two'
ELSE 'other'
END
FROM test;
or
SELECT a,
CASE a WHEN 1 THEN 'one'
WHEN 2 THEN 'two'
ELSE 'other'
END
FROM test;
Obviously the second form is cleaner when you are just checking one field against a list of possible values. The first form allows more complicated expressions.
A: Case statement syntax in SQL SERVER:
CASE column
WHEN value1 THEN 1
WHEN value2 THEN 2
WHEN value3 THEN 3
WHEN value4 THEN 4
ELSE ''
END
And we can use like below also:
CASE
WHEN column=value1 THEN 1
WHEN column=value2 THEN 2
WHEN column=value3 THEN 3
WHEN column=value4 THEN 4
ELSE ''
END
A: Here you can find a complete guide for MySQL case statements in SQL.
CASE
WHEN some_condition THEN return_some_value
ELSE return_some_other_value
END
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: Upgrade to ASP.NET 3.x I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the .Net Framework.
Is it possible to upgrade my ASP.NET web server to version 3.x of the .Net Framework?
I have tried this; however, when selecting which version of the .NET Framework to use in IIS (the ASP.NET tab), only versions 1.1 and 2.0 show.
Is there a work around?
A:
if I install 3.5 and have IIS set up to use 2.0, will I be able to use 3.5 features?
Yes, that is correct. You have IIS set to 2.0 for both 2.0 and 3.5 sites, as they both run on the same CLR. 3.5 uses a different compile method than 2.0. This is declared in the web.config for the site. See this post for more details on this. But the setup in IIS for both 3.5 and 2.0 ASP.net sites is identical.
A: Unfortunately, the statement .NET versions can be installed side-by-side, so it won't disrupt any "legacy" apps isn't entirely true. If you install 3.5, it requires 2.0 SP1, which can disrupt legacy applications that uses 2.0 and connects to Oracle database servers.
A: Sure, download the 3.5 redistributable, install it on the server, and you're good to go. .NET versions can be installed side-by-side, so it won't disrupt any "legacy" apps.
http://www.microsoft.com/downloads/details.aspx?FamilyId=333325FD-AE52-4E35-B531-508D977D32A6&displaylang=en
A: GateKiller,
.NET 3.0 and .NET 3.5 did not change the version of the CLR, so "using ASP.NET 3.5" is a more complicated thing than it sounds like it should be at first. In essence, you're still running on the 2.0 CLR, but you're using the C# 3.0 compiler and linking against the 3.5 libraries. It means adding a bunch of stuff to your Web.config file to become an ASP.NET 3.5 project.
Scott Hanselman has an awesome blog post covering the details:
http://www.hanselman.com/blog/HowToSetAnIISApplicationOrAppPoolToUseASPNET35RatherThan20.aspx
A: The version you are selecting in IIS is the version of the CLR to use. There are only two versions of the CLR. The .NET Framework 3.5 runs on CLR 2.0
A: The new framework is .NET 3.5; you'll have a new assembly, System.Core, plus a few more if you use features like LINQ.
.NET 3.5 comes with the new C# 3.0 compiler.
ASP.NET is still version 2.0.
Lovely and confusing isn't it ;-)
You should upgrade the .Net framework on the server to .Net 3.5 SP1, but you're still going to be running ASP.Net 2.0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How can I evaluate C# code dynamically? I can do an eval("something()"); to execute the code dynamically in JavaScript. Is there a way for me to do the same thing in C#?
An example of what I am trying to do is: I have an integer variable (say i) and I have multiple properties by the names: "Property1", "Property2", "Property3", etc.
Now, I want to perform some operations on the "Propertyi" property depending on the value of i.
This is really simple with Javascript. Is there any way to do this with C#?
A: All of that would definitely work. Personally, for that particular problem, I would probably take a little different approach. Maybe something like this:
class MyClass {
public Point point1, point2, point3;
private Point[] points;
public MyClass() {
//...
this.points = new Point[] {point1, point2, point3};
}
public void DoSomethingWith(int i) {
Point target = this.points[i - 1]; // Property1 is at index 0
// do stuff to target
}
}
When using patterns like this, you have to be careful that your data is stored by reference and not by value. In other words, don't do this with primitives. You have to use their big bloated class counterparts.
I realized that's not exactly the question, but the question has been pretty well answered and I thought maybe an alternative approach might help.
A: Using the Roslyn scripting API (more samples here):
// add NuGet package 'Microsoft.CodeAnalysis.Scripting'
using Microsoft.CodeAnalysis.CSharp.Scripting;
await CSharpScript.EvaluateAsync("System.Math.Pow(2, 4)") // returns 16
You can also run any piece of code:
var script = await CSharpScript.RunAsync(@"
class MyClass
{
public void Print() => System.Console.WriteLine(1);
}")
And reference the code that was generated in previous runs:
await script.ContinueWithAsync("new MyClass().Print();");
A: DISCLAIMER: This answer was written back in 2008. The landscape has changed drastically since then.
Look at the other answers on this page, especially the one detailing Microsoft.CodeAnalysis.CSharp.Scripting.
Rest of answer will be left as it was originally posted but is no longer accurate.
Unfortunately, C# isn't a dynamic language like that.
What you can do, however, is create a C# source code file, complete with a class and everything, run it through the CodeDom provider for C#, compile it into an assembly, and then execute it.
This forum post on MSDN contains an answer with some example code down the page somewhat:
create a anonymous method from a string?
I would hardly say this is a very good solution, but it is possible anyway.
What kind of code are you going to expect in that string? If it is a minor subset of valid code, for instance just math expressions, other alternatives might exist.
Edit: Well, that teaches me to read the questions thoroughly first. Yes, reflection would be able to give you some help here.
If you split the string by the ; first, to get individual properties, you can use the following code to get a PropertyInfo object for a particular property for a class, and then use that object to manipulate a particular object.
String propName = "Text";
PropertyInfo pi = someObject.GetType().GetProperty(propName);
pi.SetValue(someObject, "New Value", new Object[0]);
Link: PropertyInfo.SetValue Method
A: I don't know if you absolutely want to execute C# statements, but you can already execute JavaScript statements in C# 2.0. The open-source library Jint is able to do it. It's a JavaScript interpreter for .NET. Pass it a JavaScript program and it will run inside your application. You can even pass C# objects as arguments and do automation on them.
Also, if you just want to evaluate expressions on your properties, give NCalc a try.
A: You can use reflection to get the property and invoke it. Something like this:
object result = theObject.GetType().GetProperty("Property" + i).GetValue(theObject, null);
That is, assuming the object that has the property is called "theObject" :)
A: Not really. You can use reflection to achieve what you want, but it won't be nearly as simple as in Javascript. For example, if you wanted to set the private field of an object to something, you could use this function:
protected static void SetField(object o, string fieldName, object value)
{
FieldInfo field = o.GetType().GetField(fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
field.SetValue(o, value);
}
A: This is an eval function under c#. I used it to convert anonymous functions (Lambda Expressions) from a string.
Source: http://www.codeproject.com/KB/cs/evalcscode.aspx
public static object Eval(string sCSCode) {
CSharpCodeProvider c = new CSharpCodeProvider();
ICodeCompiler icc = c.CreateCompiler();
CompilerParameters cp = new CompilerParameters();
cp.ReferencedAssemblies.Add("system.dll");
cp.ReferencedAssemblies.Add("system.xml.dll");
cp.ReferencedAssemblies.Add("system.data.dll");
cp.ReferencedAssemblies.Add("system.windows.forms.dll");
cp.ReferencedAssemblies.Add("system.drawing.dll");
cp.CompilerOptions = "/t:library";
cp.GenerateInMemory = true;
StringBuilder sb = new StringBuilder("");
sb.Append("using System;\n" );
sb.Append("using System.Xml;\n");
sb.Append("using System.Data;\n");
sb.Append("using System.Data.SqlClient;\n");
sb.Append("using System.Windows.Forms;\n");
sb.Append("using System.Drawing;\n");
sb.Append("namespace CSCodeEvaler{ \n");
sb.Append("public class CSCodeEvaler{ \n");
sb.Append("public object EvalCode(){\n");
sb.Append("return "+sCSCode+"; \n");
sb.Append("} \n");
sb.Append("} \n");
sb.Append("}\n");
CompilerResults cr = icc.CompileAssemblyFromSource(cp, sb.ToString());
if( cr.Errors.Count > 0 ){
MessageBox.Show("ERROR: " + cr.Errors[0].ErrorText,
"Error evaluating cs code", MessageBoxButtons.OK,
MessageBoxIcon.Error );
return null;
}
System.Reflection.Assembly a = cr.CompiledAssembly;
object o = a.CreateInstance("CSCodeEvaler.CSCodeEvaler");
Type t = o.GetType();
MethodInfo mi = t.GetMethod("EvalCode");
object s = mi.Invoke(o, null);
return s;
}
A: I have written an open source project, Dynamic Expresso, that can convert text expressions written using C# syntax into delegates (or expression trees). Expressions are parsed and transformed into expression trees without using compilation or reflection.
You can write something like:
var interpreter = new Interpreter();
var result = interpreter.Eval("8 / 2 + 2");
or
var interpreter = new Interpreter()
.SetVariable("service", new ServiceExample());
string expression = "x > 4 ? service.SomeMethod() : service.AnotherMethod()";
Lambda parsedExpression = interpreter.Parse(expression,
new Parameter("x", typeof(int)));
parsedExpression.Invoke(5);
My work is based on Scott Gu article http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx .
A: You could also embed a WebBrowser control, then load an HTML file which contains JavaScript.
Then you call the document.InvokeScript method on that browser. The return value of the eval function can be caught and converted into whatever you need.
I did this in several projects and it works perfectly.
Hope it helps
A: Uses reflection to parse and evaluate a data-binding expression against an object at run time.
DataBinder.Eval Method
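For example, a minimal sketch of how that could look against the properties from the question (the class and method names here are just illustrative):
using System.Web.UI;   // DataBinder lives in System.Web.dll

class PropertyReader
{
    // Evaluates the data-binding expression "Property<i>" against the given object at run time
    public static object GetPropertyByIndex(object theObject, int i)
    {
        return DataBinder.Eval(theObject, "Property" + i);
    }
}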
A: I have written a package, SharpByte.Dynamic, to simplify the task of compiling and executing code dynamically. The code can be invoked on any context object using extension methods as detailed further here.
For example,
someObject.Evaluate<int>("6 / {{{0}}}", 3))
returns 3;
someObject.Evaluate("this.ToString()"))
returns the context object's string representation;
someObject.Execute(@
"Console.WriteLine(""Hello, world!"");
Console.WriteLine(""This demonstrates running a simple script"");
");
runs those statements as a script, etc.
Executables can be gotten easily using a factory method, as seen in the example here--all you need is the source code and list of any expected named parameters (tokens are embedded using triple-bracket notation, such as {{{0}}}, to avoid collisions with string.Format() as well as Handlebars-like syntaxes):
IExecutable executable = ExecutableFactory.Default.GetExecutable(executableType, sourceCode, parameterNames, addedNamespaces);
Each executable object (script or expression) is thread-safe, can be stored and reused, supports logging from within a script, stores timing information and last exception if encountered, etc. There is also a Copy() method compiled on each to allow creating cheap copies, i.e. using an executable object compiled from a script or expression as a template for creating others.
Overhead of executing an already-compiled script or statement is relatively low, at well under a microsecond on modest hardware, and already-compiled scripts and expressions are cached for reuse.
A: You could do it with a prototype function:
void something(int i, string P1) {
something(i, P1, String.Empty);
}
void something(int i, string P1, string P2) {
something(i, P1, P2, String.Empty);
}
void something(int i, string P1, string P2, string P3) {
something(i, P1, P2, P3, String.Empty);
}
and so on...
A: I was trying to get the value of a structure (class) member by its name. The structure was not dynamic. None of the answers worked until I finally got it:
public static object GetPropertyValue(object instance, string memberName)
{
return instance.GetType().GetField(memberName).GetValue(instance);
}
This method will return the value of the member by its name. It works on a regular structure (class).
A: You might check the Heleonix.Reflection library. It provides methods to get/set/invoke members dynamically, including nested members, or if a member is clearly defined, you can create a getter/setter (lambda compiled into a delegate) which is faster than reflection:
var success = Reflector.Set(instance, null, $"Property{i}", value);
Or if number of properties is not endless, you can generate setters and chache them (setters are faster since they are compiled delegates):
var setter = Reflector.CreateSetter<object, object>($"Property{i}", typeof(type which contains "Property"+i));
setter(instance, value);
Setters can be of type Action<object, object> but instances can be different at runtime, so you can create lists of setters.
A: The correct answer is that you need to cache all the results to keep the memory usage low.
an example would look like this
TypeOf(Evaluate)
{
"1+1":2;
"1+2":3;
"1+3":5;
....
"2-5":-3;
"0+0":1
}
and add it to a List
List<string> results = new List<string>();
for() results.Add(result);
save the id and use it in the code
hope this helps
A: Unfortunately, C# doesn't have any native facilities for doing exactly what you are asking.
However, my C# eval program does allow for evaluating C# code. It provides for evaluating C# code at runtime and supports many C# statements. In fact, this code is usable within any .NET project, however, it is limited to using C# syntax. Have a look at my website, http://csharp-eval.com, for additional details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "107"
} |
Q: How can I Java webstart multiple, dependent, native libraries? Example: I have two shared objects (same should apply to .dlls). The first shared object is from a third-party library, we'll call it libA.so. I have wrapped some of this with JNI and created my own library, libB.so. Now libB depends on libA.
When webstarting, both libraries are placed in some webstart working area. My java code attempts to load libB. At this point the system loader will attempt to load libA which is not in the system library path (java.library.path won't help this). The end result is that libB has an unsatisfied link and cannot be used.
I have tried loading libA before libB, but that still does not work. Seems the OS wants to do that loading for me. Is there any way I can make this work other than statically compiling?
A: I'm not sure if this would be handled exactly the same way for webstart, but we ran into this situation in a desktop application when dealing with a set of native libraries (dlls in our case).
Loading libA before libB should work, unless one of those libraries has a dependency that is unaccounted for and not in the path. My understanding is that once it gets to a system loadLibrary call (i.e. Java has found the library in its java.library.path and is now telling the OS to load it), it is completely dependent on the operating system to find any dependent libraries, because at that point it is the operating system that is loading the library for the process, and the OS only knows how to look in the system path. That seems hard to set in the case of a Webstart app, but there is a way around this that does not involve static compiling. You may be able to shuffle where your libraries are - I am unsure.
If you use a custom classloader, you can override loadLibrary and findLibrary so that they can locate your libraries from within a jar on your classpath. If you also make the classloader aware of your native library dependencies (i.e. libB depends on libA, which depends on libX), then when loading libB you can catch yourself and ensure you load libA first, and while checking that, notice and load libX first. Then the OS doesn't try to find a library that isn't in your path. It's clunky and a bit painful, but ensuring Java finds them and loads them all in the correct order can work.
A: Static compilation proved to be the only way to webstart multiple dependent native libraries.
A: Are both native libraries packaged into a signed jar which is listed as
<nativelib ...>
In the JNLP file?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do you create your own moniker (URL Protocol) on Windows systems? How do you create your own custom moniker (or URL Protocol) on Windows systems?
Examples:
*
*http:
*mailto:
*service:
A: Take a look at Creating and Using URL Monikers , About Asynchronous Pluggable Protocols and Registering an Application to a URL Protocol from MSDN
A: Here's some old Delphi code we used as a way to get shortcuts in a web application to start a windows program locally for the user.
procedure InstallIntoRegistry;
var
Reg: TRegistry;
begin
Reg := TRegistry.Create;
try
Reg.RootKey := HKEY_CLASSES_ROOT;
if Reg.OpenKey('moniker', True) then
begin
Reg.WriteString('', 'URL:Name of moniker');
Reg.WriteString('URL Protocol', '');
Reg.WriteString('Source Filter', '{E436EBB6-524F-11CE-9F53-0020AF0BA770}');
Reg.WriteInteger('EditFlags', 2);
if Reg.OpenKey('shell\open\command', True) then
begin
Reg.WriteString('', '"' + ParamStr(0) + '" "%1"');
end;
end else begin
MessageBox(0, 'You do not have the necessary access rights to complete this installation!' + Chr(13) +
'Please make sure you are logged in with a user account with administrative rights!', 'Access denied', 0);
Exit;
end;
finally
FreeAndNil(Reg);
end;
MessageBox(0, 'Application WebStart has been installed successfully!', 'Installed', 0);
end;
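For comparison, a rough C# sketch of the same registration using the Microsoft.Win32 registry classes (the protocol name "myproto" and the handler path are illustrative assumptions, not part of the original code):
using System.Reflection;
using Microsoft.Win32;

class ProtocolInstaller
{
    static void Main()
    {
        // Path to the program that should handle the custom protocol (illustrative)
        string handlerPath = Assembly.GetExecutingAssembly().Location;

        // Writing under HKEY_CLASSES_ROOT normally requires administrative rights
        using (RegistryKey key = Registry.ClassesRoot.CreateSubKey("myproto"))
        {
            key.SetValue("", "URL:My Protocol");   // friendly name
            key.SetValue("URL Protocol", "");      // marks the key as a URL protocol handler

            using (RegistryKey command = key.CreateSubKey(@"shell\open\command"))
            {
                // Windows replaces %1 with the full URL that was activated
                command.SetValue("", "\"" + handlerPath + "\" \"%1\"");
            }
        }
    }
}
After that, activating a link such as myproto:something should launch the registered executable with the URL passed as its argument.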
A: Inside OLE from Craig Brockschmidt probably has the best coverage on monikers. If you want to dig a little deeper into this topic, I'd recommend getting this book. It is also contained on the MSDN disk that came along with VS 6.0, in case you still have that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How do I use more than one OpenID? I have more than one OpenID, as I have tried out numerous providers. As people take up OpenID, different suppliers are going to emerge and I may want to switch providers. As all the IDs are me, and all are authenticated against the same email address, shouldn't I be able to log into Stack Overflow with any of them and be able to hit the same account?
A: In addition to the meta tag sample by Otto, you should be aware whether your provider supports OpenID 2.0 (there are numerous improvements). If it does use meta tags as the following:
<link rel="openid2.provider" href="http://www.loginbuzz.com/provider.axd" />
<link rel="openid2.local_id" href="http://example.loginbuzz.com/" />
<link rel="openid.server" href="http://www.loginbuzz.com/provider.axd" />
<link rel="openid.delegate" href="http://example.loginbuzz.com/" />
A good idea would also be to use secure links, but this could limit some relying parties from signing in. This could however be solved by providing a XRDS document.
The really neat thing about XRDS is that you are able to specify multiple providers in this document. Say you have a bunch of different accounts, all with different providers supporting different extensions. The relying party is then able to select the best match by itself.
In the XRDS document you could also specify multiple URLs for each service, so that https is used when appropriate.
I would also recommend buying an i-name as it by design is more secure (the canonical ID - the i-number - associated with an i-name belongs to you even if the i-name expires).
A:
@prakesh
As long as you associate all of them
to the same email address, i would
think it would lead you to same
account.
But whats your experience?
When I tried it out I got a whole new account with 0 rep and no steenkin badges. So at the moment SO does not allow multiple OpenIDs to be associated with the same account.
A: I think each site that implements OpenID would have to build their software to allow multiple entries for your OpenID credentials. However, just because a site doesn't allow you to create multiple entries doesn't mean you can't swap out OpenID suppliers.
How to turn your blog into an OpenID
STEP 1: Get an OpenID. There a lots of servers and services out there you can use. I use http://www.myopenid.com
STEP 2: Add these two lines to your blog's main template in-between the <HEAD></HEAD> tags at the top of your template. Most all blog engines support editing your template so this should be an easy and very possible thing to do.
Example:
<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid.delegate" href=http://YOURUSERNAME.myopenid.com/ />
This will let you use your domain/blog as your OpenID.
Credits to Scott Hanselman and Simon Willison for these simple instructions.
Switch Your Supplier
Now that your OpenID points to your blog, you can update your link rel href's to point to a new supplier and all the places that you've tied your blog's OpenID will use the new supplier.
A:
Doesn't using multiple open-id providers sort of undermine the point of open id?
No. Say you are using a Yahoo OpenID, but you decide to move to Google instead. Multiple OpenIDs per account allows you to associate your account with the Google OpenID, then deauthorize the Yahoo OpenID.
A: Doesn't using multiple open-id providers sort of undermine the point of open id?
A: The key here is to not change identities, ever.
Change providers, but not identities. (this is like real life)
So new users to OpenID should first consider what their identity could be.
Users that already have some kind of website they own should choose this URL and users without a website have these options:
*
*Get something like a blog to get a URL
*Buy an i-name (or a domain name)
*or use an identity provider supplied URL
In the case of the identity provider supplied URL, users need to be aware that if in the future they choose to delegate or change identities in some way, it's essentially a new identity, and that multiple identity support with RPs (and OPs) is limited (usually requiring a local account on an RP site to be re-associated with a different OpenID identity).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Should the folders in a solution match the namespace? Should the folders in a solution match the namespace?
In one of my teams projects, we have a class library that has many sub-folders in the project.
Project Name and Namespace: MyCompany.Project.Section.
Within this project, there are several folders that match the namespace section:
*
*Folder Vehicles has classes in the MyCompany.Project.Section.Vehicles namespace
*Folder Clothing has classes in the MyCompany.Project.Section.Clothing namespace
*etc.
Inside this same project, is another rogue folder
*
*Folder BusinessObjects has classes in the MyCompany.Project.Section namespace
There are a few cases like this where folders are made for "organizational convenience".
My question is: What's the standard? In class libraries do the folders usually match the namespace structure or is it a mixed bag?
A: Also, note that if you use the built-in templates to add classes to a folder, it will by default be put in a namespace that reflects the folder hierarchy.
The classes will be easier to find and that alone should be reasons good enough.
The rules we follow are:
*
*Project/assembly name is the same as the root namespace, except for the .dll ending
*Only exception to the above rule is a project with a .Core ending, the .Core is stripped off
*Folders equals namespaces
*One type per file (class, struct, enum, delegate, etc.) makes it easy to find the right file
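For illustration, following those rules, a class added to the Vehicles folder of the project from the question would end up looking roughly like this (the Car class is just an invented example):
// File: Vehicles\Car.cs in the MyCompany.Project.Section project
namespace MyCompany.Project.Section.Vehicles
{
    public class Car
    {
        // ...
    }
}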
A: No.
I've tried both methods on small and large projects, both with single (me) and a team of developers.
I found the simplest and most productive route was to have a single namespace per project and all classes go into that namespace. You are then free to put the class files into whatever project folders you want. There is no messing about adding using statements at the top of files all the time as there is just a single namespace.
It is important to organize source files into folders and in my opinion that's all folders should be used for. Requiring that these folders also map to namespaces is unnecessary, creates more work, and I found was actually harmful to organization because the added burden encourages disorganization.
Take this FxCop warning for example:
CA1020: Avoid namespaces with few types
cause: A namespace other than the global namespace contains fewer than five types
https://msdn.microsoft.com/en-gb/library/ms182130.aspx
This warning encourages the dumping of new files into a generic Project.General folder, or even the project root until you have four similar classes to justify creating a new folder. Will that ever happen?
Finding Files
The accepted answer says "The classes will be easier to find and that alone should be reasons good enough."
I suspect the answer is referring to having multiple namespaces in a project which don't map to the folder structure, rather than what I am suggesting which is a project with a single namespace.
In any case while you can't determine which folder a class file is in from the namespace, you can find it by using Go To Definition or the search solution explorer box in Visual Studio. Also this isn't really a big issue in my opinion. I don't expend even 0.1% of my development time on the problem of finding files to justify optimizing it.
Name clashes
Sure, creating multiple namespaces allows a project to have two classes with the same name. But is that really a good thing? Is it perhaps easier to just disallow that from being possible? Allowing two classes with the same name creates a more complex situation where 90% of the time things work a certain way and then suddenly you find you have a special case. Say you have two Rectangle classes defined in separate namespaces:
*
*class Project1.Image.Rectangle
*class Project1.Window.Rectangle
It's possible to hit an issue that a source file needs to include both namespaces. Now you have to write out the full namespace everywhere in that file:
var rectangle = new Project1.Window.Rectangle();
Or mess about with some nasty using statement:
using Rectangle = Project1.Window.Rectangle;
With a single namespace in your project you are forced to come up with different, and I'd argue more descriptive, names like this:
*
*class Project1.ImageRectangle
*class Project1.WindowRectangle
And usage is the same everywhere, you don't have to deal with a special case when a file uses both types.
using statements
using Project1.General;
using Project1.Image;
using Project1.Window;
using Project1.Window.Controls;
using Project1.Shapes;
using Project1.Input;
using Project1.Data;
vs
using Project1;
The ease of not having to add namespaces all the time while writing code. It's not the time it takes really, it's the break in flow of having to do it and just filling up files with lots of using statements - for what? Is it worth it?
Changing project folder structure
If folders are mapped to namespaces then the project folder path is effectively hard-coded into each source file. This means any rename or move of a file or folder in the project requires actual file contents to change. Both the namespace declaration of files in that folder and using statements in a whole bunch of other files that reference classes in that folder. While the changes themselves are trivial with tooling, it usually results in a large commit consisting of many files whose classes haven't even changed.
With a single namespace in the project you can change project folder structure however you want without any source files themselves being modified.
Visual Studio automatically maps the namespace of a new file to the project folder it's created in
Unfortunate, but I find the hassle of correcting the namespace is less than the hassle of dealing with them. Also I've got into the habit of copy pasting an existing file rather than using Add->New.
Intellisense and Object Browser
The biggest benefit, in my opinion, of using multiple namespaces in large projects is having extra organization when viewing classes in any tooling that displays classes in a namespace hierarchy. Even documentation. Obviously having just one namespace in the project results in all classes being displayed in a single list rather than broken into categories. However, personally I've never been stumped or delayed because of a lack of this, so I don't find it a big enough benefit to justify multiple namespaces.
Although if I were writing a large public class library then I would probably use multiple namespaces in the project so that the assembly looked neat in the tooling and documentation.
A: I'd say yes.
First, it will be easier to find the actual code files by following down the namespaces (say, when somebody e-mails you a naked exception call stack). If you let your folders go out of sync with namespaces, finding files in big codebases gets tiring.
Second, VS will generate new classes you create in folders with the same namespace as their parent folder structure. If you decide to swim against this, it will be just one more plumbing job to do daily when adding new files.
Of course, this goes without saying that one should be conservative about how deep one's folder/namespace hierarchy goes.
A: Yes they should, only leads to confusion otherwise.
A: I think the standard, within .NET, is to try to do it when possible, but not to create unnecessarily deep structures just to adhere to it as a hard rule. None of my projects follow the namespace == structure rule 100% of the time, sometimes its just cleaner/better to break out from such rules.
In Java you don't have a choice. I'd call that a classic case of what works in theory vs what works in practice.
A: @lassevk: I agree with these rules, and have one more to add.
When I have nested classes, I still split them out, one per file. Like this:
// ----- Foo.cs
partial class Foo
{
// Foo implementation here
}
and
// ----- Foo.Bar.cs
partial class Foo
{
class Bar
{
// Foo.Bar implementation here
}
}
A:
What's the standard?
There is no official standard but conventionally the folder-to-namespace mapping pattern is most widely used.
In class libraries do the folders usually match the namespace
structure or is it a mixed bag?
Yes, in most class libraries the folders match the namespace for organizational ease.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "157"
} |
Q: Verifying files for testing I was working with our quality team yesterday doing some formal testing. In their procedure they were verifying that all files on the test machine were pulled from the release. The way they were verifying the files were the same was by checking the size and the date/time stamp Windows puts on them in Windows Explorer. These happened to be off for another reason, which I was able to track down.
Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts???
A: The only 100% way to figure out if two files are equal is to do a binary comparison of the two.
If you can live with the risk of false positives (ie. two files which aren't 100% identical but your code says they are), then the digest and checksum algorithms can be used to lessen the work, particularly if the files lives on two different machines with less than optimal bandwidth so that a binary comparison is infeasible.
The digest and checksum algorithms all have some chance of false positives, but the exact chance varies with the algorithm. The general rule is that the more cryptographically oriented the algorithm is, and the more bits it outputs, the smaller the chance of a false positive.
Even the CRC-32 algorithm is fairly good to use and it should be easy to find code examples on the internet that implements it.
If you only do a size/timestamp comparison then I'm sorry to say that this is easy to circumvent and won't actually give you much of a certainty that the files are the same or different.
It depends though, if you know that in your world, timestamps are kept, and only changed when the file is modified, then you can use it, otherwise it holds no guarantee.
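As a minimal sketch of such a byte-for-byte check in C# (the paths you pass in are placeholders):
using System.IO;

static class FileCompare
{
    public static bool AreIdentical(string pathA, string pathB)
    {
        FileInfo a = new FileInfo(pathA);
        FileInfo b = new FileInfo(pathB);

        // Different sizes already prove the files differ
        if (a.Length != b.Length)
            return false;

        using (FileStream streamA = a.OpenRead())
        using (FileStream streamB = b.OpenRead())
        {
            int byteA, byteB;
            do
            {
                byteA = streamA.ReadByte();
                byteB = streamB.ReadByte();
                if (byteA != byteB)
                    return false;
            } while (byteA != -1);   // -1 signals end of file
        }
        return true;
    }
}
Reading one byte at a time is slow; in practice you would read buffered blocks, but the idea is the same.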
A: Hashing is very good. But the other, slightly lower tech alternative is to run a diff tool like WinMerge or TextWrangler and compare the two versions of each file. Boring and there's room for human error.
Best of all, use version control to ensure the files you're testing are the files you edited and the ones you're going to launch. We have checkout folders from our repo as the staging and live sites, so once you've committed the changes from your working copy, you can be 100% sure that the files you test, push to staging and then live are the same, because you just run "svn update" on each box and check the revision number.
Oh, and if you need to roll back in a hurry (it happens to us all sometime or another) you just run svn update again with the -r switch and go back to a previous revision virtually instantly.
A: I would do something like an md5sum hash on the files and compare that to the known hashes from the release. They will be more accurate than just date/time comparisons and should be able to be automated more.
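A rough C# sketch of that approach, assuming you just want to hash and compare two local files (the paths are placeholders):
using System;
using System.IO;
using System.Security.Cryptography;

class FileHashCheck
{
    // Computes an MD5 digest of the file as a hex string
    static string Md5Of(string path)
    {
        using (MD5 md5 = MD5.Create())
        using (FileStream stream = File.OpenRead(path))
        {
            return BitConverter.ToString(md5.ComputeHash(stream));
        }
    }

    static void Main()
    {
        bool same = Md5Of(@"C:\release\MyApp.dll") == Md5Of(@"C:\test\MyApp.dll");
        Console.WriteLine(same ? "Files match" : "Files differ");
    }
}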
A: The normal way is to compute a hash of the two files and compare that. MD5 and SHA1 are typical hash algorithms. md5sum should be installed by default on most unix type machines, and Wikipedia's md5sum article has links to some windows implementations.
A: You should do a CRC check on each file... from the wiki:
Cyclic redundancy check, a type of hash function used to produce a checksum, in order to detect errors in transmission or storage.
It produces an almost unique value based on the contents of the file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: DVCS Choices - What's good for Windows? So I want to get a project on a distributed version control system, such as mercurial, git, or bazaar. The catch is that I need the Windows support to be good, i.e. no instructions that start off with "install cygwin...". Now I've heard that git's Windows support is decent these days, but don't have any first hand experience. Also, it sounds like the bazaar team has an explicit goal of making it as multiplatform as possible.
Can I get any recommendations?
A: I use msys-git on windows every single day. Works fast and flawlessly.
Although the newer build has some problems with git-svn, this build (Git-1.5.5-preview20080413.exe) has a working git-svn.
A: There's a nice comparison between git, hg and bzr in this InfoQ article. They all have their strengths and weaknesses. You'll have to think about your project and your workflows and choose the best fit. The good news is that they're all fairly good.
A: Last I checked, the only thing you need for Mercurial is Python and to grab a binary package. If you find yourself with more time and want to fiddle / build it yourself, look here.
The only real drawback with HG is its idea of branching... but for some people that's a major plus.
I like it because it's intuitive, easy to install, and works on anything that Python does. I don't think that all of the available plugins will work for you, but most should.
A: I've had the best luck with Bazaar, followed by Mercurial. Never could get Git to work correctly. A quick search shows that Git still requires clunky emulation layers like Cygwin/MSYS, and I can't find any integration tools like TortoiseBzr for Git.
With Mercurial in Windows, I had several minor issues (case-insensitive paths, symlinks, etc.). They were usually fixed eventually, but I felt that the same quality of testing was not applied to running on Windows as for the other platforms. Bazaar also had better documentation for integrating with native applications like Visual C.
A: EDIT: Perhaps add a "dvcs", "distrubutedversioncontrol", "distrubuted"
I've used Mercurial on Windows with no problems. You can use TortoiseHG or just use the command line. Mercurial does require Python, but that is easy to install in Windows as well.
Mercurial Binary Packages
A: I agree with basszero. I'm using mercurial under windows and it's as easy and reliable as it can get. My development team is spread over Europe (well Dublin and Vienna :-).
We use VPN to commit, or sometimes the built-in webserver (hg serve). Both work fine with no problems out of the box.
Also the open source diff3 tool works perfectly with Mercurial and TortoiseHG out of the box.
A: If you are concerned about an easy to use interface:
The bazaar folk now include TortoiseBzr in their windows binary package. That's got to be a pretty strong indicator that they think it is up to snuff. I don't know what the maturity/stability of TortoiseHg is, but there certainly isn't a decent GUI interface for git yet, and the MSYS git build still needs some work IMO.
If your team are comfortable with or prefer the command line, then either bazaar or mercurial would probably work well for you, and are both probably about the same in terms of learning curve. Git's learning curve is much higher. It is like the swiss-army knife that is almost wider than it is long, with all the little gadgets and do-dads in it and hanging off it, with the springs so tight that you occasionally slice a finger open trying to prise a blade out.
A: In my experience using GIT on windows is a major pain. But I have been using Fossil SCM for some time now, and I think it actually fits your needs exactly.
It also has a built in Ticket system and a Wiki. And the whole program is contained in 1 file and it works right out of the box.
I totally recommend it.
Here is a link to the site http://www.fossil-scm.org/
Remember, this site is self hosting, what that means is you are looking at the web interface to fossil it self, when you look at tickets and the wiki and documentation, you actually are using fossil.
But if your project has millions of lines of code and is a few gigabytes in size, you have to use GIT, there is no way around that problem.
Enjoy.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do I create a Class using the Singleton Design Pattern in Ruby? The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone know how to implement it in Ruby?
A: Use the singleton module:
class Clazz
include Singleton
end
See http://www.ruby-doc.org/stdlib/libdoc/singleton/rdoc/index.html for more info.
A: Actually, the above answer was not completely correct.
require 'singleton'
class Example
include Singleton
end
You also need to include the require 'singleton' statement.
A: You could use modules to the same effect, I believe. Although it's not "the singleton pattern", you can have global state that way (which is what a singleton is! Naughty global state!).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Automating VMWare or VirtualPC I'm currently experimenting with build scripts, and since I have an ASP.NET Web Part under source control, my build script should do the following at the end:
*
*Grab the "naked" Windows 2003 IIS VMWare or Virtual PC Image from the Network
*Boot it up
*Copy the Files from the Build Folder to the Server
*Install it
*Do whatever else is needed
I have never tried automating a Virtual Machine, but I saw that both VMWare and Virtual Server offer automation facilities. While I cannot use Virtual Server (Windows XP Home :-(), Virtual PC works.
Does anyone here have experience with either VMWare Server or Virtual PC 2007 SP1 in terms of automation?
Which one is better suited (I run windows, so the Platform-independence of VMWare does not count) and easier to automate?
A: Use https://github.com/dblock/vmwaretasks rather than the raw VixCOM API if you're going to do this in C#.
A: I agree with Chris.
The Virtual Machine Automation APIs (VIX) are a very good option for automating virtual machine operations.
VIX API version 1.6.2 can be used for automating ESX guest operations as well.
A: With VMWare, there is the Virtual Machine Automation APIs (VIX API). You can find the reference guide here. It works with VMWare Server and WorkStation, but AFAIK it's not available for ESX Server.
From the main page for VIX:
The VIX API allows you to write
scripts and programs that automate
virtual machine operations. The API is
high-level, easy to use, and practical
for both script writers and
application programmers. It runs on
VMware Server and Workstation
products, both Windows and Linux.
Bindings are provided for C, Perl, and
COM (Visual Basic, VBscript, C#).
A: VirtualBox also has API's for automating their VM's.
A: To follow up on @Chris, ESX is extremely scriptable. A client I've been working with recently has built a web service that launches a VMware script to create the VM they need, then starts the VM with a custom boot ISO. That ISO includes all the kickstart or unattend.txt info it needs to do a totally unassisted OS build.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Recommended Fonts for Programming? What fonts do you use for programming, and for what language/IDE? I use Consolas for all my Visual Studio work, any other recommendations?
A: Inconsolata 14pt in TextMate
A: I like Profont, I first came across it when Jeff blogged about programming fonts
A: I've really fallen in love with Droid Sans Mono.
A: I like Consolas too, but I also like Anonymous: http://www.ms-studio.com/FontSales/anonymous.html
A: Adding a vote for Consolas. It feels very easy on my eyes.
A: I never found a reason to stray from Courier New. I don't think I'd have a problem with any font so long as it's sans-serif. Mono-spaced fonts are nice for coding, too.
A: I use a proportional font too. They seem good for the same reasons they work in books and magazines: the more variation between characters, the easier it is for the brain to distinguish them; and you can fit more on the screen. Indentation still works fine: 6 leading spaces is still twice as wide as 3 leading spaces.
I use a version of Georgia that I hacked to make the lower case "l" look less like the digit "1", and put a slash through the zero.
A: I really really like DejaVu Sans Mono. It is very clean and easy on the eyes.
A: +1 for Monaco
alt text http://img.skitch.com/20080908-nmjji28uerreqpprs1h86gxna9.png
Just beautiful and I find I can read it for hours on end.
A: I think the anti-aliasing blur on Consolas shows up on systems which do not have ClearType enabled. Consolas was designed for ClearType.
[Jeff A: indeed, you can see screenshots of this in a post I wrote on this topic.]
A: Two pages where there's a long list of programming fonts are these pages on keithdevens.com and lowing.org (dead link, but it's in the internet archive)
Some other discussions of programming fonts that may have more suggestions are the comments to this blog post on typographica and this topic on a text editor's forum.
Personally I like Triskweline:
alt text http://www.netalive.org/tinkering/triskweline/shot.gif
A: Instead of just chiming in with another vote for a particular font, I'd recommend reading these comparisons of programming fonts where you can learn a little more:
Jeff Atwood's excellent "round-up":
http://www.codinghorror.com/blog/archives/000157.html
Another review of 5 fonts with nice screenshots:
http://blog.hamstu.com/2008/02/03/the-typography-of-code/
A: I use Consolas for everything, including Notepad++, SQL Studio, Eclipse, etc. I wish there was a Mac version. Also, if you notice, the text area field on Stack Overflow uses Consolas, so we have some other fans out there as well :p
A: DejaVu Sans Mono (sometimes known as Panic Sans), size 11, anti-aliased. Previously I only used fonts that weren't anti-aliased, but it just seems to work for this font.
A: I like Envy Code R.
A: Verdana - Variable width and easy to read on screen at small sizes.
A: Back in my Mac LC days I swore by Monaco 9pt, mostly for its slashed 0. I never quite got used to the default line-height though.
monaco sample http://www.k8zt.com/ham_fonts/monaco.jpg
It's a little hard to track down in the original non-OS-X version.
A: +1 for Consolas, together with a proper Color Scheme (I use the white one at the first screenshot)
A:
I never found a reason to stray from Courier New. I don't think I'd have a problem with any font so long as it's sans-serif. Mono-spaced fonts are nice for coding, too.
Courier New has serifs.
A: Lucida Sans Typewriter
A: Another vote for Consolas. My favorite IDE font at the moment.
A: Raize Font
The Raize Font is a clean, crisp, fixed-pitched sans serif screen font that is much easier to read than the fixed pitched fonts that come with Windows. Ideally suited for programming, scripting, html writing, etc., the Raize Font can be used in any IDE or text editor.
A: Monaco, 11pt, anti-aliased, on Mac OS X. Looks even better, and crisper, on darker backgrounds.
alt text http://www.fabernitor.net/ayaz/monaco11pt.png
A: Consolas. Italic for comments. Only way. Nahh just kidding, the best programming font is this! Here's your first C program:
The image link must not be working, tell me in a comment http://img40.imageshack.us/img40/8008/picture1iqv.png
Recommended for high readability.
A: +1 for Monaco, although this blog post is making me think about switching to Inconsolata.
I'm curious as to what point size y'all use, I use the TextMate default size of 12pt.
A: I use Bitstream Vera Sans Mono, but you need to activate ClearType to make it readable.
I like the 'Illegal1 = O0' readability test, mentioned earlier in this thread, thanks for that.
A: Anarch, 32 points, ofcourse. Code with style!
anarch http://img525.imageshack.us/img525/1584/ss42po1.jpg
A: For UltraEdit and anything for that matter, I use the good old Courier New.
alt text http://www.identifont.com/samples/microsoft/CourierNew.gif
I've found Consolas too difficult to read with its excessive anti-aliasing.
A: I have used Lucida Console for years and never found anything better.
However, I have tried the Consolas font a few times and, simply -- I prefer Lucida Console.
A: I like Terminus for some command line stuff, at least scrolling log files and irssi/irc (TTF versions available).
Screenshot of the terminus.ttf in action below (PuTTY on Windows XP with ClearType enabled). http://misc.nybergh.net/pub/fonts/terminus/2008-09-08_terminus_ttf_in_gnu_nano_putty_windows_xp_cleartype_screenshot.png
A: I'm going to make some enemies with this, but I actually use -- gasp -- a non-monospace font! I occasionally switch back to a monospace to disambiguate something, but mostly find that a good clean sans-serif font is easiest to read and doesn't waste screen estate.
An IDE with good syntax colouring helps.
A: I second Consolas, Inconsolata, DejaVu Sans Mono, and Droid Sans Mono, with my preference going towards the Droid one.
A: Neep Alt 13/17 is very good.
A: My favourite is ProggyClean at 11px. I've been using it for 2-3 years and it's great for getting lots on screen without being painful to read. It deserves even more attention than the couple of mentions it's had so far:
Proggy Clean http://www.proggyfonts.com/download/example_proggy_clean.gif
The site has many variations including slashed zeroes, bold for function marks etc:
Proggy Square http://www.proggyfonts.com/download/example_proggy_square_bp.gif
(As an aside, my most-loved favourite text editor, TextPad, allows you to have different fonts and font sizes for different file types, which is a really great feature.)
A: Until I found ProggyTiny, I always made my own fonts using Softy. It's surprisingly easy, and might increase your productivity if you're annoyed by some features of your current font (like "Q is too similar to 0").
A: Bitstream vera sans, a Gnome font. I find its much clearer than Consolas, which is pretty good too.
A: I use Terminus in almost everything (Eclipse, PuTTY and other terminals): http://fractal.csie.org/~eric/wiki/Terminus_font
I must say that I don't get why most people use small fonts like 9pt; do you have 14" monitors or what?
For me the best way is to use a font size that makes my monitor display at most one 30-40 line method; this way I need to create smaller methods :)
A: I use MonteCarlo, which is based on ProFont but has a bold face too. That way IDEs/editors that use bold as part of their syntax highlighting leave your text still properly fixed width.
java example http://bok.net.nyud.net/MonteCarlo/images/java-example.png
quick brown fox example http://bok.net.nyud.net/MonteCarlo/images/screenshot-small.gif
Like ProFont, Proggy & others, it's quite small (& being bitmap based, obviously doesn't scale), but I like a small font for coding and it's still extremely clear and easy on the eyes.
A: Either Consolas (download) or Andale Mono (download). I mostly use Andale Mono. I wrote an article about programming fonts a long time ago, I think Consolas wasn't even out yet.
http://www.deadprogrammer.com/photos/fonts.gif
I find that typing Illegal1 = O0 is a good test of suitability.
A: I use Consolas on my mac, BTW; here's a link to download the consolas TTF files if you want to install this (Mac/Win/Linux).
/mp
A: I don't use Consolas, though it does look good on LCD, but sometimes I'm not on LCD, like when I'm giving presentations and then it looks crap.
My current font of choice for programming is the Liberation Mono font.
Oh man, I just discovered why the text on Stack Overflow looks like crap: it forces Consolas, which is a ClearType font, and on my current setup, which didn't have ClearType enabled, it looks very bad.
Going to make a bug report on UserVoice.
A: I have been using the Dina - http://www.donationcoder.com/Software/Jibz/Dina/index.html - font for awhile now for text editing and it seems to be doing the job nicely.
A: ProFont. Am I the only one still using it?
A: I like Fixedsys in Visual Studio. It's a classic. No anti-aliasing blur.
A: I'm amazed nobody has mentioned Pragmata. It's the BMW of programming fonts. Condensed, readable, and the pinnacle of simple elegance.
alt text http://www.fsd.it/fonts/imm/pr_abc.gif
There is now a fundraising project going on for PragmataPro (which covers a larger portion of Unicode than Pragmata) to make it available for free under a Creative Commons license!
A: A excellent CodeProject article that list 33 fonts for programming (With examples of each)
http://www.codeproject.com/KB/work/FontSurvey.aspx
A: I use Inconsolata with UltraEdit on Windows. With TextMate (on the Mac) I prefer Monaco (it's the default font).
A: I have to agree with Kevin Kenny, Proggy fonts all the way, though I prefer Proggy Clean. But either way you have to go with a font that clearly shows the difference between the number 0 and the letter O, which the preview font here doesn't really do.
A: I'm on PanicSans 12pt w/ AA on TextMate, but loving Inconsolata on Terminal/vim... (debating changing my TM font to this one... but point size 14pt) :)
A: Consolas for me as well
A: I just tried Consolas and Envy - Envy seems "too narrow" to my eyes, but Consolas looks great (I am on a mac). Thanks for the tips !
A: Courier New for me as well, it's well spaced.
A: Another vote for Consolas for code editing, and Dina for console output.
A: Lucida Console every time.
I've never found a font that can pack as many lines of code onto the screen at the same point size without looking cramped.
And it looks nice too.
A: I just recently switched from Bitstream Vera Sans Mono to Inconsolata, but reading the answers here, I'm going to give Consolas a chance for a bit. Looks really nice so far.
A: I love consolas, especially with italics for comments. The little italic curlicues are so cute :P
A: @modesty:
I wish there was a Mac version.
You can install the font on a Mac. I use it all the time, everywhere, without any problem. The only thing to pay attention to is to set nomacatsui when working with GVIM, or better yet, switch to MacVim.
A: Another vote up for Dina. As long as you use it at its optimum size (9 pt), it looks great.
A: For quite some time I've been using ProFont, mainly because it allows a lot of lines to fit into a given height (a lot more than, say, Consolas or others). Consolas is not bad either, though...
A: I never considered changing my font; I have always been happy with Courier. This thread has truly opened my eyes, if only I could upvote it!
Went with Droid Sans Mono.
A: I like Consolas myself, but when it comes to monospaced fonts there are quite a few other options to choose from:
A: Don't forget the colours!
For some reason Delphi 7 in Twilight does not render Droid Sans Mono well, but in Visual Studio with an orange on black theme it is excellent. Deja Vu Sans Mono is the best all rounder. I use it almost everywhere. Consolas would be excellent apart from its ugly Q glyph.
One other thing I have found since I entered the world of work is that even though I have great eyesight I like to keep my code font around 12 or 13pt size both to reduce eye strain and to make sure I can't put too much text on screen. It's sort of an incentive to keep code blocks vertically short.
I note that this edit box does not respect my browser's default monospaced font. It's giving me Monaco (I'm on OSX). Monaco is horrible. Its glyphs have poorly angled elements and its capitals are not well proportioned.
Oh, and it almost doesn't matter on Windows because your font will not look right anyway. /me dons flame retardent suit
A: Tahoma is very readable.
If you need it larger then use Verdana.
A: ProFont is a great font for code, Consolas a 2nd runner up. You could always go retro with a little Terminal font for a little nostalgia (customize the background color to black and foreground font to green for the full effect!).
A: In bash and vim I use Lucida Typewriter, but in Kate, Scintilla, Eclipse, and Netbeans I (currently) use Lucida Casual, i.e., a proportional font. Ten years ago I started using proportional fonts in Visual Studio (MS Comic Sans) and it works very well for me. Colored syntax highlighting in said IDEs provides excellent readability and for text-heavy languages like HTML and LaTeX a proportional font is a natural choice.
A: I like consolas too.
A: Consolas, works great for various font sizes, and I can't find anything better.
A: Consolas - recently switched over to it and it's lovely.
A: Consolas and Courier New under Windows, Inconsola under *nix. I really miss the old IBM terminal fonts, though. The one from green/orange terminals.
A: I'd also have to add another vote for Android's "Droid Sans Mono". It's a very crisp, clear coding font.
A: I experimented with Myriad until I realised using a variable-width font was a fool's game.
Courier New here, although I am going to try out Envy after seeing it here.
A: Consolas for Visual Studio. It is the first thing I change when getting a new install setup. The second is inverting the main colors, white text on black background is much easier to stare at for hours in my opinion.
Black text on white background http://college-code.com/stackoverflow/black_on_white.PNG
Versus
White text on black background http://college-code.com/stackoverflow/white_on_black.PNG
The second one tends to make my eyes bleed less after long coding sessions. Could be my code however.
A: Consolas all the way.
A: Lucida Console or Lucida Sans Typewriter, as small as possible so I can maximize the amount of code on the screen. Occasionally Courier or Monaco (e.g. Monaco in TextMate).
A: I'm a happy user of ProFont originally available on the Mac, now available for everyone.
A: If you're like me and only swear by serifs, try Kourier (with a K), a somewhat more compact Courier.
A: It must be noted that the text editor/IDE that you use determines how good a font will look. I love UltraEdit, but the only font it renders properly is Courier New. It blurs just about all other useful monospace fonts. However, Visual Studio does a great job rendering any font accurately.
Currently, I will vote Consolas. Though, I will try some of the others listed in the responses. Thank you. Btw, please post links to download!
A: I'm digging the DejaVu Sans Mono (it's supposed to be the same as Panic Sans) on my Mac.
A: +1 Verdana -- agree with pauldoo
A variable width font for coding is probably not to everyone's taste but I really like Verdana's legibility with ClearType.
A: I have been using Proggy Clean TT with Visual Studio for a couple of years now. I like the ability to choose a slashed-zero font so when management decides to program instead of manage they don't confuse O1O1 (capital O) with 0101 (zeros).
http://www.proggyfonts.com/
A: Consolas unless I'm running over a slow RDP connection with font smoothing turned off, then Lucida Console.
A: It's already been said a few times, but http://www.proggyfonts.com/ is just awesome. I'm a big fan of Proggy Clean Slashed Zero Bold Punctuation. I do most of my work in C#, so the bold punctuation is very nice for it.
A: I like ProFont TT >tweaked< It's clean and there is a clear difference between 1, l and I and 0 and O. It works best at 9pt. It doesn't scale up very well.
A: Verdana.
Easy to read, and, very important, easy to distinguish similar characters like O and 0, ( and {, 1 and I and l etc.
A: Consolas for me. These were specially developed for LCDs + the MS hinting engine. Also you might find the ClearType Tuner (MS PowerToy) a great addition as it gives you more control over how your fonts look.
A: For VS nothing beats Fixedsys.
A: I prefer Profont.
A: I actually bought The Sans Mono Condensed, which is (was) the goto code font in O'Reilly titles. It's by the same guy as did Consolas for Microsoft (but Consolas wasn't available when I bought it).
It's a really nice, tight, clear face - works really well on slides if you're doing that sort of thing as well.
A: I'd never heard of Droid Sans Mono before, but I installed it and tried it at 9 points, and I must say it's by far the highest quality mono font I've seen on Linux.
On my Mac it's Panic Sans all the way, using it at 11 or 12 points allow anti-aliasing that actually works on monospace, which I've never seen before.
A: Monaco 10pt for me.
A: I've been hanging on to this link for more than a year, it's an article entitled "Five great programming fonts". The five are good fonts, but the article includes comments with a dozen more interesting answers.
http://forums.programming-designs.com/viewtopic.php?pid=3338
A: I recommend Lucida Console for Windows users and Adobe Courier for Linux/Unix; with a size of 10pt these fonts look great and are very legible.
Edit:
I've been saying that using Lucida Console was a real good option, well, now I know Consolas :)
A: I use Inconsolata in both Linux and Mac OS X.
A: 6x13. You can get two terminal or editor windows across a 1024x768 and three onto a 1600x1200 screen. A windows version of this font can be found Here.
A: I prefer Consolas as well, and obviously cleartype helps when using other fonts.
A: Lucida Console isn't so good because the bold text takes up more room than the non-bold text. Consolas overcomes this.
A: monaco 12pt, is there any other way?
A: I use Bitstream Vera http://www.gnome.org/fonts/ for Visual Studio 2008 paired with the Darkness Theme because my eyes can't deal with white backgrounds.
A: Verdana - Once I realised that I didn't HAVE to use a mono-spaced font ;-)
A: Consolas
I use it everywhere, I use it for everything.
Advice: stick to it.
A: I've been using Anonymous, but I'll need to check out some of these other fonts.
A: I just use Courier New, or whatever monospace font I have available.
However, I sometimes like using sans-serif (currently Comic Sans MS) for comments in Notepad++. (However, I now tend more to switch everything to monospace just for consistency in spacing and such.)
A: Nobody's mentioned it yet, so let me just mention DejaVu Sans Mono, which is a fork of Vera Sans Mono, and is included in most Linux distribs. It supports most of Unicode.
A: Yet another vote from me for Consolas. I use it since I learned about it from Jeff's blog post. Thanks to you for this advice.
It made me improve an aspect of my daily programming life, which I didn't think about much before.
A: Bitstream Vera Sans Mono. [http://www.dafont.com/bitstream-vera-mono.font]
A: bitstream vera sans mono
A: Any monospace font, really. I honestly don't find it matters too much past that.
A: arial is best
A: I use ForMateKonaVe, which is a merge of Bitstream Vera Sans Mono and a half-width'd Konatsu. I use a lot of Japanese here and there and this is the best way to display it in TextMate.
A: Fixedsys Excelsior 2.00, Raize, and the usuals.
http://kaishaku.org/codefonts/
A: try Lucida Grande..
Amazing!!
A: -2 for Bitstream Vera Sans Mono -- it has a dotted zero - released this font as a free download after a modification.
+2 for Prima Sans Mono -- lacks a dotted zero - need a free download from RapidShare to extend the font to a terminal.
A: I swear by DejaVu Sans Mono
A: Any sans-serif.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "182"
} |
Q: Why should I learn Lisp? I really feel that I should learn Lisp and there are plenty of good resources out there to help me do it.
I'm not put off by the complicated syntax, but where in "traditional commercial programming" would I find places it would make sense to use it instead of a procedural language.
Is there a commercial killer-app out there that's been written in Lisp ?
A: If you like programming you should learn Lisp for the pure joy of it. XKCD perfectly expresses the intellectual enlightenment that ensues. Learning Lisp is for the programmer what meditation is for the Buddhist monk (and I meant this without any blasphemous connotation).
A: Any language looks a lot harder when one doesn't use the common indentation conventions of a language. When one follows those of Lisp, one sees how it expresses a syntax-tree structure quite readily (note, this isn't quite right because the preview lies a little; the r's should align with the fns in the recursive quicksort argument):
(defun quicksort (lis)
(if (null lis)
nil
(let* ((x (car lis))
(r (cdr lis))
(fn (lambda (a)
(< a x))))
(append (quicksort (remove-if-not fn
r))
(list x)
(quicksort (remove-if fn
r))))))
A: One of the main uses for Lisp is in Artificial Intelligence. A friend of mine at college took a graduate AI course and for his main project he wrote a "Lights Out" solver in Lisp. Multiple versions of his program utilized slightly different AI routines and testing on 40 or so computers yielded some pretty neat results (I wish it was online somewhere for me to link to, but I don't think it is).
Two semesters ago I used Scheme (a language based on Lisp) to write an interactive program that simulated Abbott and Costello's "Who's on First" routine. Input from the user was matched against some pretty complicated data structures (resembling maps in other languages, but much more flexible) to choose what an appropriate response would be. I also wrote a routine to solve a 3x3 slide puzzle (an algorithm which could easily be extended to larger slide puzzles).
In summary, learning Lisp (or Scheme) may not yield many practical applications beyond AI but it is an extremely valuable learning experience, as many others have stated. Programming in a functional language like Lisp will also help you think recursively (if you've had trouble with recursion in other languages, this could be a great help).
A: In response to @lassevk:
A: I found that learning a new language, always influences your programming style in languages you already know. For me it always made me think in different ways to solve a problem in my primary language, which is Java. I think in general, it just widens your horizon in term of programming.
A: I took a "lisp class" in college back in the eighties. Despite grokking all the concepts presented in the class, I was left without any appreciation for what makes lisp great. I'm afraid that a lot of people look at lisp as just another programming language, which is what that course in college did for me so many years ago. If you see someone complaining about lisp syntax (or lack thereof), there's a good chance that they're one of those people who has failed to grasp lisp's greatness. I was one of those people for a very long time.
It wasn't until two decades later, when I rekindled my interest in lisp, that I began to "get" what makes lisp interesting--for me anyway. If you manage to learn lisp without having your mind blown by closures and lisp macros, you've probably missed the point.
A: Learning LISP/Scheme may not give you any increased application space, but it will help you get a better sense of functional programming, its rules, and its exceptions.
It's worth the time investment just to learn the difference in the beauty of six nested pure functions, and the nightmare of six nested functions with side effects.
A: From http://www.gigamonkeys.com/book/introduction-why-lisp.html
One of the most commonly repeated
myths about Lisp is that it's "dead."
While it's true that Common Lisp isn't
as widely used as, say, Visual Basic
or Java, it seems strange to describe
a language that continues to be used
for new development and that continues
to attract new users as "dead." Some
recent Lisp success stories include
Paul Graham's Viaweb, which became
Yahoo Store when Yahoo bought his
company; ITA Software's airfare
pricing and shopping system, QPX, used
by the online ticket seller Orbitz and
others; Naughty Dog's game for the
PlayStation 2, Jak and Daxter, which
is largely written in a
domain-specific Lisp dialect Naughty
Dog invented called GOAL, whose
compiler is itself written in Common
Lisp; and the Roomba, the autonomous
robotic vacuum cleaner, whose software
is written in L, a downwardly
compatible subset of Common Lisp.
Perhaps even more telling is the
growth of the Common-Lisp.net Web
site, which hosts open-source Common
Lisp projects, and the number of local
Lisp user groups that have sprung up
in the past couple of years.
A: complicated syntax??
The syntax for lisp is incredibly simple.
Killer app written in lisp: emacs. Lisp will allow you to extend emacs at will to do almost anything you can think of that an editor might do.
But, you should only learn lisp if you want to, and you may never get to use at work ever, but it is still awesome.
Also, I want to add: even if you find places where lisp will make sense, you will probably not convince anyone else that it should be used over java, c++, c#, python, ruby, etc.
A: If you have to ask yourself if you should learn lisp, you probably don't need to.
A: Learning lisp will put Javascript in a completely different light! Lisp really forces you to grasp both recursion and the whole "functions as first class objects"-paradigm. See Crockford's excellent article on Scheme vs Javascript. Javascript is perhaps the most important language around today, so understanding it better is immensely useful!
A: "Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot."
--Eric S. Raymond, "How to Become a Hacker"
http://www.paulgraham.com/avg.html
A: I agree that Lisp is one of those languages that you may never use in a commercial setting. But even if you don't get to, learning it will definitely expand your understanding of programming as a whole. For example, I learned Prolog in college and while I never used it after, I gave me a greater understanding of many programming concepts and (at times) a greater appreciation for the languages I do use.
But if you are going to learn it...by all means, read On Lisp
A: Complicated syntax? The beauty of lisp is that it has a ridiculously simple syntax. It's just a list, where each element of the list can be either another list or an elementary data type.
It's worth learning because of the way it enhances your coding ability to think about and use functions as just another data type. This will improve upon the way you code in an imperative and/or object-oriented language because it will allow you to be more mentally flexible with how your code is structured.
A: Okay, I might be weird but I really don't like Paul Graham's essays that much & on Lisp is a really rough going book if you don't have some grasp of Common Lisp already. Instead, I'd say go for Siebel's Practical Common Lisp. As for "killer-apps", Common Lisp seems to find its place in niche shops, like ITA, so while there isn't an app synonymous with CL the way Rails is for Ruby there are places in industry that use it if you do a little digging.
A: Gimp's Script-Fu is Lisp-ish. That's a Photoshop-killer app.
A: I can't answer from first-hand experience but you should read what Paul Graham wrote on Lisp. As for the "killer-app" part, read Beating the averages.
A: To add to the other answers:
Because the SICP course (the videos are available here) is awesome: teaches you Lisp and a lot more!
A: Killer app? Franz Inc. has a long list of success stories, but this list only includes users of AllegroCL... There are probably others. My favourite is the story about Naughty Dog, since I was a big fan of the Crash Bandicoot games.
For learning Common Lisp, I'd recommend Practical Common Lisp. It has a hands-on approach that at least for me made it easier than other books I've looked at.
A: You could use Clojure today to write tests and scripts on top of the Java VM. While there are other Lisp languages implemented on the JVM, I think Clojure does the best job of integrating with Java.
There are times when the Java language itself gets in the way of writing tests for Java code (including "traditional commercial programming"). (I don't mean that as an indictment of Java -- other languages suffer from the same problem -- but it's a fact. Since the topic is Lisp, not Java, I won't elaborate. Please feel free to start a new topic if someone wants to discuss it.) Clojure eliminates many of those hindrances.
A: Lisp can be used anywhere you use traditional programming. It's not that different, it's just more powerful. Writing a web app? you can do it on Lisp, writing a desktop application? you can do it on Lisp, whatever, you can probably do it on Lisp, or Python, or any other generic programming (there are a few languages that are suited for only one task).
The biggest obstacle will probably be acceptance of your boss, your peers or your customers. That's something you will have to work with them. Choosing a pragmatic solution like Clojure that can leverage the current install base of Java infrastructure, from the JVM to the libraries, might help you. Also, if you have a Java program, you may do a plug-in architecture and write Clojure plug-ins for it and end up writing half your code in Clojure.
A: I programmed in Lisp professionally for about a year, and it is definitely worth learning. You will have unparalleled opportunity to remove redundancy from your code, by being able to replace all boilerplate code with functions where possible, and macros where not. You will also be able to access unparalleled flexibility at runtime, translating freely between code and data. Thus, situations where user actions can trigger the need to build complex structures dynamically is where Lisp truly shines. Popular airline flight schedulers are written in Lisp, and there is also a lot of CAD/CAM in Lisp.
A: Lisp is a large and complex language with a large and complex runtime to support it. For that reason, Lisp is best suited to large and complicated problems.
Now, a complex problem isn't the same as a complicated one. A complex problem is one with a lot of small details, but which isn't hard. Writing an airline booking system is a complex business, but with enough money and programmers it isn't hard. Get the difference?
A complicated problem is one which is convoluted, one where traditional divide and conquer doesn't work. Controlling a robot, or working with data that isn't tabular (languages, for example), or highly dynamic situations.
Lisp is really well suited to problems where the solution must be expandable; the classic example is the emacs text editor. It is fully programmable, and thus a programming environment in its own right.
In his famous book PAIP, Norvig says that Lisp is ideal for exploratory programming. That is, programming a solution to a problem that isn't fully understood (as opposed to an on-line booking system). In other words: Complicated problems.
Furthermore, learning Lisp will remind you of something fundamental that has been forgotten: The difference between Von Neumann and Turing. As we know, Turing's model of computation is an interesting theoretical model, but useless as a model for designing computers. Von Neumann, on the other hand, designed a model of how computers and computation were to execute: The Von Neumann model.
Central to the Von Neumann model is that you have but one memory, and store both your code and your data there. Notice carefully that a Java program (or C#, or whatever you like) is a manifestation of the Turing model. You set your program in concrete, once and for all. Then you hope you can deal with all data that gets thrown on it.
Lisp maintains the Von Neuman model; there is no sharp, pre-determined border between code and data. Programming in Lisp opens your mind to the power of the Von Neumann model. Programming in Lisp makes you see old concepts in a new light.
Finally, being interactive, you'll learn to interact with your programs as you develop them (as opposed to compile and run). This also changes the way you program, and the way you view programming.
With this intro I can finally offer a reply to your question: Will you find places where it outshines "traditional" languages?
If you are an advanced programmer, you need advanced tools. And there is no tool more advanced than Lisp.
Or, in other words: The answer is yes if your problems are hard. No otherwise.
A: Lisp is very useful for creating little DSLs. I've got a copy of Lisp in a Box running at work and I've written little DSLs to interrogate SQL Server databases and generate data layers etc in C#. All my boilerplate code is now written in lisp macros that output to C#. I generate HTML, XML, all sorts of things with it. While I wish I could use Lisp for everyday coding, Lisp can bring practical benefits.
A: This is a topic I myself have pondered for a while but I have not really come to a decision; as usual, time is the main problem... ;)
And since I can't find these links so far in this post, I add them for public interest:
Success and Failure story:
Lisping at JPL
Really impressive success story:
Lisp in use at the Orbitz corporation
Comparison and analysis of whether to use Lisp instead of Java:
Lisp as an Alternative to Java
A: Not a reason but (trivial) AutoCAD has LISP & DCL runtime support. It is a convenient way to write complex macros (including ActiveX automation) if you don't want to use VBA or their C++ or .NET SDKs, or if a DIESEL expression doesn't cut it.
A lot of AutoCAD's functions are actually LISP routines.
A: Syntax is irrelevant, readability is not!
A: Not saying this is a killer app but it looks like it could be cool
http://code.google.com/p/plop/
A: Killer app? The flight search engine by ITA Software is one.
As for "why", it will most probably make you a better developer and is extremnely unlikely to make you a worse one. It may, however, make you prefer lisp dialects to other languages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "130"
} |
Q: Learning Regular Expressions I don't really understand regular expressions. Can you explain them to me in an easy-to-follow manner? If there are any online tools or books, could you also link to them?
A: The most important part is the concepts. Once you understand how the building blocks work, differences in syntax amount to little more than mild dialects. A layer on top of your regular expression engine's syntax is the syntax of the programming language you're using. Languages such as Perl remove most of this complication, but you'll have to keep in mind other considerations if you're using regular expressions in a C program.
If you think of regular expressions as building blocks that you can mix and match as you please, it helps you learn how to write and debug your own patterns but also how to understand patterns written by others.
Start simple
Conceptually, the simplest regular expressions are literal characters. The pattern N matches the character 'N'.
Regular expressions next to each other match sequences. For example, the pattern Nick matches the sequence 'N' followed by 'i' followed by 'c' followed by 'k'.
If you've ever used grep on Unix—even if only to search for ordinary looking strings—you've already been using regular expressions! (The re in grep refers to regular expressions.)
Order from the menu
Adding just a little complexity, you can match either 'Nick' or 'nick' with the pattern [Nn]ick. The part in square brackets is a character class, which means it matches exactly one of the enclosed characters. You can also use ranges in character classes, so [a-c] matches either 'a' or 'b' or 'c'.
The pattern . is special: rather than matching a literal dot only, it matches any character†. It's the same conceptually as the really big character class [-.?+%$A-Za-z0-9...].
Think of character classes as menus: pick just one.
Helpful shortcuts
Using . can save you lots of typing, and there are other shortcuts for common patterns. Say you want to match a digit: one way to write that is [0-9]. Digits are a frequent match target, so you could instead use the shortcut \d. Others are \s (whitespace) and \w (word characters: alphanumerics or underscore).
The uppercased variants are their complements, so \S matches any non-whitespace character, for example.
Once is not enough
From there, you can repeat parts of your pattern with quantifiers. For example, the pattern ab?c matches 'abc' or 'ac' because the ? quantifier makes the subpattern it modifies optional. Other quantifiers are
*
** (zero or more times)
*+ (one or more times)
*{n} (exactly n times)
*{n,} (at least n times)
*{n,m} (at least n times but no more than m times)
Putting some of these blocks together, the pattern [Nn]*ick matches all of
*
*ick
*Nick
*nick
*Nnick
*nNick
*nnick
*(and so on)
The first match demonstrates an important lesson: * always succeeds! Any pattern can match zero times.
A few other useful examples:
*
*[0-9]+ (and its equivalent \d+) matches any non-negative integer
*\d{4}-\d{2}-\d{2} matches dates formatted like 2019-01-01
Grouping
A quantifier modifies the pattern to its immediate left. You might expect 0abc+0 to match '0abc0', '0abcabc0', and so forth, but the pattern immediately to the left of the plus quantifier is c. This means 0abc+0 matches '0abc0', '0abcc0', '0abccc0', and so on.
To match one or more sequences of 'abc' with zeros on the ends, use 0(abc)+0. The parentheses denote a subpattern that can be quantified as a unit. It's also common for regular expression engines to save or "capture" the portion of the input text that matches a parenthesized group. Extracting bits this way is much more flexible and less error-prone than counting indices and substr.
Alternation
Earlier, we saw one way to match either 'Nick' or 'nick'. Another is with alternation as in Nick|nick. Remember that alternation includes everything to its left and everything to its right. Use grouping parentheses to limit the scope of |, e.g., (Nick|nick).
For another example, you could equivalently write [a-c] as a|b|c, but this is likely to be suboptimal because many implementations assume alternatives will have lengths greater than 1.
Escaping
Although some characters match themselves, others have special meanings. The pattern \d+ doesn't match backslash followed by lowercase D followed by a plus sign: to get that, we'd use \\d\+. A backslash removes the special meaning from the following character.
Greediness
Regular expression quantifiers are greedy. This means they match as much text as they possibly can while allowing the entire pattern to match successfully.
For example, say the input is
"Hello," she said, "How are you?"
You might expect ".+" to match only 'Hello,' and will then be surprised when you see that it matched from 'Hello' all the way through 'you?'.
To switch from greedy to what you might think of as cautious, add an extra ? to the quantifier. Now you understand how \((.+?)\), the example from your question works. It matches the sequence of a literal left-parenthesis, followed by one or more characters, and terminated by a right-parenthesis.
If your input is '(123) (456)', then the first capture will be '123'. Non-greedy quantifiers want to allow the rest of the pattern to start matching as soon as possible.
(As to your confusion, I don't know of any regular-expression dialect where ((.+?)) would do the same thing. I suspect something got lost in transmission somewhere along the way.)
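If it helps to see the difference concretely, here is a minimal C# sketch of the same idea (purely illustrative; any .NET-flavoured regex engine behaves the same way, and the input string is just the example above):
using System;
using System.Text.RegularExpressions;
class GreedyVsLazy
{
    static void Main()
    {
        string input = "(123) (456)";
        // Greedy: .+ grabs as much as it can, so the capture spans both groups of digits.
        Console.WriteLine(Regex.Match(input, @"\((.+)\)").Groups[1].Value);   // prints: 123) (456
        // Lazy: .+? stops as soon as the rest of the pattern can match.
        Console.WriteLine(Regex.Match(input, @"\((.+?)\)").Groups[1].Value);  // prints: 123
    }
}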
Anchors
Use the special pattern ^ to match only at the beginning of your input and $ to match only at the end. Making "bookends" with your patterns where you say, "I know what's at the front and back, but give me everything between" is a useful technique.
Say you want to match comments of the form
-- This is a comment --
you'd write ^--\s+(.+)\s+--$.
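As a rough C# illustration of that anchored pattern (a sketch only, using the exact regex above):
using System;
using System.Text.RegularExpressions;
class CommentExtractor
{
    static void Main()
    {
        // ^ and $ pin the match to the whole string; group 1 captures the comment body.
        var m = Regex.Match("-- This is a comment --", @"^--\s+(.+)\s+--$");
        if (m.Success)
            Console.WriteLine(m.Groups[1].Value);   // prints: This is a comment
    }
}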
Build your own
Regular expressions are recursive, so now that you understand these basic rules, you can combine them however you like.
Tools for writing and debugging regexes:
*
*RegExr (for JavaScript)
*Perl: YAPE: Regex Explain
*Regex Coach (engine backed by CL-PPCRE)
*RegexPal (for JavaScript)
*Regular Expressions Online Tester
*Regex Buddy
*Regex 101 (for PCRE, JavaScript, Python, Golang, Java 8)
*I Hate Regex
*Visual RegExp
*Expresso (for .NET)
*Rubular (for Ruby)
*Regular Expression Library (Predefined Regexes for common scenarios)
*Txt2RE
*Regex Tester (for JavaScript)
*Regex Storm (for .NET)
*Debuggex (visual regex tester and helper)
Books
*
*Mastering Regular Expressions, the 2nd Edition, and the 3rd edition.
*Regular Expressions Cheat Sheet
*Regex Cookbook
*Teach Yourself Regular Expressions
Free resources
*
*RegexOne - Learn with simple, interactive exercises.
*Regular Expressions - Everything you should know (PDF Series)
*Regex Syntax Summary
*How Regexes Work
*JavaScript Regular Expressions
Footnote
†: The statement above that . matches any character is a simplification for pedagogical purposes that is not strictly true. Dot matches any character except newline, "\n", but in practice you rarely expect a pattern such as .+ to cross a newline boundary. Perl regexes have a /s switch and Java Pattern.DOTALL, for example, to make . match any character at all. For languages that don't have such a feature, you can use something like [\s\S] to match "any whitespace or any non-whitespace", in other words anything.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "166"
} |
Q: Using ConfigurationManager to load config from an arbitrary location I'm developing a data access component that will be used in a website that contains a mix of classic ASP and ASP.NET pages, and need a good way to manage its configuration settings.
I'd like to use a custom ConfigurationSection, and for the ASP.NET pages this works great. But when the component is called via COM interop from a classic ASP page, the component isn't running in the context of an ASP.NET request and therefore has no knowledge of web.config.
Is there a way to tell the ConfigurationManager to just load the configuration from an arbitrary path (e.g. ..\web.config if my assembly is in the /bin folder)? If there is then I'm thinking my component can fall back to that if the default ConfigurationManager.GetSection returns null for my custom section.
Any other approaches to this would be welcome!
A: Another solution is to override the default environment configuration file path.
I find it the best solution for loading a configuration file from a non-trivial path, and specifically the best way to attach a configuration file to a DLL.
AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE", <Full_Path_To_The_Configuration_File>);
Example:
AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE", @"C:\Shared\app.config");
More details may be found at this blog.
Additionally, this other answer has an excellent solution, complete with code to refresh
the app config and an IDisposable object to reset it back to its original state. With this
solution, you can keep the temporary app config scoped:
using(AppConfig.Change(tempFileName))
{
// tempFileName is used for the app config during this context
}
A: Ishmaeel's answer generally does work, however I found one issue, which is that using OpenMappedMachineConfiguration seems to lose your inherited section groups from machine.config. This means that you can access your own custom sections (which is all the OP wanted), but not the normal system sections. For example, this code will not work:
ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath);
Configuration configuration = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
MailSettingsSectionGroup thisMail = configuration.GetSectionGroup("system.net/mailSettings") as MailSettingsSectionGroup; // returns null
Basically, if you put a watch on the configuration.SectionGroups, you'll see that system.net is not registered as a SectionGroup, so it's pretty much inaccessible via the normal channels.
There are two ways I found to work around this. The first, which I don't like, is to re-implement the system section groups by copying them from machine.config into your own web.config e.g.
<sectionGroup name="system.net" type="System.Net.Configuration.NetSectionGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
<sectionGroup name="mailSettings" type="System.Net.Configuration.MailSettingsSectionGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
<section name="smtp" type="System.Net.Configuration.SmtpSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</sectionGroup>
</sectionGroup>
I'm not sure the web application itself will run correctly after that, but you can access the sectionGroups correctly.
The second solution it is instead to open your web.config as an EXE configuration, which is probably closer to its intended function anyway:
ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap() { ExeConfigFilename = strConfigPath };
Configuration configuration = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
MailSettingsSectionGroup thisMail = configuration.GetSectionGroup("system.net/mailSettings") as MailSettingsSectionGroup; // returns valid object!
I daresay none of the answers provided here, neither mine or Ishmaeel's, are quite using these functions how the .NET designers intended. But, this seems to work for me.
A: I provided the configuration values to a Word-hosted .NET component as follows.
A .NET class library component is called/hosted in MS Word. To provide configuration values to my component, I created winword.exe.config in the C:\Program Files\Microsoft Office\OFFICE11 folder. You should be able to read configuration values like you do in traditional .NET.
string sMsg = System.Configuration.ConfigurationManager.AppSettings["WSURL"];
A: Try this:
System.Configuration.ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath); //Path to your config file
System.Configuration.Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
A: The accepted answer is wrong!!
It throws the following exception on accessing the AppSettings property:
Unable to cast object of type 'System.Configuration.DefaultSection' to type 'System.Configuration.AppSettingsSection'.
Here is the correct solution:
System.Configuration.ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
fileMap.ExeConfigFilename = "YourFilePath";
System.Configuration.Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
A: In addition to Ishmaeel's answer, the method OpenMappedMachineConfiguration() will always return a Configuration object. So to check whether it loaded, you should check the HasFile property, where true means it came from a file.
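In other words, something along these lines (a minimal sketch only, reusing the strConfigPath variable from the snippets above):
ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath);
Configuration configuration = ConfigurationManager.OpenMappedMachineConfiguration(fileMap);
if (!configuration.HasFile)
{
    // Nothing was actually loaded from strConfigPath; fall back or report the problem here.
}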
A: For ASP.NET use WebConfigurationManager:
var config = WebConfigurationManager.OpenWebConfiguration("~/Sites/" + requestDomain + "/");
(..)
config.AppSettings.Settings["xxxx"].Value;
A: This should do the trick :
AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE", "newAppConfig.config");
Source : https://www.codeproject.com/Articles/616065/Why-Where-and-How-of-NET-Configuration-Files
A: Use XML processing:
var appPath = AppDomain.CurrentDomain.BaseDirectory;
var configPath = Path.Combine(appPath, baseFileName);;
var root = XElement.Load(configPath);
// can call root.Elements(...)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: MOSS SSP Issue - Failed database logons from deleted SSP We've been having some issues with a SharePoint instance in a test
environment. Thankfully this is not production ;) The problems started
when the disk with the SQL Server databases and search index ran out
of space. Following this, the search service would not run and search
settings in the SSP were not accessible. Reclaiming the disk space did
not resolve the issue. So rather than restoring the VM, we decided to
try to fix the issue.
We created a new SSP and changed the association of all services to
the new SSP. The old SSP and it's databases were then deleted. Search
results for PDF files are no longer appearing, but the search works
fine otherwise. MySites also works OK.
Following the implementation of this change, these problems occur:
1) An audit failure message started appearing in the application event log, for 'DOMAIN\SPMOSSSvc' which is the MOSS farm account.
Event Type: Failure Audit
Event Source: MSSQLSERVER
Event Category: (4)
Event ID: 18456
Date: 8/5/2008
Time: 3:55:19 PM
User: DOMAIN\SPMOSSSvc
Computer: dastest01
Description:
Login failed for user 'DOMAIN\SPMOSSSvc'. [CLIENT: <local machine>]
2) SQL Server profiler is showing queries from SharePoint that reference the old
(deleted) SSP database.
So...
*
*Where would these references to DOMAIN\SPMOSSSvc and the old SSP
database exist?
*Is there a way to 'completely' remove the SSP from the server, and
re-create? The option to delete was not available (greyed out) when a
single SSP is in place.
A: As Daniel McPherson said, this is caused when SSPs are deleted but the associated
jobs are not, and those jobs attempt to communicate with the deleted database. If the SSP
database has been deleted or a problem occurred when deleting an SSP, the job may
not be deleted. When the job attempts to run, it will fail since the database no
longer exists.
Follow the steps Daniel mentioned:
1. Go to SQL Server Management Studio
2. Disable the job called SSPNAME_JobDeleteExpiredSessions, right click and choose Disable Job.
A: I suspect these are related to the SQL Server Agent trying to login to a database that no longer exists.
To clear it up you need to:
1. Go to SQL Server Management Studio
2. Disable the job called <database name>_job_deleteExpiredSessions
If that works, then you should be all clear to delete it.
A: Have you tried removing the SSP using the command line? I found this worked once when we had a broken SSP and just wanted to get rid of it.
The command is:
stsadm.exe -o deletessp -title <sspname> [-deletedatabases]
The deletedatabases switch is optional.
Also, check in Central Administration under Job Definitions and Job Schedules to ensure no SSP related jobs are still running
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How much database performance overhead when using LINQ? How much database performance overhead is involved with using C# and LINQ compared to custom optimized queries loaded with mostly low-level C, both with a SQL Server 2008 backend?
I'm specifically thinking here of a case where you have a fairly data-intensive program and will be doing a data refresh or update at least once per screen and will have 50-100 simultaneous users.
A: Thanks Stu. Bottom line seems to be that LINQ to SQL probably doesn't have a significant database performance overhead with the newer versions if you are able to use a compiled select, and the slower functions of updating are likely to be faster unless you have a REALLY sharp expert doing most of the coding.
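For reference, a compiled LINQ to SQL query looks roughly like this (an illustrative sketch only; the MyDataContext, Customer and connectionString names below are made up for the example):
using System;
using System.Data.Linq;
using System.Linq;
// Compile the query once and reuse it; this skips the expression-tree-to-SQL
// translation on every call, which is where much of the LINQ overhead lives.
static readonly Func<MyDataContext, string, IQueryable<Customer>> CustomersByCity =
    CompiledQuery.Compile((MyDataContext db, string city) =>
        db.Customers.Where(c => c.City == city));
// Usage:
// using (var db = new MyDataContext(connectionString))
// {
//     foreach (var c in CustomersByCity(db, "London")) { /* ... */ }
// }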
A: In my experience the overhead is minimal, provided that the person writing the queries knows what he/she is doing, and take the usual precautions to ensure the generated queries are optimal, that the necessary indexes are in place etc etc. In other words, the database impact should be the same; there is a minimal but usually negligible overhead on the app side.
That said... there is one exception to this; if a single query generates multiple aggregates the L2S provider translates it to a large query with one sub-query per aggregate. For a large table this can have a significant I/O impact as the db I/O cost for the query grows by magnitudes for each new aggregate in the query.
The workaround for that is of course to move the aggregates to stored proc or view. Matt Warren has some sample code for an alternative query provider that translate that kind of queries in a more efficient way.
Resources:
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=334211
http://blogs.msdn.com/mattwar/archive/2008/07/08/linq-building-an-iqueryable-provider-part-x.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: What is called a Node in a WebSphere Network Deployment In an installation of WebSphere Application Server with Network Deployment, a node is:
*
*a physical machine
*an instance of an operating system
*a logical set of WAS instances that is independent of physical machine or OS instance
A: Basically,
A server is a runtime environment, a process of execution.
A node is a grouping of servers that share common configuration. It is a physical machine.
A cell is a grouping of nodes into a sigle administrative domain. For websphere, it mean that if you group several servers within a cell, then you can administer them with one Websphere admin console
Hope this helps!
A: @ggasp Here is what I got off IBM's Information Center
A node is a logical grouping of managed servers.
A node usually corresponds to a logical or physical computer system with a distinct IP host address. Nodes cannot span multiple computers.
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/cagt_node.html
A: Keep in mind that usually <> always.
Since WAS 6.0 and up, you usually want to set up more than one node on each physical computer; given the usual power of the servers, you use nodes to separate logical business entities.
For example, with 6 nodes, 3 on each of 2 machines, and pairing the nodes across machines, you could define 3 different clusters, one for each stage (dev, qa, staging), making each cluster invisible to the others.
A: A Cell is a virtual unit that is built of a Deployment Manager and one or more Nodes. A Node is another virtual unit that is built of a Node Agent and one or more Server instances.
Here you can find more details including a diagram.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Flex / Air obfuscation I've written (most of) an application in Flex and I am concerned with protecting the source code. I fired up a demo of Trillix swf decompiler and opened up the swf file that was installed to my Program Files directory. I saw that all of the actionscript packages I wrote were there. I'm not too concerned with the packages, even though there is a substantial amount of code, because it still seems pretty unusable without the mxml files. I think they are converted to actionscript, or at least I hope. However, I would still like to explore obfuscation.
Does anyone have any experience with Flash / Actionscript 3 / Flex obfuscators? Can you recommend a good product?
A: Well, in my opinion, the easiest and safest solution is a mix of maclema and Borek answer:
Obfuscating code can be a big headache if you did not include it in your process from the start and if your application is quite big: it's likely that obfuscation will corrupt your application if you used remote packages (and did not declare this to the obfuscator), or if you used too many untyped variables in Objects or dynamic classes....
So: if you do maclema's solution on your big application and use obfuscation on your wrapper (which is a small app likely to be very easy to obfuscate) you're code will be the safest and the hasle the least.
Only a very angry pirate would take the time to reverse engineer the obfuscation to then decrypt the package .... Well if someone wants your application code soo bad it's either CIA related or you're already very rich (or both)
thank you all for your answers
A: The procedure suggested by maclema will not really stop any attacker from obtaining the source - the "wrapper application" will need to be unencrypted so the attacker will be able to find out that you use AES (or any other algorithm) and he will obtain the decryption key in a similar way (because it needs to be in plaintext somewhere). Once he has this, he will be able to decrypt your SWF file easily.
The only reliable solution (well...) is some kind of obfuscator - we use Amayeta which works for Flex in the latest version - please see http://www.amayeta.com/software/swfencrypt/ .
A: Here's what I would do.
*
*Compile your application to a SWF file. Then encrypt the SWF using AES.
*Make a "wrapper" application that loads the encrypted SWF into a ByteArray using URLLoader
*Use the as3crypto library to decrypt the swf at runtime.
*Once decrypted, use Loader.loadBytes to load the decrypted swf into the wrapper application.
This will make it a lot harder to get your code. Not impossible, but harder.
For AIR applications you could leave the SWF encrypted when delivering the application to the end-user. Then you could provide a registration key that contains the key used to decrypt the SWF.
Also, here is a link to an AS3 obfuscator. I am not sure how well it works though.
http://www.ambiera.com/irrfuscator/index.html
A: I recently released an iOS and Android game using Flash. I looked around the internet for a good free program to protect the source code in my SWF and couldn't find anything so I wrote one. It's still in development and it's "use at your own risk" but it worked for me.
It's released on github. Check it out and let me know what you think.
https://github.com/Teesquared/flasturbate
I uploaded a windows binary but I recommend you follow the instructions to build it yourself if you want to give it a try.
This obfuscator works directly on the SWF file. It currently only renames symbols but it is built on a framework that could support altering bytecodes in the future.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do you resolve a domain name to an IP address with .NET/C#? How do you resolve a domain name to an IP address with .NET/C#?
A: using System.Net;
foreach (IPAddress address in Dns.GetHostAddresses("www.google.com"))
{
Console.WriteLine(address.ToString());
}
A: Try using the System.Net.Dns class
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: String.indexOf function in C Is there a C library function that will return the index of a character in a string?
So far, all I've found are functions like strstr that will return the found char *, not its location in the original string.
A: You can use strstr to accomplish what you want. Example:
char *a = "Hello World!";
char *b = strstr(a, "World");
int position = b - a;
printf("the offset is %i\n", position);
This produces the result:
the offset is 6
A: strstr returns a pointer to the found character, so you could use pointer arithmetic: (Note: this code not tested for its ability to compile, it's one step away from pseudocode.)
char * source = "test string"; /* assume source address is */
/* 0x10 for example */
char * found = strstr( source, "in" ); /* should return 0x18 */
if (found != NULL) /* strstr returns NULL if item not found */
{
int index = found - source; /* index is 8 */
/* source[8] gets you "i" */
}
A: I think that
size_t strcspn ( const char * str1, const char * str2 );
is what you want. Here is an example pulled from here:
/* strcspn example */
#include <stdio.h>
#include <string.h>
int main ()
{
char str[] = "fcba73";
char keys[] = "1234567890";
int i;
i = strcspn (str,keys);
printf ("The first number in str is at position %d.\n",i+1);
return 0;
}
A: EDIT: strchr is better only for one char.
Pointer arithmetic says "Hellow!":
char *found = strchr (myString, '#');
int pos = found ? found - myString : -1;
Important: strchr () returns NULL if no string is found
A: If you are not totally tied to pure C and can use string.h there is strchr()
See here
A: Write your own :)
Code from a BSD licensed string processing library for C, called zString
https://github.com/fnoyanisi/zString
int zstring_search_chr(char *token,char s){
if (!token || s=='\0')
return 0;
for (;*token; token++)
if (*token == s)
return 1;
return 0;
}
A: You can write
char *s = "bvbrburbhlkvp";
int index = strstr(s, "h") - s;
to find the index of 'h' in the given garble.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: How do I configure eclipse (zend studio 6) to hint and code complete several languages? My dream IDE does full code hints, explains and completes PHP, Javascript, HTML and CSS. I know it exists!
so far, Zend studio 6, under the Eclipse IDE does a great job at hinting PHP, some Javascript and HTML, any way I can expand this?
edit: a bit more information: right now, using zend-6 under eclipse, i type in
<?php
p //(a single letter "p")
and I get a hint tooltip with all the available php functions that begin with "p" (phpinfo(), parse_ini_file(), parse_str(), etc...), each with its own explanation: phpinfo()->"outputs lots of PHP information", the same applies for regular HTML (no explanations however).
However, I get nothing when I do:
<style>
b /* (a single letter "b") */
I'd love it if I could get, from that "b" suggestions for "border", "bottom", etc. The same applies for Javascript.
Any ideas?
A: I think the JavaScript and CSS need to be in separate files for this to work.
Example of CSS autocomplete in Eclipse:
Starting to type border
Then setting thickness
Then choosing the color
Chose red, and it added the ; for me
Works pretty good IMHO.
A: The default CSS and HTML editors for Eclipse are really good. The default javascript editor does an OK job, but it needs a little work.
I just tested this in Eclipse 3.3.2
function test(){
}
te<CTRL+SPACE>
and it completed the method for me as did this:
var test = function(){
};
te<CTRL+SPACE>
Can you expand on what more you wanted it to do?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Drag and Drop to a hosted Browser control I have a WinForms program written on .NET 2 which hosts a webbrowser control and renders asp.net pages from a known server.
I would like to be able to drag, say, a tree node from a treeview in my winforms app into a specific location in the hosted web page and have it trigger a javascript event there.
Currently, I can implement the IDocHostUIHandler interface and getting drag\drop events on the browser control, then call Navigate("javascript:fire_event(...)") on the control to execute a script on the page. However, I want this to work only when I drop data on a specific part of the page.
One solution, I suppose, would be to bite the bullet and write a custom browser plugin in the form of an activex control, embed that in the location I want to drop to and let that implement the needed drag\drop interfaces.
Would that work?
Is there a cleaner approach? Can I take advantage of the fact that the browser control is hosted in my app and provide some further level of interaction?
A: Take a look at the BrowserPlus project at Yahoo.
It looks like they have built a toolkit so that you don't have to do the gritty work of writing the browser plugin yourself.
A: If you can find out the on screen position of the part of the page you are interested in, you could compare this with the position of the mouse when you receive the drop event. I'm not sure how practical this is if you can get the info out of the DOM or whatnot.
As an alternative could you implement the mouse events on the bit of the page using javascript?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: C# and Arrow Keys I am new to C# and am doing some work in an existing application. I have a DirectX viewport that has components in it that I want to be able to position using arrow keys.
Currently I am overriding ProcessCmdKey, catching arrow input, and sending an OnKeyPress event. This works, but I want to be able to use modifiers (ALT+CTRL+SHIFT). As soon as I am holding a modifier and press an arrow, no events are triggered that I am listening to.
Does anyone have any ideas or suggestions on where I should go with this?
A: I upvoted Tokabi's answer, but for comparing keys there is some additional advice on StackOverflow.com here. Here are some functions which I used to help simplify everything.
public Keys UnmodifiedKey(Keys key)
{
return key & Keys.KeyCode;
}
public bool KeyPressed(Keys key, Keys test)
{
return UnmodifiedKey(key) == test;
}
public bool ModifierKeyPressed(Keys key, Keys test)
{
return (key & test) == test;
}
public bool ControlPressed(Keys key)
{
return ModifierKeyPressed(key, Keys.Control);
}
public bool AltPressed(Keys key)
{
return ModifierKeyPressed(key, Keys.Alt);
}
public bool ShiftPressed(Keys key)
{
return ModifierKeyPressed(key, Keys.Shift);
}
protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
{
if (KeyPressed(keyData, Keys.Left) && AltPressed(keyData))
{
int n = code.Text.IndexOfPrev('<', code.SelectionStart);
if (n < 0) return false;
if (ShiftPressed(keyData))
{
code.ExpandSelectionLeftTo(n);
}
else
{
code.SelectionStart = n;
code.SelectionLength = 0;
}
return true;
}
else if (KeyPressed(keyData, Keys.Right) && AltPressed(keyData))
{
if (ShiftPressed(keyData))
{
int n = code.Text.IndexOf('>', code.SelectionEnd() + 1);
if (n < 0) return false;
code.ExpandSelectionRightTo(n + 1);
}
else
{
int n = code.Text.IndexOf('<', code.SelectionStart + 1);
if (n < 0) return false;
code.SelectionStart = n;
code.SelectionLength = 0;
}
return true;
}
return base.ProcessCmdKey(ref msg, keyData);
}
A: Within your overridden ProcessCmdKey how are you determining which key has been pressed?
The value of keyData (the second parameter) will change dependant on the key pressed and any modifier keys, so, for example, pressing the left arrow will return code 37, shift-left will return 65573, ctrl-left 131109 and alt-left 262181.
You can extract the modifiers and the key pressed by ANDing with appropriate enum values:
protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
{
bool shiftPressed = (keyData & Keys.Shift) != 0;
Keys unmodifiedKey = (keyData & Keys.KeyCode);
// rest of code goes here
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Authoritative source on XML-sig We have a question with regards to XML-sig and need detail about the optional elements as well as some of the canonicalization and transform stuff. We're writing a spec for a very small XML-syntax payload that will go into the metadata of media files and it needs to be cryptographically signed. Rather than re-invent the wheel, we thought we should use the XML-sig spec, but I think most of it is overkill for what we need, and so we'd like to have more information/dialogue with people who know the details.
Specifically, do we need to care about either transforms or canonicalization if the XML is very basic with no tabs for formatting and is specific to our needs?
A: Can you let us know what technology you are using, as there are some interesting bits out there around this stuff and some shortcuts... i.e. WSE2 is a complex beast and something that I don't like getting wrong!
I don't like developers doing this, and there are WSE2 accelerators out there, like SSL accelerators, as the processing of encryption has a huge cost; it's best to take it out of process from the normal code and the development arena.
If this is an option for you - try looking at this - ForumSystems
A: If the option exists to not do an XML signature and instead just to treat the XML as a byte stream and to sign that, do it. It will be easier to implement, easier to understand, more stable (no canonicalization, transform, policy, ...) and faster.
If you absolutely must have XML DSIG (sadly, some of us must), it is certainly possible these days but there are many, many caveats. You need good library support, with Java this is out of the box in JDK 1.6, I am not familiar with other platforms. You must test interoperability with the receiving end of your signed XML, especially if they are potentially on a different platform.
Be sure to read Why XML Security Is Broken, it basically covers all the ground regarding the horror that is XML Canonicalization and gives some pointers to some alternatives.
A: If you need to sign XML in code, check XMLBlackbox which provides canonicalization and all other transformations for you. XMLBlackbox also supports XAdES.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Why is this regular expression faster? I'm writing a Telnet client of sorts in C# and part of what I have to parse are ANSI/VT100 escape sequences, specifically, just those used for colour and formatting (detailed here).
One method I have is one to find all the codes and remove them, so I can render the text without any formatting if needed:
public static string StripStringFormating(string formattedString)
{
if (rTest.IsMatch(formattedString))
return rTest.Replace(formattedString, string.Empty);
else
return formattedString;
}
I'm new to regular expressions and I was suggested to use this:
static Regex rText = new Regex(@"\e\[[\d;]+m", RegexOptions.Compiled);
However, this failed if the escape code was incomplete due to an error on the server. So then this was suggested, but my friend warned it might be slower (this one also matches another condition (z) that I might come across later):
static Regex rTest =
new Regex(@"(\e(\[([\d;]*[mz]?))?)?", RegexOptions.Compiled);
This not only worked, but was in fact faster too, and reduced the impact on my text rendering. Can someone explain to a regexp newbie, why? :)
A: Do you really want to do run the regexp twice? Without having checked (bad me) I would have thought that this would work well:
public static string StripStringFormating(string formattedString)
{
return rTest.Replace(formattedString, string.Empty);
}
If it does, you should see it run ~twice as fast...
A: The reason why #1 is slower is that [\d;]+ is a greedy quantifier. Using +? or *? is going to do lazy quantifying. See MSDN - Quantifiers for more info.
You may want to try:
"(\e\[(\d{1,2};)*?[mz]?)?"
That may be faster for you.
A: Without doing detailed analysis, I'd guess that it's faster because of the question marks. These allow the regular expression to be "lazy," and stop as soon as they have enough to match, rather than checking if the rest of the input matches.
I'm not entirely happy with this answer though, because this mostly applies to question marks after * or +. If I were more familiar with the input, it might make more sense to me.
(Also, for the code formatting, you can select all of your code and press Ctrl+K to have it add the four spaces required.)
A: I'm not sure if this will help with what you are working on, but long ago I wrote a regular expression to parse ANSI graphic files.
(?s)(?:\e\[(?:(\d+);?)*([A-Za-z])(.*?))(?=\e\[|\z)
It will return each code and the text associated with it.
Input string:
<ESC>[1;32mThis is bright green.<ESC>[0m This is the default color.
Results:
[ [1, 32], m, This is bright green.]
[0, m, This is the default color.]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Is a "Confirm Email" input good practice when user changes email address? My organization has a form to allow users to update their email address with us.
It's suggested that we have two input boxes for email: the second as an email confirmation.
I always copy/paste my email address when faced with the confirmation.
I'm assuming most of our users are not so savvy.
Regardless, is this considered a good practice?
I can't stand it personally, but I also realize it probably isn't meant for me.
If someone screws up their email, they can't login, and they must call to sort things out.
A: I agree with you in that it is quite an annoyance to me (I also copy and paste my address into the second input).
That being said, for less savvy users, it is probably a good idea. Watching my mother type is affirmation that many users do not look at the screen when they type (when she's using her laptop she resembles Linus from Peanuts when he's playing the piano). If it's important for you to have the user's correct email address then I would say having a confirmation input is a very good idea (one of these days I'll probably type my email address wrong in the first box and paste it wrong into the second box and then feel like a complete idiot).
A: While the more tech-savvy people tend to copy and paste, non-technical people find it just as annoying to have to type something twice. During a lot of user testing I've done, the less tech-savvy the user, the more annoyed they seem with something like this... They struggle to type as it is, and when they see they have to type their email in again it's usually greeted with a loud sigh.
I would suggest a few things.
*
*Next to the input box, show the format of the information you are looking for, something like (e.g. user@domain.com). The reason this is important is that you would be surprised how many of the less tech-savvy don't really understand the difference between a website and an email address, so let them know visually the format you want.
*Run strong format tests in real time, and visually show the user that the format is good or bad. A green check mark if everything is okay comes to mind.
*Lastly, depending on your system architecture, I often use a library to actually check the domain in the background. I don't necessarily try to run a VRFY on the server - I often use a library to check to make sure the domain they entered has MX records in its DNS.
A: I agree with Justin; while most technical folks will use the copy/paste method, for the less savvy users it is a good practice.
One more thing that I would add is that the second field should have the auto-complete feature disabled. This ensures that there is human input from either method on at least one of the fields.
A: Typing things twice is frustrating and doesn't prevent copy&paste errors or even some typos.
I would use an authenticate/activate schema with a roll back to the old address if the activation is not met within 48 hours or if the email bounces.
A: As long as a field is viewable, you do not need a confirm box. As long as you do some form validation to be sure that it is at least in valid format for an email address let the user manage the rest of the issues.
A: I've seen plenty of people type their email address wrong and I've also looked through user databases full of invalid email addresses.
The way I see it you've got two options. Use a second box to confirm the input, or send an authentication/activation email.
Both are annoyances so you get to choose which you think will annoy your users less.
Most would argue that having to find an email and click on a link is more annoying, but it avoids the copy/paste a bad address issue, and it allows you to do things like delete or roll back users if they don't activate after say 48 hours.
A: I would just use one input box. The "Confirm" input is a remnant from the "Confirm Password" method.
With passwords, this is useful because they are usually typed as little circles. So, you can't just look at it to make sure that you typed it correctly.
With a regular text box, you can visually check your input. So, there is no need for a confirmation input box.
A: I'd say that this is ok but should only be reserved for forms where the email is essential. If you mistype your email for your flight booking then you have severed the two-way link between yourself and the other party and risk not getting the confirmation number, here on StackOverflow it would only mean your Gravatar would not be loaded ...
I'd consider myself fairly techie, but I always fill in both fields without cut-and-paste if I regard it to be important enough.
A: I tend to have it send a verification code to the email address specified (and only ask for it once), and not change the email address until the user has entered the code I sent them.
This has the advantage that if they try to set it to a dozen different addresses in quick succession, you'll know which ones work by which verification code they put in.
Plus, if I am presented with a "confirm email address" box, I just copy and paste from the previous one, and if I'm guilty of that, I'm sure that other less careful users will do the same.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How to keyboard down or up between dropdown "options"? I have a custom built ajax [div] based dynamic dropdown.
I have an [input] box which, onkeyup, runs an Ajax search that returns results in divs, which are drawn back in using innerHTML. These divs all have highlights onmouseover, so a typical successful search yields the following structure (pardon the semi-code):
[input]
[div id=results] //this gets overwritten constantly by my AJAX function
[div id=result1 onmouseover=highlight onclick=input.value=result1]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
[div id=result3 onmouseover=highlight onclick=input.value=result3]
[/div]
It works.
However, I'm missing the important functions behind regular HTML elements. I can't keyboard down or up between "options".
I know javascript handles keyboard events but; I haven't been able to find a good guide. (Of course, the follow-up question will end up being: can I use <ENTER> to trigger that onclick event?)
A: What you need to do is attach event listeners to the div with id="results". You can do this by adding onkeyup, onkeydown, etc. attributes to the div when you create it or you can attach these using JavaScript.
My recommendation would be that you use an AJAX library like YUI, jQuery, Prototype, etc. for two reasons:
*
*It sounds like you are trying to create an Auto Complete control, which is something most AJAX libraries should provide. If you can use an existing component you'll save yourself a lot of time.
*Even if you don't want to use the control provided by a library, all libraries provide event libraries that help to hide the differences between the event APIs provided by different browsers.
The article Forget addEvent, use Yahoo!’s Event Utility provides a good summary of what an event library should provide for you. I'm pretty sure that the event libraries provided by jQuery, Prototype, et al. provide similar features.
If that article goes over your head have a look at this documentation first and then re-read the original article (I found the article made much more sense after I'd used the event library).
A couple of other things:
*
*Using JavaScript gives you much more control than writing onkeyup etc. attributes into your HTML. Unless you want to do something really simple I would use JavaScript.
*If you write your own code to handle keyboard events a good key code reference is really handy.
A: Off the top of my head, I would think that you'd need to maintain some form of a data structure in the JavaScript that reflects the items in the current dropdown list. You'd also need a reference to the currently active/selected item.
Each time keyup or keydown is fired, update the reference to the active/selected item in the data structure. To provide highlighting information on the UI, add or remove a class name that is styled via CSS based on if the item is active/selected or not.
Also, this isn't a biggy, but innerHTML is not really standard (look into createTextNode(), createElement(), and appendChild() for standard ways of creating data). You may also want to see about attaching event handlers in the JavaScript rather than doing so in an HTML attribute.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: At what point should someone decide to switch database systems? When developing, whether it's Web or Desktop, at what point should a developer switch from SQLite, MySQL, MS SQL, etc.?
A: It depends on what you are doing. You might switch if:
*
*You need more scalability or better performance - say from SQLite to SQL Server or Oracle.
*You need access to more specific datatypes.
*You need to support a customer that only runs a particular database.
*You need better DBA tools.
*Your application is using a different platform where your database no longer runs, or its libraries do not run.
*You have the ability/time/budget to actually make the change. Depending on the situation, the migration could be a bigger project than everything in the project up to that point. Migrations like these are great places to introduce inconsistencies, or to lose data, so a lot of care is required.
There are many more reasons for switching and it all depends on your requirements and the attributes of the databases.
A: You should switch databases at milestone 2.3433, 3ps prior to the left branch of dendrite 8,151,215.
You should switch databases when you have a reason to do so, would be my advice. If your existing database is performing to your expectations, supports the load that is being placed on it by your production systems, has the features you require in your applications and you aren't bored with it, why change? However, if you find your application isn't scaling, or you are designing an application that has high load or scalability requirements and your research tells you your current database platform is weak in that area, or, as was already mentioned, you need some spatial analysis or feature that a particular database has, well there you go.
Another consideration might be taking up the use of a database agnostic ORM tool that can allow you to experiment freely with different database platforms with a simple configuration setting. That was the trigger for us to consider trying out something new in the DB department. If our application can handle any DB the ORM can handle, why pay licensing fees on a commercial database when an open source DB works just as well for the levels of performance we require?
The bottom line, though, is that with databases or any other technology, I think there are no "business rules" that will tell you when it is time to switch - your scenario will tell you it is time to switch because something in your solution won't be quite right, and if you aren't at that point, no need to change.
A: BrianLy hit the nail on the head, but I'd also add that you may end up using different databases at different levels of development. It's not uncommon for developers to use SQLite on their workstation when they're coding against their personal development server, and then have the staging and/or production sites using a different database tool.
Of course, if you're using extensions or capabilities specific to a certain database tool (say, PostGIS in PostGreSQL), then obviously that wouldn't work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Should I use the username, or the user's ID to reference authenticated users in ASP.NET So in my simple learning website, I use the built in ASP.NET authentication system.
I am adding now a user table to save stuff like his zip, DOB etc. My question is:
*
*In the new table, should the key be the user name (the string) or the user ID which is that GUID looking number they use in the asp_ tables.
*If the best practice is to use that ugly guid, does anyone know how to get it? it seems to not be accessible as easily as the name (System.Web.HttpContext.Current.User.Identity.Name)
*If you suggest I use neither (not the guid nor the userName fields provided by ASP.NET authentication) then how do I do it with ASP.NET authentication? One option I like is to use the email address of the user as login, but how do I make the ASP.NET authentication system use an email address instead of a user name? (Or is there nothing to do there, and it is just me deciding I "know" userName is actually an email address?)
Please note:
*
*I am not asking on how get a GUID in .NET, I am just referring to the userID column in the asp_ tables as guid.
*The user name is unique in ASP.NET authentication.
A: I would use a userid. If you want to use an user name, you are going to make the "change the username" feature very expensive.
A: I would say use the UserID so Usernames can still be changed without affecting the primary key. I would also set the username column to be unique to stop duplicate usernames.
If you'll mainly be searching on username rather than UserID then make Username a clustered index and set the Primary key to be non clustered. This will give you the fastest access when searching for usernames, if however you will be mainly searching for UserIds then leave this as the clustered index.
Edit : This will also fit better with the current ASP.Net membership tables as they also use the UserID as the primary key.
A: You should use some unique ID, either the GUID you mention or some other auto generated key. However, this number should never be visible to the user.
A huge benefit of this is that all your code can work on the user ID, but the user's name is not really tied to it. Then, the user can change their name (which I've found useful on sites). This is especially useful if you use email address as the user's login... which is very convenient for users (then they don't have to remember 20 IDs in case their common user ID is a popular one).
A: I agree with Palmsey,
Though there seems to be a little error in his code:
Guid UserID = new Guid(Membership.GetUser(User.Identity.Name)).ProviderUserKey.ToString());
should be
Guid UserID = new Guid(Membership.GetUser(User.Identity.Name).ProviderUserKey.ToString());
A: This is old but I just want people who find this to note a few things:
*
*The aspnet membership database IS optimized when it comes to accessing user records. The clustered index seek (optimal) in sql server is used when a record is searched for using loweredusername and applicationid. This makes a lot of sense as we only have the supplied username to go on when the user first sends their credentials.
*The guid userid will give a larger index size than an int but this is not really significant because we often only retrieve 1 record (user) at a time and in terms of fragmentation, the number of reads usually greatly outweighs the number of writes and edits to a users table - people simply don't update that info all that often.
*the regsql script that creates the aspnet membership tables can be edited so that instead of using NEWID as the default for userid, it can use NEWSEQUENTIALID() which delivers better performance (I have profiled this).
*Profile. Someone creating a "new learning website" should not try to reinvent the wheel. One of the websites I have worked on used an out-of-the-box version of the aspnet membership tables (excluding the horrible profile system) and the users table contained nearly 2 million user records. Even with such a high number of records, selects were still fast because, as I said to begin with, the database indexes focus on loweredusername+applicationid to perform a clustered index seek for these records and, generally speaking, if SQL is doing a clustered index seek to find 1 record, you don't have any problems, even with huge numbers of records, provided that you don't add columns to the tables and start pulling back too much data.
*Worrying about a guid in this system, to me, based on actual performance and experience of the system, is premature optimization. If you have an int for your userid but the system performs sub-optimal queries because of your custom index design etc. the system won't scale well. The Microsoft guys did a generally good job with the aspnet membership db and there are many more productive things to focus on than changing userId to int.
A: You should use the UserID.
It's the ProviderUserKey property of MembershipUser.
Guid UserID = new Guid(Membership.GetUser(User.Identity.Name).ProviderUserKey.ToString());
A: I would use an auto incrementing number usually an int.
You want to keep the size of the key as small as possible. This keeps your index small and benefits any foreign keys as well. Additonally you are not tightly coupling the data design to external user data (this holds true for the aspnet GUID as well).
Generally GUIDs don't make good primary keys as they are large and inserts can happen at potentially any data page within the table rather than at the last data page. The main exception to this is if you are running multiple replicated databases. GUIDs are very useful for keys in this scenario, but I am guessing you only have one database so this is not a problem.
A: I would suggest using the username as the primary key in the table if the username is going to be unique, there are a few good reasons to do this:
*
*The primary key will be a clustered index and thus searching for a user's details via their username will be very quick.
*It will stop duplicate usernames from appearing
*You don't have to worry about using two different pieces of information (username or guid)
*It will make writing code much easier because of not having to look up two bits of information.
A: If you're going to be using LinqToSql for development, I would recommend using an Int as a primary key. I've had many issues when I had relationships built off of non-Int fields, even when the nvarchar(x) field had constraints to make it a unique field.
I'm not sure if this is a known bug in LinqToSql or what, but I've had issues with it on a current project and I had to swap out PKs and FKs on several tables.
A: I agree with Mike Stone. I would also suggest only using a GUID in the event you are going to be tracking an enormous amount of data. Otherwise, a simple auto incrementing integer (Id) column will suffice.
If you do need the GUID, .NET is lovely enough that you can get one by a simple...
Dim guidProduct As Guid = Guid.NewGuid()
or
Guid guidProduct = Guid.NewGuid();
A: I'm agreeing with Mike Stone also. My company recently implemented a new user table for outside clients (as opposed to internal users who authenticate through LDAP). For the external users, we chose to store the GUID as the primary key, and store the username as varchar with unique constraints on the username field.
Also, if you are going to store the password field, I highly recommend storing the password as a salted, hashed binary in the database. This way, if someone were to hack your database, they would not have access to your customer's passwords.
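One way to produce that salted hash is the framework's Rfc2898DeriveBytes (PBKDF2) class. The sketch below is only an illustration - the salt size, iteration count and hash length are assumptions you would tune - but the idea is to store the salt next to the derived hash and recompute it on login:
using System;
using System.Security.Cryptography;
public static class PasswordHasher
{
    // Produces a random salt and a PBKDF2-derived hash; persist both with the user record.
    public static void Hash(string password, out byte[] salt, out byte[] hash)
    {
        salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);
        Rfc2898DeriveBytes kdf = new Rfc2898DeriveBytes(password, salt, 1000);
        hash = kdf.GetBytes(32);
    }
    // Recompute the hash with the stored salt and compare it to the stored value.
    public static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        Rfc2898DeriveBytes kdf = new Rfc2898DeriveBytes(password, salt, 1000);
        byte[] actual = kdf.GetBytes(32);
        return Convert.ToBase64String(actual) == Convert.ToBase64String(expectedHash);
    }
}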
A: I would use the guid in my code and, as already mentioned, an email address as username. It is, after all, already unique and memorable for the user. Maybe even ditch the guid (v. debatable).
Someone mentioned using a clustered index on the GUID if this was being used in your code. I would avoid this, especially if INSERTs are high; with random GUIDs, new rows land on arbitrary pages, so the index fragments with every INSERT. Clustered indexes work well on auto-increment IDs though, because new records are appended only.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How to make a button appear as if it is pressed? Using VS2008, C#, .Net 2 and Winforms how can I make a regular Button look "pressed"?
Imagine this button is an on/off switch.
ToolStripButton has the Checked property, but the regular Button does not.
A: One method you can use to obtain this behaviour is to place a "CheckBox" control and change its "Appearance" property from "Normal" to "Button"; this will give you the same functionality that I believe you are looking for.
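In code that might look roughly like the snippet below (the control name, caption and position are placeholders, not anything required by WinForms):
using System.Drawing;
using System.Windows.Forms;
// Inside your Form, e.g. in the constructor after InitializeComponent():
CheckBox toggle = new CheckBox();
toggle.Appearance = Appearance.Button;            // draw it as a button rather than a box
toggle.Text = "Power";
toggle.TextAlign = ContentAlignment.MiddleCenter;
toggle.Location = new Point(10, 10);
Controls.Add(toggle);
// toggle.Checked now tracks the on/off state, and the control stays drawn pressed in while checked.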
A: You could probably also use the ControlPaint class for this.
A: I think you may need a ToggleButton. You can take a look at third party vendors of WinForms components such as Telerik, DevExpress, ComponentFactory, ViBlend which provide such control. They all provide toggle buttons.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Is this really widening vs autoboxing? I saw this in an answer to another question, in reference to shortcomings of the Java spec:
There are more shortcomings and this is a subtle topic. Check this out:
public class methodOverloading{
    public static void hello(Integer x){
        System.out.println("Integer");
    }
    public static void hello(long x){
        System.out.println("long");
    }
    public static void main(String[] args){
        int i = 5;
        hello(i);
    }
}
Here "long" would be printed (haven't checked it myself), because the compiler chooses widening over auto-boxing. Be careful when using auto-boxing or don't use it at all!
Are we sure that this is actually an example of widening instead of autoboxing, or is it something else entirely?
On my initial scanning, I would agree with the statement that the output would be "long" on the basis of i being declared as a primitive and not an object. However, if you changed
hello(long x)
to
hello(Long x)
the output would print "Integer"
What's really going on here? I know nothing about the compilers/bytecode interpreters for java...
A: Yes it is, try it out in a test. You will see "long" printed. It is widening because Java will choose to widen the int into a long before it chooses to autobox it to an Integer, so the hello(long) method is chosen to be called.
Edit: the original post being referenced.
Further Edit: The reason the second option would print Integer is because there is no "widening" into a larger primitive as an option, so it MUST box it up, thus Integer is the only option. Furthermore, java will only autobox to the original type, so it would give a compiler error if you leave the hello(Long) and removed hello(Integer).
A: Another interesting thing with this example is the method overloading. The combination of type widening and method overloading only works because the compiler has to make a decision about which method to choose. Consider the following example:
public static void hello(Collection x){
    System.out.println("Collection");
}
public static void hello(List x){
    System.out.println("List");
}
public static void main(String[] args){
    Collection col = new ArrayList();
    hello(col);
}
It doesn't use the run-time type which is List, it uses the compile-time type which is Collection and thus prints "Collection".
I encourage you to read Effective Java, which opened my eyes to some corner cases of the JLS.
A: In the first case, you have a widening conversion happening. This can be seen when running the "javap" utility program (included with the JDK) on the compiled class:
public static void main(java.lang.String[]);
  Code:
   0:   iconst_5
   1:   istore_1
   2:   iload_1
   3:   i2l
   4:   invokestatic    #6; //Method hello:(J)V
   7:   return
}
Clearly, you see the i2l, which is the mnemonic for the widening int-to-long bytecode instruction. See reference here.
And in the other case, replacing the "long x" with the object "Long x" signature, you'll have this code in the main method:
public static void main(java.lang.String[]);
  Code:
   0:   iconst_5
   1:   istore_1
   2:   iload_1
   3:   invokestatic    #6; //Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
   6:   invokestatic    #7; //Method hello:(Ljava/lang/Integer;)V
   9:   return
}
So you see the compiler has created the instruction Integer.valueOf(int), to box the primitive inside the wrapper.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Wrapping lists into columns I'm using ColdFusion to populate a template that includes HTML unordered lists (<ul>s).
Most of these aren't that long, but a few have ridiculously long lengths and could really stand to be in 2-3 columns.
Is there an HTML, ColdFusion or perhaps JavaScript (I'm accepting jQuery solutions) way to do this easily? It's not worth some over-complicated heavyweight solution to save some scrolling.
A: I've done this with jQuery - it's cross-platform and a minimum of code.
Select the UL, clone it, and insert it after the previous UL. Something like:
$("ul#listname").clone().attr("id","listname2").after()
This will insert a copy of your list after the previous one. If the original list is styled with a float:left, they should appear side by side.
Then you can delete the even items from the left-hand list and the odd items from the right hand list.
$("ul#listname li:even").remove();
$("ul#listname2 li:odd").remove();
Now you have a left to right two column list.
To do more columns you'll want to use .slice(begin,end) and/or the :nth-child selector.
ie, for 21 LIs you could .slice(8,14) to create a new UL inserted after your original UL, then select the original UL and delete the li's selected with ul :gt(8).
Try the Bibeault/Katz book on jQuery; it's a great resource.
A: The following JavaScript code works only in Spidermonkey and Rhino, and it operates on E4X nodes--i.e., this is useful only for server-side JavaScript, but it might give someone a starting point for doing a jQuery version. (It's been very useful to me on the server side, but I haven't needed it on the client badly enough to actually build it.)
function columns(x,num) {
num || (num = 2);
x.normalize();
var cols, i, j, col, used, left, len, islist;
used = left = 0;
cols = <div class={'columns cols'+num}></div>;
if((left = x.length())==1)
left = x.children().length();
else
islist = true;
for(i=0; i<num; i++) {
len = Math.ceil(left/(num-i));
col = islist ? new XMLList
: <{x.name()}></{x.name()}>;
if(!islist && x['@class'].toString())
col['@class'] = x['@class'];
for(j=used; j<len+used; j++)
islist ? (col += x[j].copy())
: (col.appendChild(x.child(j).copy()));
used += len;
left -= len;
cols.appendChild(<div class={'column'+(i==(num-1) ? 'collast' : '')}>{col}</div>);
}
return cols;
}
You call it like columns(listNode,2) for two columns, and it turns:
<ul class="foo">
<li>a</li>
<li>b</li>
<li>c</li>
</ul>
into:
<div class="columns cols2">
<div class="column">
<ul class="foo">
<li>a</li>
<li>b</li>
</ul>
</div>
<div class="column collast">
<ul class="foo">
<li>c</li>
</ul>
</div>
</div>
It's meant to be used with CSS like this:
div.columns {
overflow: hidden;
_zoom: 1;
}
div.columns div.column {
float: left;
}
div.cols2 div.column {
width: 47.2%;
padding: 0 5% 0 0;
}
div.cols3 div.column {
width: 29.8%;
padding: 0 5% 0 0;
}
div.cols4 div.column {
width: 21.1%;
padding: 0 5% 0 0;
}
div.cols5 div.column {
width: 15.9%;
padding: 0 5% 0 0;
}
div.columns div.collast {
padding: 0;
}
A: The thing that most people are forgetting is that when floating <li/> items, all of the items have to be the same height, or the columns start getting out of whack.
Since you're using a server side language, my recommendation would be to use CF to split the list into 3 arrays. Then you can use an outer ul to wrap the 3 inner ul like so:
<cfset thelist = "1,2,3,4,5,6,7,8,9,10,11,12,13">
<cfset container = []>
<cfset container[1] = []>
<cfset container[2] = []>
<cfset container[3] = []>
<cfloop list="#thelist#" index="i">
<cfif i mod 3 eq 0>
<cfset arrayappend(container[3], i)>
<cfelseif i mod 2 eq 0>
<cfset arrayappend(container[2], i)>
<cfelse>
<cfset arrayappend(container[1], i)>
</cfif>
</cfloop>
<style type="text/css">
ul li { float: left; }
ul li ul li { clear: left; }
</style>
<cfoutput>
<ul>
<cfloop from="1" to="3" index="a">
<li>
<ul>
<cfloop array="#container[a]#" index="i">
<li>#i#</li>
</cfloop>
</ul>
</li>
</cfloop>
</ul>
</cfoutput>
A: Here is a variation on Thumbkin's example (using Jquery):
var $cat_list = $('ul#catList'); // UL with all list items.
var $cat_flow = $('div#catFlow'); // Target div.
var $cat_list_clone = $cat_list.clone(); // Clone the list.
$('li:odd', $cat_list).remove(); // Remove odd list items.
$('li:even', $cat_list_clone).remove(); // Remove even list items.
$cat_flow.append($cat_list_clone); // Append the duplicate to the target div.
Thanks Thumbkin!
A: Using a modulo operation, you can quickly split your list into multiple lists by inserting a </ul><ul> during your loop.
<cfset numberOfColumns = 3 />
<cfset numberOfEntries = 34 />
<ul style="float:left;">
<cfloop from="1" to="#numberOfEntries#" index="i">
<li>#i#</li>
<cfif NOT i MOD ceiling(numberOfEntries / numberOfColumns)>
</ul>
<ul style="float:left;">
</cfif>
</cfloop>
</ul>
Use ceiling() instead of round() to ensure that you don't have extra values at the end of the list and that the last column is shortest.
A: Flexbox can be used to wrap items in both row and column directions.
The main idea is to set the flex-direction on the container to either row or column.
NB: Nowadays browser support is pretty good.
FIDDLE
(Sample markup taken from this old 'list apart' article)
ol {
display: flex;
flex-flow: column wrap; /* flex-direction: column */
height: 100px; /* need to specify height :-( */
}
ol ~ ol {
flex-flow: row wrap; /* flex-direction: row */
height: auto; /* override the fixed height used for the column direction */
}
li {
width: 150px;
}
a {
display: inline-block;
padding-right: 35px;
}
<p>items in column direction</p>
<ol>
<li><a href="#">Aloe</a>
</li>
<li><a href="#">Bergamot</a>
</li>
<li><a href="#">Calendula</a>
</li>
<li><a href="#">Damiana</a>
</li>
<li><a href="#">Elderflower</a>
</li>
<li><a href="#">Feverfew</a>
</li>
<li><a href="#">Ginger</a>
</li>
<li><a href="#">Hops</a>
</li>
<li><a href="#">Iris</a>
</li>
<li><a href="#">Juniper</a>
</li>
<li><a href="#">Kava kava</a>
</li>
<li><a href="#">Lavender</a>
</li>
<li><a href="#">Marjoram</a>
</li>
<li><a href="#">Nutmeg</a>
</li>
<li><a href="#">Oregano</a>
</li>
<li><a href="#">Pennyroyal</a>
</li>
</ol>
<hr/>
<p>items in row direction</p>
<ol>
<li><a href="#">Aloe</a>
</li>
<li><a href="#">Bergamot</a>
</li>
<li><a href="#">Calendula</a>
</li>
<li><a href="#">Damiana</a>
</li>
<li><a href="#">Elderflower</a>
</li>
<li><a href="#">Feverfew</a>
</li>
<li><a href="#">Ginger</a>
</li>
<li><a href="#">Hops</a>
</li>
<li><a href="#">Iris</a>
</li>
<li><a href="#">Juniper</a>
</li>
<li><a href="#">Kava kava</a>
</li>
<li><a href="#">Lavender</a>
</li>
<li><a href="#">Marjoram</a>
</li>
<li><a href="#">Nutmeg</a>
</li>
<li><a href="#">Oregano</a>
</li>
<li><a href="#">Pennyroyal</a>
</li>
</ol>
A: So I dug up this article from A List Apart CSS Swag: Multi-Column Lists. I ended up using the first solution, it's not the best but the others require either using complex HTML that can't be generated dynamically, or creating a lot of custom classes, which could be done but would require loads of in-line styling and possibly a huge page.
Other solutions are still welcome though.
A: To output the list into multiple grouped tag you can loop in this fashion.
<cfset list="1,2,3,4,5,6,7,8,9,10,11,12,13,14">
<cfset numberOfColumns = "3">
<cfoutput>
<cfloop from="1" to="#numberOfColumns#" index="col">
<ul>
<cfloop from="#col#" to="#listLen(list)#" index="i" step="#numberOfColumns#">
<li>#listGetAt(list,i)#</li>
</cfloop>
</ul>
</cfloop>
</cfoutput>
A: Here is another solution that allows for columned lists in the following style:
1. 4. 7. 10.
2. 5. 8. 11.
3. 6. 9. 12.
(but it's pure javascript, and requires jQuery, with no fallback)
The following contains a some code that modifies the Array prototype to give a new function called 'chunk' that breaks any given Array into chunks of a given size. Next is a function called 'buildColumns' that takes a UL selector string and a number used to designate how many rows your columns may contain. (Here is a working JSFiddle)
$(document).ready(function(){
Array.prototype.chunk = function(chunk_size){
var array = this,
new_array = [],
chunk_size = chunk_size,
i,
length;
for(i = 0, length = array.length; i < length; i += chunk_size){
new_array.push(array.slice(i, i + chunk_size));
}
return new_array;
}
function buildColumns(list, row_limit) {
var list_items = $(list).find('li').map(function(){return this;}).get(),
row_limit = row_limit,
columnized_list_items = list_items.chunk(row_limit);
$(columnized_list_items).each(function(i){
if (i != 0){
var item_width = $(this).outerWidth(),
item_height = $(this).outerHeight(),
top_margin = -((item_height * row_limit) + (parseInt($(this).css('margin-top')) * row_limit)),
left_margin = (item_width * i) + (parseInt($(this).css('margin-left')) * (i + 1));
$(this[0]).css('margin-top', top_margin);
$(this).css('margin-left', left_margin);
}
});
}
buildColumns('ul#some_list', 5);
});
A: Since I had the same problem and couldn't find anything "clean", I thought I'd post my solution. In this example I use a reversed while loop so I can use splice instead of slice. The advantage now is splice() only needs an index and a range where slice() needs an index and the total. The latter tends to become difficult while looping.
Disadvantage is I need to reverse the stack while appending.
Example:
cols = 4;
liCount = 35
for loop with slice = [0, 9]; [9, 18]; [18, 27]; [27, 35]
reversed while with splice = [27, 8]; [18, 9]; [9, 9]; [0, 9]
Code:
// @param (list): a jquery ul object
// @param (cols): amount of requested columns
function multiColumn (list, cols) {
var children = list.children(),
target = list.parent(),
liCount = children.length,
newUl = $("<ul />").addClass(list.prop("class")),
newItems,
avg = Math.floor(liCount / cols),
rest = liCount % cols,
take,
stack = [];
while (cols--) {
take = rest > cols ? (avg + 1) : avg;
liCount -= take;
newItems = children.splice(liCount, take);
stack.push(newUl.clone().append(newItems));
}
target.append(stack.reverse());
list.remove();
}
A: You can try this to convert in cols.
CSS:
ul.col {
width:50%;
float:left;
}
div.clr {
clear:both;
}
Html Part :
<ul class="col">
<li>Number 1</li>
<li>Number 2</li>
<li>Number 19</li>
<li>Number 20</li>
</ul>
<ul class="col">
<li>Number 21</li>
<li>Number 22</li>
<li>Number 39</li>
<li>Number 40</li>
</ul>
A: If Safari and Firefox support is good enough for you, there is a CSS solution:
ul {
-webkit-column-count: 3;
-moz-column-count: 3;
column-count: 3;
-webkit-column-gap: 2em;
-moz-column-gap: 2em;
column-gap: 2em;
}
I'm not sure about Opera.
A: There is no pure CSS/HTML way to achieve this, as far as I know. Your best bet would be to do it in pre-processing (if list length > 150, split into 3 columns, else if > 70, split into 2 columns, else 1).
The other option, using JavaScript (I'm not familiar with the jQuery library specifically) would be to iterate through lists, probably based on them being a certain class, count the number of children, and if it is a high enough number, dynamically create a new list after the first, transferring some number of list items to the new list. As far as implementing the columns, you could probably float them left, followed by an element that had the style clear: left or clear: both.
.column {
float: left;
width: 50%;
}
.clear {
clear: both;
}
<ul class="column">
<li>Item 1</li>
<li>Item 2</li>
<!-- ... -->
<li>Item 49</li>
<li>Item 50</li>
</ul>
<ul class="column">
<li>Item 51</li>
<li>Item 52</li>
<!-- ... -->
<li>Item 99</li>
<li>Item 100</li>
</ul>
<div class="clear">
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
} |
Q: How to reference multiple versions of an assembly I'm developing a Sharepoint application and use the .NET AjaxControlToolkit library; we are adding a custom aspx page to Sharepoint. Sharepoint 2007 runs in quirks mode, so I've made some modifications to the AJAX library to make it behave like it normally should. The problem is, the other team already uses the AJAX library and it is a different version from mine. This caused a conflict because there can be only one dll in the bin folder with the same name.
From what I know, .NET should be able to handle this situation easily. I've tried using a strong name and GAC to solve it, but it still refers to the dll in the bin folder. If there is no AjaxControlToolkit.dll in the bin folder, the application will simply fail to load the assembly.
If I use complete assembly information in my Register directive like this
<%@
Register
tagprefix="AjaxControlToolkit"
namespace="AjaxControlToolkit"
assembly="AjaxControlToolkit, Version=1.0.299.18064,
PublicKeyToken=12345678abcdefgh,
Culture=neutral"
%>
It gives me Compiler Error CS0433
Can someone help me on how to use multiple version of assembly in an application?
A: Well the link for Compiler Error CS0433 makes it pretty clear that the core issue is not with multiple versions of the assembly being referenced - but with namespace + typename conflicts.
When you load up / reference a type, the compiler can't resolve which DLL to load that type from. If Sharepoint is going to load both versions of your DLL (as you say it needs to), this error will always come up.
Simplest fix would be to change the namespaces in the new DLL, since it does have your custom tweaks, and you control the code - mark it clearly as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: LINQ to SQL strings to enums LINQ to SQL allows table mappings to automatically convert back and forth to Enums by specifying the type for the column - this works for strings or integers.
Is there a way to make the conversion case insensitive or add a custom mapping class or extenstion method into the mix so that I can specify what the string should look like in more detail.
Reasons for doing so might be in order to supply a nicer naming convention inside some new funky C# code in a system where the data schema is already set (and is being relied upon by some legacy apps) so the actual text in the database can't be changed.
A: You can always add a partial class with the same name as your LinqToSql class, and then define your own parameters and functions. These will then be accessible as object parameters and methods for this object, the same way as the auto-generated LinqToSql methods are accessible.
Example: You have a LinqToSql class named Car which maps to the Car table in the DB. You can then add a file to App_Code with the following code in it:
public partial class Car {
// Add properties and methods to extend the functionality of Car
}
I am not sure if this totally meets your requirement of changing the way that Enums are mapped into a column. However, you could add a parameter where the get/set properties will work to map the enums that you need while keeping things case-insensitive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to sell Python to a client/boss/person When asked to create system XYZ and you ask to do it in Python over PHP or Ruby, what are the main features you can mention when they require you to explain it?
A: The best sell of Python I've ever seen was by a manager in our group who had a young daughter. He used a quote attributed to Einstein:
If you can't explain something to a six-year-old, you really don't understand it yourself.
The next few slides of his presentation demonstrated how he was able to teach his young daughter some basic Python in less than 30 minutes, with examples of the code she wrote and an explanation of what it did.
He ended the presentation with a picture of his daughter and her quote "Programming is fun!"
I would focus on Python's user friendliness and wealth of libraries and frameworks. There are also a lot of little libraries that you might not get in other languages, and would have to write yourself (i.e. How a C++ developer writes Python).
Good luck!
A: It's one of the preferred languages over at Google - it's several years ahead of Ruby in terms of "maturity" (whatever that really means - but managers like that). Since it's preferred by Google you can also run it on the Google App Engine.
Microsoft is also embracing Python, and will have a v2.0 of IronPython coming out shortly. They are working on a Ruby implementation as well, but the Python version is way ahead and is actually "ready for primetime". That gives you the possibility of easy integration with .NET code, as well as being able to write client-side RIAs in Python when Silverlight 2 ships.
A: Focus on the shorter time needed for development/prototype and possibly easier maintenance (none of this may apply against Ruby).
A: I would consider that using python on a new project is completely dependent on what problem you are trying to solve with python. If you want someone to agree with you that you should use python, then show them how python's features apply specifically to that problem.
In the case of web development with python, talk about WSGI and other web libraries and frameworks you could use that would make your life easier. One note for python is that most of the frameworks for python web development can be plugged right into any current project. With ruby on rails, you're practically working in a DSL that anyone who uses your project will have to learn. If they know python, then they can figure out what you are doing with django, etc in a day.
I'm only talking about web development because it appears that's what you are going to be working on seeing ruby, python and PHP in the same list. The real message that's important is applying to whatever it is you like about python directly to some problem you are trying to solve.
A: This is one of those cases that really boil down to personal preference or situational details. If you're more comfortable and experienced with Python, then say so. Are they asking you to justify it because they're more comfortable with one of the other environments? After you're done, will the system be passed off to someone else for long-term maintenance?
If they ask you to use a technology or language that you're not as familiar with, then make sure they know up-front that it's going to take you longer.
A: Give them a snippet of code in each (no more than a page) that performs some cool function that they will like. (e.g show outliers in a data set).
Show them each page. One in PHP, Ruby and Python.
Ask them which they find easiest to understand/read.
Tell them that's why you want to use Python. It's easier to read if you've not written it, more manageable, less buggy and quicker to build features with, because it is the most elegant (pythonic).
A: I agree with mreggen. Tell them by working in Python you can get things done faster. Getting things done faster possibly means money saved by the client. In the least it means that you are working with a language you a more comfortable in, meaning faster development, debugging, and refactoring time. There will be less time spent looking up documentation on what function to use to find the length of a string, etc.
A: Though All 3 languages are versatile and used worldwide by programmers, Python still have some advantages over the other two. Like From my personal experience :-
*
*Non-programmers love it (most of them choose Python as their first computer language; check this infographic php vs python vs ruby here)
*Multiple frameworks (You can automate your system tasks, can develop apps for web and windows/mac/android OSes)
*Making OpenCV apps easily than MATLAB
*Testing done easy (you can work on Selenium for all kind of web testing)
OOP concepts are followed by most languages now, so how could Python stay behind! Inheritance, Abstraction and Encapsulation are followed by Python as well.
Python as of now is popularly divided into two versions that differ not so much in performance as in features. Python 2.x and Python 3.x share mostly the same syntax, except for some statements like:
*
*print "..." in Python2.x and print() in Python3.x
*raw_input() in Python2.x and input() in Python3.x (for getting user input)
In the end, the client only cares about money, and Python helps you save a lot as compared to PHP and Ruby, because instead of hiring experienced programmers, you can have a newbie learn and use Python expertly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Version control PHP Web Project We have a PHP project that we would like to version control. Right now there are three of us working on a development version of the project which resides in an external folder to which all of our Eclipse IDEs are linked, and thus no version control.
What is the right way and the best way to version control this?
We have an SVN set up, but we just need to find a good way to check in and out that allows us to test on the development server. Any ideas?
A: Here is what we do:
*
*Each dev has a VM that is configured like our integration server
*The integration server has space for Trunk, each user, and a few slots for branches
*The production server
*Hooks are in Subversion to e-mail when commits are made
At the beginning of a project, the user makes a branch and checks it out on their personal VM as well as grabs a clean copy of the database. They do their work, committing as they go.
Once they have finished everything in their own personal space they log into the integration server and check out their branch, run their tests, etc. When all that passes their branch is merged into Trunk.
Trunk is rebuilt, the full suite of tests are run, and if all is good it gets the big ol' stamp of approval, tagged in SVN, and promoted to Production at the end of the night.
If at any point a commit by someone else is made, we get an e-mail and can merge those changes into our individual branches.
A: Beanstalk has built-in post-commit hooks for deploying to development, staging, and production servers.
A: We were in a similar situation, and here's what we ended up doing:
*
*Set up two branches -- the release and development branch.
*For the development branch, include a post-commit hook that deploys the repository to the dev server, so you can test.
*Once you're ready, you merge your changes into the release branch. I'd also suggest putting in a post-commit hook for deployment there.
You can also set up individual development servers for each of the team members, on their workstations. I find that it speeds things up a bit, although you do have some more setup time.
We had to use a single development server, because we were using a proprietary CMS and ran into licensing issues. So our post-commit hook was a simple FTP bot.
A: One way to use Subversion for PHP development is to set up a repository for one or all three developers, and use this repository more as a syncing tool than true version control.
You could,
*
*Make a repo
*Add your entire PHP document structure of your project
*Checkout a copy of this repo into the correct spot on your dev server
*Use an svn hook, that activates on commit
This hook, will automatically update the contents of the dev sever, whenever anybody on the team checks in any code.
Hook resides in:
svn_dir/repo_name/hooks/post-commit
And could look like:
/usr/bin/svn up /path_to/webroot --username svn_user --password svn_pass
That will update your working copy on the dev server to the latest check in.
A: What about something distributed? You can start for example with Mercurial, try different workflows, and see which one fits you the best.
A: Each of you could run it locally, or on your own dev server (or even the same one with a different port...).
A: One possible way (there are probably better ways):
Each of you should have your own checked out version of the project.
Have a local copy of the server on your computer and test it there throughout the day. Then at the end of each day (or whenever), you merge together whatever you are ready to test, and you check it out onto the dev server and test it.
A: Another tool you can use for the builds is TeamCity which is free for 20 build configurations (enough for most small companies/projects.) This way you can run your tests as well as schedule builds.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: What are good regular expressions? I have worked for 5 years mainly in java desktop applications accessing Oracle databases and I have never used regular expressions. Now I enter Stack Overflow and I see a lot of questions about them; I feel like I missed something.
For what do you use regular expressions?
P.S. sorry for my bad english
A: Regular Expressions (or Regex) are used to pattern match in strings. You can thus pull out all email addresses from a piece of text because it follows a specific pattern.
In some cases regular expressions are enclosed in forward-slashes and after the second slash are placed options such as case-insensitivity. Here's a good one :)
/(bb|[^b]{2})/i
Spoken it can read "2 be or not 2 be".
The first part is the (brackets); they are split by the pipe | character, which equates to an or statement, so (a|b) matches "a" or "b". The first half of the piped area matches "bb". The second half's name I don't know, but it's the square brackets; they match anything that is not "b", that's why there is a roof symbol thingie (technical term) there. The squiggly brackets match a count of the things before them, in this case two characters that are not "b".
After the second / is an "i" which makes it case insensitive. Use of the start and end slashes is environment specific, sometimes you do and sometimes you do not.
Two links that I think you will find handy for this are
*
*regular-expressions.info
*Wikipedia - Regular expression
A: Consider an example in Ruby:
puts "Matched!" unless /\d{3}-\d{4}/.match("555-1234").nil?
puts "Didn't match!" if /\d{3}-\d{4}/.match("Not phone number").nil?
The "/\d{3}-\d{4}/" is the regular expression, and as you can see it is a VERY concise way of finding a match in a string.
Furthermore, using groups you can extract information, as such:
match = /([^@]*)@(.*)/.match("myaddress@domain.com")
name = match[1]
domain = match[2]
Here, the parenthesis in the regular expression mark a capturing group, so you can see exactly WHAT the data is that you matched, so you can do further processing.
This is just the tip of the iceberg... there are many many different things you can do in a regular expression that makes processing text REALLY easy.
A: Coolest regular expression ever:
/^1?$|^(11+?)\1+$/
It tests if a number is prime. And it works!!
N.B.: to make it work, a bit of set-up is needed; the number that we want to test has to be converted into a string of “1”s first, then we can apply the expression to test if the string does not contain a prime number of “1”s:
def is_prime(n)
str = "1" * n
return str !~ /^1?$|^(11+?)\1+$/
end
There’s a detailled and very approachable explanation over at Avinash Meetoo’s blog.
A: If you want to learn about regular expressions, I recommend Mastering Regular Expressions. It goes all the way from the very basic concepts, all the way up to talking about how different engines work underneath. The last 4 chapters also gives a dedicated chapter to each of PHP, .Net, Perl, and Java. I learned a lot from it, and still use it as a reference.
A:
A regular expression (regex or regexp for short) is a special text string for describing a search pattern. You can think of regular expressions as wildcards on steroids. You are probably familiar with wildcard notations such as *.txt to find all text files in a file manager. The regex equivalent is .*\.txt$.
A great resource for regular expressions: http://www.regular-expressions.info
A: If you're just starting out with regular expressions, I heartily recommend a tool like The Regex Coach:
http://www.weitz.de/regex-coach/
also heard good things about RegexBuddy:
http://www.regexbuddy.com/
A: As you may know, Oracle now has regular expressions: http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/rischert_regexp_pt1.html. I have used the new functionality in a few queries, but it hasn't been as useful as in other contexts. The reason, I believe, is that regular expressions are best suited for finding structured data buried within unstructured data.
For instance, I might use a regex to find Oracle messages that are stuffed in log file. It isn't possible to know where the messages are--only what they look like. So a regex is the best solution to that problem. When you work with a relational database, the data is usually pre-structured, so a regex doesn't shine in that context.
A: These RE's are specific to Visual Studio and C++ but I've found them helpful at times:
Find all occurrences of "routineName" with non-default params passed:
routineName\(:a+\)
Conversely to find all occurrences of "routineName" with only defaults:
routineName\(\)
To find code enabled (or disabled) in a debug build:
\#if._DEBUG*
Note that this will catch all the variants: ifdef, if defined, ifndef, if !defined
A: Validating strong passwords:
This one will validate a password with a length of 5 to 10 alphanumerical characters, with at least one upper case, one lower case and one digit:
^(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])[a-zA-Z0-9]{5,10}$
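Wired into validation code it might look like this (a C# illustration chosen for the example, since the thread doesn't tie the pattern to a particular language):
using System.Text.RegularExpressions;
public static class PasswordRules
{
    // 5 to 10 alphanumeric characters with at least one upper case letter, one lower case letter and one digit.
    public static bool IsStrong(string candidate)
    {
        return Regex.IsMatch(candidate,
            @"^(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])[a-zA-Z0-9]{5,10}$");
    }
}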
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Setting a div's height in HTML with CSS I am trying to lay out a table-like page with two columns. I want the rightmost column to dock to the right of the page, and this column should have a distinct background color. The content in the right side is almost always going to be smaller than that on the left. I would like the div on the right to always be tall enough to reach the separator for the row below it. How can I make my background color fill that space?
.rightfloat {
color: red;
background-color: #BBBBBB;
float: right;
width: 200px;
}
.left {
font-size: 20pt;
}
.separator {
clear: both;
width: 100%;
border-top: 1px solid black;
}
<div class="separator">
<div class="rightfloat">
Some really short content.
</div>
<div class="left">
Some really really really really really really
really really really really big content
</div>
</div>
<div class="separator">
<div class="rightfloat">
Some more short content.
</div>
<div class="left">
Some really really really really really really
really really really really big content
</div>
</div>
Edit: I agree that this example is very table-like and an actual table would be a fine choice. But my "real" page will eventually be less table-like, and I'd just like to first master this task!
Also, for some reason, when I create/edit my posts in IE7, the code shows up correctly in the preview view, but when I actually post the message, the formatting gets removed. Editing my post in Firefox 2 seems to have worked, FWIW.
Another edit: Yeah, I unaccepted GateKiller's answer. It does indeed work nicely on my simple page, but not in my actual heavier page. I'll investigate some of the links y'all have pointed me to.
A: Give this a try:
html, body,
#left, #right {
height: 100%
}
#left {
float: left;
width: 25%;
}
#right {
width: 75%;
}
<html>
<body>
<div id="left">
Content
</div>
<div id="right">
Content
</div>
</body>
</html>
A: Some browsers support CSS tables, so you could create this kind of layout using the various CSS display: table-* values. There's more information on CSS tables in this article (and the book of the same name) by Rachel Andrew: Everything You Know About CSS is Wrong
If you need a consistent layout in older browsers that don't support CSS tables, you need to do two things:
*
*Make your "table row" element clear its internal floated elements.
The simplest way of doing this is to set overflow: hidden which takes care of most browsers, and zoom: 1 to trigger the hasLayout property in older versions of IE.
There are many other ways of clearing floats; if this approach causes undesirable side effects, you should check the question which method of 'clearfix' is best and the article on having layout for other methods.
*Balance the height of the two "table cell" elements.
There are two ways you could approach this. Either you can create the appearance of equal heights by setting a background image on the "table row" element (the faux columns technique) or you can make the heights of the columns match by giving each a large padding and equally large negative margin.
Faux columns is the simpler approach and works very well when the width of one or both columns is fixed. The other technique copes better with variable width columns (based on percentage or em units) but can cause problems in some browsers if you link directly to content within your columns (e.g. if a column contained <div id="foo"></div> and you linked to #foo)
Here's an example using the padding/margin technique to balance the height of the columns.
html, body {
height: 100%;
}
.row {
zoom: 1; /* Clear internal floats in IE */
overflow: hidden; /* Clear internal floats */
}
.right-column,
.left-column {
padding-bottom: 1000em; /* Balance the heights of the columns */
margin-bottom: -1000em; /* */
}
.right-column {
width: 20%;
float: right;
}
.left-column {
width: 79%;
float: left;
}
<div class="row">
<div class="right-column">Right column content</div>
<div class="left-column">Left column content</div>
</div>
<div class="row">
<div class="right-column">Right column content</div>
<div class="left-column">Left column content</div>
</div>
This Barcamp demo by Natalie Downe may also be useful when figuring out how to add additional columns and nice spacing and padding: Equal Height Columns and other tricks (it's also where I first learnt about the margin/padding trick to balance column heights)
A: I gave up on strictly CSS and used a little jQuery:
$(function () {
    // run once the DOM is ready, then stretch the shorter column to match the taller one
    var leftcol = $("#leftcolumn");
    var rightcol = $("#rightcolumn");
    var leftcol_height = leftcol.height();
    var rightcol_height = rightcol.height();

    if (leftcol_height > rightcol_height)
        rightcol.height(leftcol_height);
    else
        leftcol.height(rightcol_height);
});
A: Here's an example of equal-height columns - Equal Height Columns - revisited
You can also check out the idea of "Faux Columns" as well - Faux Columns
Don't go the table route. If it's not tabular data, don't treat it as such. It's bad for accessibility and flexibility.
A: Ahem...
The short answer to your question is that you must set the height of 100% to the body and html tag, then set the height to 100% on each div element you want to make 100% the height of the page.
Actually, 100% height will not work in most design situations - this may be short but it is not a good answer. Google "any column longest" layouts. The best way is to put the left and right cols inside a wrapper div, float the left and right cols and then float the wrapper - this makes the wrapper stretch to the height of the taller inner column - then set a background image on the outer wrapper. But watch for any horizontal margins on the floated elements in case you hit the IE "double margin float bug".
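As a rough sketch of that wrapper approach (the class names and background image below are invented for illustration; the image would just be a vertical slice of your right-column color):
<style type="text/css">
    .row-wrapper {
        float: left; /* floating the wrapper makes it enclose its floated children */
        width: 100%;
        background: url(right-col.png) repeat-y right top; /* painted the full height of the row */
    }
    .col-left { float: left; width: 75%; }
    .col-right { float: right; width: 25%; }
</style>
<div class="row-wrapper">
    <div class="col-right">Some really short content.</div>
    <div class="col-left">Some really really big content.</div>
</div>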
A: I had the same problem on my site (shameless plug).
I had the nav section "float: right" and the main body of the page has a background image about 250px across, aligned to the right and set to "repeat-y". I then added an element with "clear: both" to it; see the W3Schools reference for the CSS clear property.
I placed the clear at the bottom of the "page" classed div. My page source looks something like this.
body
-> header (big blue banner)
-> headerNav (green bar at the top)
-> breadcrumbs (invisible at the moment)
-> page
-> navigation (floats to the right)
-> content (main content)
-> clear (the quote at the bottom)
-> footerNav (the green bar at the bottom)
-> clear (empty but still does something)
-> footer (blue thing at the bottom)
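In markup, that structure comes out roughly as follows (only the class names from the outline above are real; the background image and widths are guesses):
<style type="text/css">
    .page { background: url(nav-bg.png) repeat-y right top; } /* image about 250px across */
    .navigation { float: right; width: 250px; }
    .clear { clear: both; } /* forces .page to extend past the floated nav */
</style>
<div class="page">
    <div class="navigation">Navigation (floats to the right)</div>
    <div class="content">Main content</div>
    <div class="clear"></div>
</div>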
I hope that helps :)
A: There is no need to write your own CSS for this; a library called Bootstrap can be pulled in from your HTML head section and gives you many layout styles out of the box. Here is an example:
If you want two columns in a row, you can simply do the following:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<div class="row">
<div class="col-md-6">Content</div>
<div class="col-md-6">Content</div>
</div>
Here md stands for medium devices; you can use col-sm-6 for smaller devices and col-xs-6 for extra small devices.
A: Just trying to help out here so the code is more readable.
Remember that you can insert code snippets by clicking on the button at the top with "101010". Just enter your code then highlight it and click the button.
Here is an example:
<html>
<body>
<style type="text/css">
.rightfloat {
color: red;
background-color: #BBBBBB;
float: right;
width: 200px;
}
.left {
font-size: 20pt;
}
.separator {
clear: both;
width: 100%;
border-top: 1px solid black;
}
</style>
A: The short answer to your question is that you must set the height of 100% to the body and html tag, then set the height to 100% on each div element you want to make 100% the height of the page.
A: A 2 column layout is a little bit tough to get working in CSS (at least until CSS3 is practical.)
Floating left and right will work to a point, but it won't allow you to extend the background. To make backgrounds stay solid, you'll have to implement a technique known as "faux columns," which basically means your columns themselves won't have a background image. Your 2 columns will be contained inside of a parent tag. This parent tag is given a background image that contains the 2 column colors you want. Make this background only as big as you need it to be (if it is a solid color, only make it 1 pixel high) and have it repeat-y. AListApart has a great walkthrough on what is needed to make it work.
http://www.alistapart.com/articles/fauxcolumns/
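A minimal faux-columns sketch, assuming a fixed 200px right column; faux-columns.png is a hypothetical 1px-high image whose rightmost 200px are the sidebar color:
<style type="text/css">
    .faux-row {
        background: url(faux-columns.png) repeat-y right top; /* draws the sidebar color full height */
        overflow: hidden; /* makes the row contain its floats */
    }
    .faux-row .sidebar { float: right; width: 200px; }
    .faux-row .main { margin-right: 210px; }
</style>
<div class="faux-row">
    <div class="sidebar">Short sidebar content</div>
    <div class="main">Long main content</div>
</div>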
A: I can think of 2 options
*
*Use javascript to resize the smaller column on page load.
*Fake the equal heights by setting the column's background on the container <div/> instead (<div class="separator"/>), e.g. a background image repeated with repeat-y
A: It's enough to just use the CSS width property to do so.
Here is an example:
<style type="text/css">
td {
width:25%;
height:100%;
float:left;
}
</style>
A: This should work for you: Set the height to 100% in your css for the html and body elements. You can then adjust the height to your needs in the div.
html {
height: 100%;
}
body {
height: 100%;
}
div {
height: 100%; /* Set Div Height */
}
A:
.rightfloat {
color: red;
background-color: #BBBBBB;
float: right;
width: 200px;
}
.left {
font-size: 20pt;
}
.separator {
clear: both;
width: 100%;
border-top: 1px solid black;
}
<div class="separator">
<div class="rightfloat">
Some really short content.
</div>
<div class="left">
Some really really really really really really
really really really really big content
</div>
</div>
<div class="separator">
<div class="rightfloat">
Some more short content.
</div>
<div class="left">
Some really really really really really really
really really really really big content
</div>
</div>
Q: How should I organize my master ddl script I am currently creating a master ddl for our database. Historically we have used backup/restore to version our database, and not maintained any ddl scripts. The schema is quite large.
My current thinking:
*
*Break script into parts (possibly in separate scripts):
*
*table creation
*add indexes
*add triggers
*add constraints
*Each script would get called by the master script.
*I might need a script to drop constraints temporarily for testing
*There may be orphaned tables in the schema; I plan to identify suspect tables.
Any other advice?
Edit: Also if anyone knows good tools to automate part of the process, we're using MS SQL 2000 (old, I know).
A: I think the basic idea is good.
The nice thing about building all the tables first and then building all the constraints is that the tables can be created in any order. When I've done this I had one file per table, which I put in a directory called "Tables", and then a script which executed all the files in that directory. Likewise I had a folder for constraint scripts (which did foreign keys and indexes too), which were executed after the tables were built.
I would separate the build of the triggers and stored procedures, and run these last. The point about these is they can be run and re-run on the database without affecting the data. This means you can treat them just like ordinary code. You should include "if exists...drop" statements at the beginning of each trigger and procedure script, to make them re-runnable.
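For example, a procedure script might start like this (a sketch only - the procedure and table names are made up, and the syntax is the SQL Server 2000 style mentioned in the question):
IF EXISTS (SELECT * FROM sysobjects WHERE id = OBJECT_ID(N'dbo.usp_GetOrders') AND type = 'P')
    DROP PROCEDURE dbo.usp_GetOrders
GO
CREATE PROCEDURE dbo.usp_GetOrders
AS
    -- hypothetical body; the point is the script can be re-run safely
    SELECT OrderId FROM dbo.Orders
GO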
So the order would be
*
*table creation
*add indexes
*add constraints
Then
*add triggers
*add stored procedures
On my current project we are using MSBuild to run the scripts. There are some extension targets that you can get for it which allow you to call SQL scripts. In the past I have used Perl which was fine too (and batch files...which I would not recommend - they're too limited).
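As a rough illustration (server, database and folder names here are invented), a master batch file for SQL Server 2000 could simply shell out to osql and run each folder in dependency order:
REM master.bat - sketch only; run from the folder containing the script directories
set RUNSQL=osql -S MYSERVER -d MyDatabase -E -n -i
for %%f in (Tables\*.sql) do %RUNSQL% "%%f"
for %%f in (Indexes\*.sql) do %RUNSQL% "%%f"
for %%f in (Constraints\*.sql) do %RUNSQL% "%%f"
for %%f in (Triggers\*.sql) do %RUNSQL% "%%f"
for %%f in (StoredProcedures\*.sql) do %RUNSQL% "%%f"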
A: What you have there seems to be pretty good. My company has on occasion, for large enough databases, broken it down even further, perhaps to the individual object level. In this way each table/index/... has its own file. Can be useful, can be overkill. Really depends on how you are using it.
@Justin
By domain is almost always sufficient. I agree that there are some complexities to deal with when doing it this way, but that should be easy enough to handle.
I think this method provides a little more separation (which in a large database you will come to appreciate) while still staying pretty manageable. We also write Perl scripts that do a lot of the processing of these DDL files, so that might be a good way to handle it.
A: Invest the time to write a generic "drop all constraints" script, so you don't have to maintain it.
A cursor over the following statements does the trick.
Select * From Information_Schema.Table_Constraints
Select * From Information_Schema.Referential_Constraints
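For example, something along these lines (an untested sketch that assumes dbo-owned tables) drops every foreign key without the script ever needing to be maintained:
DECLARE @TableName sysname, @ConstraintName sysname, @Sql nvarchar(4000)

DECLARE fk_cursor CURSOR FOR
    SELECT TABLE_NAME, CONSTRAINT_NAME
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'

OPEN fk_cursor
FETCH NEXT FROM fk_cursor INTO @TableName, @ConstraintName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- build and run the DROP for each constraint found
    SET @Sql = N'ALTER TABLE [dbo].[' + @TableName + N'] DROP CONSTRAINT [' + @ConstraintName + N']'
    EXEC sp_executesql @Sql
    FETCH NEXT FROM fk_cursor INTO @TableName, @ConstraintName
END
CLOSE fk_cursor
DEALLOCATE fk_cursor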
A: @Adam
Or how about just by domain -- a useful grouping of related tables in the same file, but separate from the rest?
Only problem is if some domains (in this somewhat legacy system) are tightly coupled. Plus you have to maintain the dependencies between your different sub-scripts.
A: If you are looking for an automation tool, I have often worked with EMS SQLManager, which allows you to automatically generate a DDL script from a database.
Data inserts in reference tables might be mandatory before putting your database online. This can even be considered part of the DDL script. EMS can also generate scripts for data inserts from existing databases.
The need for indexes might not be properly estimated at the DDL stage. You will just need to declare them for primary/foreign keys. Other indexes should be created later, once views and queries have been defined.
A: There is a neat tool that will iterate through the entire SQL Server instance and extract all the table, view, stored procedure and UDF definitions to the local file system as SQL scripts (text files). I have used this with 2005 and 2008; not sure how it will work with 2000 though. Check out http://www.antipodeansoftware.com/Home/Products
A: I previously organised my DDL code as one file per entity and made a tool that combined this into a single DDL script.
My former employer used a scheme where all table DDL was in one file (stored in Oracle syntax), indices in another, constraints in a third and static data in a fourth. A change script was kept in parallel with this (again in Oracle). The conversion to SQL was manual. It was a mess. I actually wrote a handy tool that will convert Oracle DDL to SQL Server (it worked 99.9% of the time).
I have recently switched to using Visual Studio Team System for Database professionals. So far it works fine, but there are some glitches if you use CLR functions within the database.
Q: What does this error mean SECJ0222E in WebSphere Application Server 5.1 I found this on the IBM support site:
Problem: A JAAS LoginContext could not be created due to the unexpected exception.
User response: The problem could be due to a configuration error.
but I have no other indication and can't determine the final reason for this error.
Any suggestions?
A: Have you obtained the fix from
http://www-1.ibm.com/support/docview.wss?rs=404&uid=swg1PK17150?
Q: .NET Remoting Speed and VPNs I'm working on a project which uses .NET Remoting for communication between the client application and an object server. For development, the client, server, and MSSQL database are all running on my local development machine.
When I'm working at the office, the responsiveness is just fine.
However, when I work from home the speed is significantly slower. If I disconnect from the VPN, it speeds up (I believe, but maybe that's just wishful thinking). If I turn off my wireless connection completely it immediately speeds up to full throttle.
My assumption is that the remoting traffic is being routed through some point that is slowing everything down, be it my home router and/or the VPN.
Does anyone have any ideas of how to force the remoting traffic to remain completely localized?
A: Perhaps during development you could use an IPC remoting channel which uses named pipes instead of TCP. If your remoting channels are set up via a config file then you won't even have to recompile.
I found the link below was useful when setting up an IPC channel.
http://www.danielmoth.com/Blog/2004/09/ipc-with-remoting-in-net-20.html
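For illustration only (the type names and pipe name are placeholders, not from the question), the server-side config change amounts to swapping the tcp channel for the ipc one:
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <!-- was: <channel ref="tcp" port="8989" /> -->
        <channel ref="ipc" portName="MyObjectServerPipe" />
      </channels>
      <service>
        <wellknown mode="Singleton"
                   type="MyCompany.Services.MyService, MyCompany.Server"
                   objectUri="MyService.rem" />
      </service>
    </application>
  </system.runtime.remoting>
</configuration>
The client's wellknown url then becomes ipc://MyObjectServerPipe/MyService.rem instead of the tcp:// address, so the traffic never leaves the machine.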
A: I had this same problem, where things got tremendously better the moment you disconnected.
If you are using the Windows VPN client, you have to change a default setting that forces the connection to use the remote router as your gateway while connected. Go to the properties for the connection, then to the Networking tab. Select TCP/IPv4 and go to Properties. In this window select Advanced..., and there will be an option to use the default gateway on the remote network; make sure this is NOT checked. This should help immensely.
A: I worked on a project last summer that required some pretty heavy modifications to .NET Remoting. I don't remember all the specifics, but if we had more than one network interface, we couldn't get the out-of-the-box Remoting implementation to reliably detect which one the Remoting traffic came from, which did horrible things to performance. This sounds like a similar, if not the same, issue.
A: I don't have any VPN connections on my current computer but somewhere in the TCP/IP properties for the connection there's a checkbox to indicate that you use the remote host as a gateway or something like that.
This once caused me a lot of issues since all my traffic would go over the VPN and then back again, even when I wanted to do something locally.
Q: Error ADMA5026E for WebSphere Application Server Network Deployment What am I doing wrong that I get the ADMA5026E error when deploying an application with the Network Deployment console?
A: Try
IBM Information Center