I've been working with [providers][1] a fair bit lately and I came across an interesting situation where I wanted to have an abstract class that had an abstract static method. I read a few posts on the topic and it sort of made sense, but I was wondering if someone here might be able to offer a nice clear explanation? [1]: http://msdn.microsoft.com/en-us/library/aa479030.aspx
|c#|.net|
I can think of two options:

1. Use JavaScript to resize the smaller column on page load.
2. Fake the equal heights by setting the column's background color on the container div instead (`<div class="separator"/>`) with `repeat-y`.
I've recently been looking into targeting the .NET Client Profile for a WPF application I am building. However, I was frustrated to notice that the Client Profile is only valid for the following OS configurations:

- Windows XP SP2+
- <strike>Windows Server 2003</strike> **Edit:** [Appears][1] the Client Profile will not install on Windows Server 2003.

In addition, the Client Profile is **not** valid for x64 or ia64 editions, and will also not install if *any previous version of the .NET Framework has been installed*.

I'm wondering whether adding the extra OS configurations to the testing matrix is worth the effort. Are there any metrics available that state the percentage of users that could possibly benefit from the Client Profile? I believe that once the .NET Framework has been installed, extra information is passed to a web server as part of a web request signifying that the framework is available. Granted, I would imagine that Windows XP SP2 users without the .NET Framework installed make up a large number of people. It would then be a question of whether my application targeted those individuals specifically.

Has anyone else determined if it is worth the extra effort to target these specific users?

**Edit: It seems that it is possible to get a compiler warning if you use features not included in the Client Profile. As I usually run with warnings as errors, this will hopefully be enough to minimise testing in this configuration.** Of course, this configuration will still need to be tested, but it should be as simple as testing whether the install/initial run works on XP with SP2+.

[1]: http://blogs.windowsclient.net/trickster92/archive/2008/05/21/introducing-the-net-framework-client-profile.aspx
I often find myself defining things that are done repetitively in certain functions. That makes the code much shorter and thus allows a better overview. But as always, try to find a good measure so that you don't create a new language out of it; it might be a little hard to read for whoever does the occasional maintenance later on.
In C#, #define macros like some of Bernard's examples are not allowed. The only common use of #define/#if in C# is for adding optional debug-only code. For example:

    static void Main(string[] args)
    {
    #if DEBUG
        //this only compiles if in DEBUG
        Console.WriteLine("DEBUG");
    #endif
    #if !DEBUG
        //this only compiles if not in DEBUG
        Console.WriteLine("RELEASE");
    #endif
        //This always compiles
        Console.ReadLine();
    }
#define is used to define compile-time symbols that you can use with #if to include or exclude bits of code.

    #define USEFOREACH

    #if USEFOREACH
        foreach(var item in items)
        {
    #else
        for(int i = 0; i < items.Length; ++i)
        {
            var item = items[i];
    #endif
            doSomethingWithItem(item);
        }
I've heard the books currently out there are pretty out of date. The whole ecosystem seems to evolve very fast, with dramatic changes made in every OS release. Andy Matuschak wrote a [tutorial][1] which pulls together some Apple documentation and other tutorials and which should get you started. I think it covers the basics of using the IDE, writing simple apps, and then goes on to more advanced stuff.

[1]: http://andymatuschak.org/articles/2007/09/09/getting-started-with-cocoa-a-friendlier-approach
You should probably just use the standard ASP.Net LinkButton, unless it's really missing something you need.
A basic paper version is the way to go for an initial mock-up. It's been my experience that if you do a "real" mock-up, even if you explain to the customer that it's a non-functional mock-up, they are confused when things don't work. Bottom line: keep it as simple as possible. If it's on paper, there is no way the customer will confuse it with a working product.
Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow.
Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow. Jeff discusses paper prototyping in his Coding Horror article [UI-First Software Development][1] Click the "Watch a video!" link at [twitter.com][2] to see an interesting take on the idea from [Common Craft][3]. [1]: http://www.codinghorror.com/blog/archives/001091.html [2]: http://twitter.com/ [3]: http://www.commoncraft.com/
Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow. A polished prototype or pixel-perfect example puts too much emphasis on the design. With an obvious sketch, you have a better shot of discussing desired functionality and content rather than colors, photos, and other stylistic matters. There will be time for that discussion later in the project. Jeff discusses paper prototyping in his Coding Horror article [UI-First Software Development][1] Click the "Watch a video!" link at [twitter.com][2] to see an interesting take on the idea from [Common Craft][3]. [1]: http://www.codinghorror.com/blog/archives/001091.html [2]: http://twitter.com/ [3]: http://www.commoncraft.com/
Storing file paths in the DB is **definitely** the way to go.
Storing file paths in the DB is **definitely** the way to go. I've heard story after story from customers with terabytes of images for whom it became a nightmare trying to store any significant number of images in a DB; the performance hit alone is too much.
Greg Hurlman wrote: > Why not just show/hide the proper container when a node is selected in the grid? Have the containers all sized appropriately in the same spot, and hide all but the default, which would be preselected in the grid on load. Unfortunately, that's what I'm trying to avoid. I'm looking for an easy way to handle the interface during design time, with minimal reformatting code needed to get it working during run time. I like Duncan's answer because it means the design of each node's interface can be kept *completely* separate. This means I don't get overlap on the snapping guidelines and other design time advantages.
Complete MySQL DB replication? Ideas? What do people do?
|mysql|replication|
Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit upload pipe (main server) and another a couple of miles away on a 3 Mbit upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being that, under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that, when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, then LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other, quite a task nowadays when moving some 0.5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full DB replication). Just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, then the main URL just resolves to the other server. Thx. /mp
There shouldn't be. Virtual Machines emulate the pieces of hardware you can't access directly, but actual code is executed directly on the host CPU 99.9% of the time. Your best bet is trying to identify known VM environments, but that will give you some false positives and many missed detections. Identifying the CPU directly will give you the **real** CPU data, since modern VMs don't emulate that part of a computer anymore.
I can see the main benefits of employing XSLT to transform your data and display it to the user being the following:

* The data is already in an XML format
* The data follows a well-defined schema (this makes using tools like XMLSpy much easier)
* The data needs to be transformed into a number of different output formats, e.g. PDF, WMP and HTML

If this is to be the only output for your data, and it is not in XML format, then XSLT might not be the best solution. Likewise, if user interaction is required (such as editing of the data) then you will end up employing back-end code anyway to handle updates, so it might prove one technology too far...
Make an array of controls:

    TextBox[] textboxes = new TextBox[] { textBox1, textBox2, textBox3 };
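As a sketch of how the array can then be used (assuming a WinForms form whose designer created textBox1 through textBox3), you can loop over it to apply common settings or wire a single shared event handler:

    using System;
    using System.Windows.Forms;

    public partial class MyForm : Form
    {
        public MyForm()
        {
            InitializeComponent();

            TextBox[] textboxes = new TextBox[] { textBox1, textBox2, textBox3 };
            foreach (TextBox box in textboxes)
            {
                box.TextChanged += AnyTextBox_TextChanged; // one handler for all of them
            }
        }

        void AnyTextBox_TextChanged(object sender, EventArgs e)
        {
            // The sender argument tells you which control raised the event.
            TextBox box = (TextBox)sender;
            this.Text = box.Name + " changed";
        }
    }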
Is MySQL replication an option? You could even turn it on and off if you didn't want it constantly replicating. This was a [good article](http://www.howtoforge.com/mysql_database_replication) on replication.
You can create table variables (in memory), and two different types of temp table:

    --visible only to me, in memory (SQL 2000 and above only)
    declare @test table (
        Field1 int,
        Field2 nvarchar(50)
    );

    --visible only to me, stored in tempDB
    create table #test (
        Field1 int,
        Field2 nvarchar(50)
    )

    --visible to everyone, stored in tempDB
    create table ##test (
        Field1 int,
        Field2 nvarchar(50)
    )
You can create table variables (in memory), and two different types of temp table:

    --visible only to me, in memory (SQL 2000 and above only)
    declare @test table (
        Field1 int,
        Field2 nvarchar(50)
    );

    --visible only to me, stored in tempDB
    create table #test (
        Field1 int,
        Field2 nvarchar(50)
    )

    --visible to everyone, stored in tempDB
    create table ##test (
        Field1 int,
        Field2 nvarchar(50)
    )

----------

**Edit:** Following feedback I think this needs a little clarification. `#table` and `##table` will always be in TempDB. `@Table` variables will normally be in memory, but are not guaranteed to be. SQL decides based on the query plan, and uses TempDB if it needs to.
Another nice thing that VB .NET does is having a single event handler that handles multiple controls:

    Private Sub TextBox_TextChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) _
        Handles TextBox1.TextChanged, _
                TextBox2.TextChanged, _
                TextBox3.TextChanged

    End Sub
How sophisticated should my Ajax code be?
Try out [ScintillaNET][1] it's a .NET WinForms wrapper around the excellent [Scintilla][2] control. Scintilla itself is a free source code editor component that is very customisable and has all the features you asked for. See [here][3] for a screenshot. [1]: http://www.codeplex.com/ScintillaNET [2]: http://scintilla.org/ [3]: http://scintilla.sourceforge.net/SciTEImage.html
A little addition: instead of using `switch (input)`, use

    switch (toupper(input)) {
        case 'A':

This will allow the user to enter 'a' or 'A', and saves you having to check for upper and lower case.
This is a purely subjective thing, I'm afraid.

- Maybe it is because of your low-end system configuration.
- Maybe VS is trying to get updates from the net?
- Maybe you are running too many applications in the background.
- Maybe you are trying to open a huge solution.
How does Web Routing Work?
|c#|asp.net-mvc|
I need a good understanding of the inner workings of System.Web.Routing. Usually we define the RoutesTable, but how does it do the routing? The reason I'm asking is that I want to pass the routing on to sub-apps. What I want to see working is a way of passing the current request to MVC apps that work in other AppDomains. Just to make it clear, this is what I'm imagining: I have an MVC app that only has the bare-bones Global.asax, and it loads into other app domains some DLLs that are MVC apps; the communication is done through a transparent proxy created through _appDomain.CreateInstanceAndUnwrap(...). Hope this is clear enough.
I need a good understanding of the inner workings of System.Web.Routing. Usually we define the RoutesTable, but how does it do the routing? The reason I'm asking is that I want to pass the routing on to sub-apps. What I want to see working is a way of passing the current request to MVC apps that work in other AppDomains. Just to make it clear, this is what I'm imagining: I have an MVC app that only has the bare-bones Global.asax, and it loads into other app domains some DLLs that are MVC apps; the communication is done through a transparent proxy created through _appDomain.CreateInstanceAndUnwrap(...). Hope this is clear enough. **Edit:** From what I can tell, the code-behind of Default.aspx is invoked on the first page request, and that starts the MvcHttpHandler that does all the voodoo of displaying the pages we are requesting. So it might just be a matter of passing the HTTP context. If you have any ideas on the matter, please post your thoughts.
I need a good understanding of the inner workings of System.Web.Routing. Usually we define the RoutesTable, but how does it do the routing? The reason I'm asking is that I want to pass the routing on to sub-apps. What I want to see working is a way of passing the current request to MVC apps that work in other AppDomains. Just to make it clear, this is what I'm imagining: I have an MVC app that only has the bare-bones Global.asax, and it loads into other app domains some DLLs that are MVC apps; the communication is done through a transparent proxy created through _appDomain.CreateInstanceAndUnwrap(...). Hope this is clear enough. **Edit:** From what I can tell, the code-behind of Default.aspx is invoked on the first page request, and that starts the MvcHttpHandler that does all the voodoo of displaying the pages we are requesting. So it might just be a matter of passing the HTTP context. If you have any ideas on the matter, please post your thoughts.
One other suggestion: If you have feature or enhancement requests, or bugs that you've run into, be sure to file them at Apple's [Bug Reporter][1]. It's the best way for developers to communicate their needs to Apple, because every issue is tracked through the system. [1]: http://bugreport.apple.com/
You may consider using a command like Linux's [dos2unix][1] for the conversion. Being a Linux command, it is easy to use it in batch mode with scripts etc. I do not know whether there is an equivalent for other operating systems. [1]: http://www.linuxcommand.org/man_pages/dos2unix1.html
SQL injection attacks are on the upswing. It's very easy for someone to find this code and run injection attacks on your website. You must always _**always**_ parameterize your queries. It's best to never run `exec(@x)` on a dynamic SQL query. It's just not a great idea to use inline SQL ever, IMO. Stored procedures, as argued by some, are a hassle because they are another set of items to maintain separately from your code. But they are reusable, and if you end up finding a bug in your queries, you can fix them without recompiling.
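As a rough sketch of what a parameterized query looks like in ADO.NET (connectionString and userSuppliedEmail are placeholders, and the table/column names are made up for the example):

    using System;
    using System.Data.SqlClient;

    string sql = "SELECT Id, Name FROM Users WHERE Email = @Email";
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        // The value travels as a typed parameter and is never spliced into the SQL text,
        // so malicious input can't change the shape of the query.
        cmd.Parameters.AddWithValue("@Email", userSuppliedEmail);

        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(reader["Name"]);
            }
        }
    }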
@Ed: When using C++, there is rarely any benefit to using #define over inline functions when creating macros. The idea of "greater speed" is a misconception. With inline functions you get the same speed, but you also get type safety, and no side effects of preprocessor "pasting", due to the fact that parameters are evaluated before the function is called (for an example, try writing the ubiquitous MAX macro and calling it like this: MAX(x++, y)... you'll see what I'm getting at). I have never had to use #define in my C#, and I very rarely use it for anything other than platform and compiler version checking for conditional compilation in C++.
http://en.wikipedia.org/wiki/Boxing_%28Computer_science%29#Boxing

That's the correct link for boxing as related to computer science :D
> Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, the only way to exclude it from compile is to remove this #define statement?

You can [undefine][1] symbols as well:

    #if DEBUG
    #undef DEBUG
    #endif

[1]: http://msdn.microsoft.com/en-us/library/wkxst87d(VS.80).aspx
CruiseControl.NET solves a different problem (continuous integration) ... however, I've had great success with NAnt for specifically what you're asking. There's a learning curve, but once you get proficient you'll wonder how you ever got along w/o it.
The most important feature I can't live w/o is VS.Net 2008. :P
The main thing is being able to abstract the complexity of the system so that you don't get bogged down by it as soon as you start off.

* First read the spec like a story (skimming through it). Don't stop at every requirement to analyze it right there and then. This will allow you to get an overall picture of the system without too many details. At this point you would start identifying the major functional components of the system. Start putting these down (use a mindmap tool if you like).
* Then take each component and start exploding it (and tying each detail with requirements in the spec document). Do this for all components, till you have covered all requirements.
* Now, you should start looking at relationships between the components, and whether there are repetitions of features or functions across the various components (which you can then pull out to create utility components, or such). Around now, you would have a good detailed map of your requirements in your mind.
* NOW, you should think of designing the database, ER diagrams, Class Design, DFDs, deployment, etc.

The problem with doing the last step first is that you can get bogged down in the complexity of your system without really gaining an overall understanding in the first place.
I recently bought a Vista machine for my home and when I use ssh to log in at work Vista pops up a box complaining about a process that died. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it? * * * I'm at work right now, so I can't test any answers or provide the text of the dialog box. I plan to update this question later when I have more details and time. I'm asking the question without details so that I won't forget later. Please remain calm! ;-)
I recently bought a Vista machine for my home and when I use ssh to log in at work Vista pops up a box complaining about a process that died unexpectedly. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it? * * * I'm at work right now, so I can't test any answers or provide the text of the dialog box. I plan to update this question later when I have more details and time. I'm asking the question without details so that I won't forget later. Please remain calm! ;-)
|cygwin|ssh|windows-vista|x11|openssh|
I recently bought a Vista machine for my home and when I use ssh to log in at work, Vista pops up a box complaining about a process that died unexpectedly. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it?

* * *

Specifically, the message reads:

    sh.exe has stopped working

So it's not ssh itself that died, but some sub-process. The problem details textbox reads:

    Problem signature:
    Problem Event Name: APPCRASH
    Application Name: sh.exe
    Application Version: 0.0.0.0
    Application Timestamp: 48a031a1
    Fault Module Name: comctl32.dll_unloaded
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 4549bcb0
    Exception Code: c0000005
    Exception Offset: 73dc5b17
    OS Version: 6.0.6000.2.0.0.768.3
    Locale ID: 1033
    Additional Information 1: fc4d
    Additional Information 2: d203a7335117760e7b4d2cf9dc2925f9
    Additional Information 3: 1bc1
    Additional Information 4: 7bc0b00964c4a1bd48f87b2415df3372

    Read our privacy statement:
    http://go.microsoft.com/fwlink/?linkid=50163&clcid=0x0409

I notice the problem occurs when I use the **-Y** option to enable X11 forwarding in an X terminal under Vista. The dialog box that pops up doesn't automatically gain focus, so pressing Enter serves no purpose. I have to wait for the box to appear, grab it with the mouse, and dismiss it. Even forcing the error to receive focus would be a step in the right direction.
Why does Vista complain about a dead process when I use Cygwin X11 ssh and how do I get it to shut up?
|windows-vista|ssh|cygwin|x11|openssh|
When I log into a remote machine using ssh X11 forwarding, Vista pops up a box complaining about a process that died unexpectedly. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it?

* * *

Specifically, the message reads:

    sh.exe has stopped working

So it's not ssh itself that died, but some sub-process. The problem details textbox reads:

    Problem signature:
    Problem Event Name: APPCRASH
    Application Name: sh.exe
    Application Version: 0.0.0.0
    Application Timestamp: 48a031a1
    Fault Module Name: comctl32.dll_unloaded
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 4549bcb0
    Exception Code: c0000005
    Exception Offset: 73dc5b17
    OS Version: 6.0.6000.2.0.0.768.3
    Locale ID: 1033
    Additional Information 1: fc4d
    Additional Information 2: d203a7335117760e7b4d2cf9dc2925f9
    Additional Information 3: 1bc1
    Additional Information 4: 7bc0b00964c4a1bd48f87b2415df3372

    Read our privacy statement:
    http://go.microsoft.com/fwlink/?linkid=50163&clcid=0x0409

I notice the problem occurs when I use the **-Y** option to enable X11 forwarding in an X terminal under Vista. The dialog box that pops up doesn't automatically gain focus, so pressing Enter serves no purpose. I have to wait for the box to appear, grab it with the mouse, and dismiss it. Even forcing the error to receive focus would be a step in the right direction.

* * *

Per [DrPizza](#35944) I have sent an email to the Cygwin mailing list.
I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again.
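If the input in question were, say, a date, a minimal C# sketch of that "parse, then echo back what you understood" loop could look like this (the prompt text and format hint are of course placeholders):

    using System;
    using System.Globalization;

    Console.Write("When should the reminder fire? ");
    string raw = Console.ReadLine();

    DateTime parsed;
    if (DateTime.TryParse(raw, CultureInfo.CurrentCulture, DateTimeStyles.None, out parsed))
    {
        // Show the user our interpretation so they can confirm or correct it.
        Console.Write("Did you mean {0:dddd, d MMMM yyyy HH:mm}? (y/n) ", parsed);
    }
    else
    {
        // We couldn't parse it: show the format we expect and ask again.
        Console.WriteLine("Sorry, I couldn't read that. Please use a format like 2008-09-01 14:30.");
    }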
GTK implementation of MessageBox
|unix|linux|x11|gtk|sdl|opengl|
I have been trying to implement Win32's MessageBox using GTK. The app uses SDL/OpenGL, so this isn't a GTK app. I handle the initialisation (**gtk_init**) sort of stuff inside the MessageBox function as follows:

    int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type)
    {
        GtkWidget *window = NULL;
        GtkWidget *dialog = NULL;

        gtk_init(&gtkArgc, &gtkArgv);
        window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL);
        g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL);
        // gcallback calls gtk_main_quit()
        gtk_init_add((GtkFunction)gcallback, NULL);

        if (type & MB_YESNO) {
            dialog = gtk_message_dialog_new(GTK_WINDOW(window),
                                            GTK_DIALOG_DESTROY_WITH_PARENT,
                                            GTK_MESSAGE_QUESTION,
                                            GTK_BUTTONS_YES_NO,
                                            text);
        } else {
            dialog = gtk_message_dialog_new(GTK_WINDOW(window),
                                            GTK_DIALOG_DESTROY_WITH_PARENT,
                                            GTK_MESSAGE_INFO,
                                            GTK_BUTTONS_OK,
                                            text);
        }

        gtk_window_set_title(GTK_WINDOW(dialog), caption);
        gint result = gtk_dialog_run(GTK_DIALOG(dialog));

        gtk_main();
        gtk_widget_destroy(dialog);

        if (type & MB_YESNO) {
            switch (result) {
            default:
            case GTK_RESPONSE_DELETE_EVENT:
            case GTK_RESPONSE_NO:
                return IDNO;
            case GTK_RESPONSE_YES:
                return IDYES;
            }
        }

        return IDOK;
    }

Now, I am by no means an experienced GTK programmer, and I realise that I'm probably doing something(s) horribly wrong. However, my problem is that the last dialog popped up with this function stays around until the process exits. Any ideas?
|c++|c|linux|unix|gtk|x11|
Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.Web.dll or any other 3rd party DLL that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies. All other 3rd party assemblies should be referenced through a relative path. My typical structure is:

    - Project
      -- Project.sln
      -- References
        --- StructureMap.dll
        --- NUnit.dll
        --- System.Web.Mvc.dll
      -- Project.Web
        --- Project.Web.Proj
        --- Project.Web.Proj files
      -- Project
        --- Project.Proj
        --- Project.Proj files

Project.Web and Project reference the assemblies in the root/References folder relatively. These .dlls are checked into Subversion. Aside from that, `*/bin`, `*/bin/*` and `obj` should be in your global ignore path. With this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution.
Is this a .Net specific question? Generally the best practice is to not check in anything which is built automatically from files that are already in SCM. All of that is ideally created as part of your automatic build process. If the `bin` directory you're referring to contains third-party binaries, rather than a build of your project, ignore (downvote?) this advice.
If the VM does the job well, it should be invisible to the client that it's being virtualized. However, one can look at other clues. I would imagine that looking for known drivers or software specific to the VM environment would be the best possible way. For example, on a VMware client running Windows, vmxnet.sys would be the network driver, displayed as "VMware accelerated AMD PCNet Adapter".
There are two aspects. .NET readily supports arrays of controls; VB6 just had to use a workaround because otherwise, wiring up events was really hard. In .NET, wiring up events dynamically is easy. However, the .NET form designer does not support control arrays for a simple reason: arrays of controls are created/extended at run time. If you know how many controls you need at compile time (the reasoning goes), then you give them different names and don't put them in an array.

> I know it's not very useful code

That's exactly the point. Why have a feature if it's useless? If needed, you can also access a control by name, resulting in something like this:

    Private Sub Command_Click(sender As Object, e As EventArgs) Handles Command1.Click, Command2.Click …
        Dim name As String = DirectCast(sender, Control).Name
        Dim index As Integer = Integer.Parse(name.Substring("Command".Length))
        Controls(String.Format("Text {0}", index)).Text = Timer.Value.ToString()
    End Sub
I would create a (Ruby) script to do a `SELECT * FROM ...` on all the databases on the server and then do a `DROP DATABASE ...` followed by a series of new `INSERT`s on the local copy. You can do a `SHOW DATABASES` query to list the databases dynamically. Now, this assumes that the table structure doesn't change, but if you want to support table changes also, you could add a `SHOW CREATE TABLE ...` query and a corresponding `CREATE TABLE` statement for each table in each database. To get a list of all the tables in a database you do a `SHOW TABLES` query. Once you have the script you can set it up as a scheduled job to run as often as you need.
I agree with Robocopy as a solution... that's why the utility is called *"Robust File Copy"*.

> I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across. And by default, a million retries. That should be plenty for your intermittent connection.

It also does restartable transfers, and you can even throttle transfers with a gap between packets (the /IPG switch), assuming you don't want to use all the bandwidth because other programs are using the same connection.
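A hypothetical invocation along those lines (the paths are placeholders; tune the retry, wait and inter-packet-gap values to your own link):

    robocopy C:\outgoing \\mirror\incoming /Z /R:1000000 /W:30 /IPG:64

    rem /Z      restartable mode, so an interrupted file resumes instead of starting over
    rem /R:n    number of retries on failed copies
    rem /W:n    seconds to wait between retries
    rem /IPG:n  inter-packet gap in milliseconds, to throttle bandwidth on a shared link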
Martin's answer is correct on Windows Server 2003: you will have to assign the "Lock pages in memory" privilege to any user that runs your application. This includes administrators.

1. Select Control Panel -> Administrative Tools -> Local Security Policy
2. Select Local Policies -> User Rights Assignment
3. Double-click "Lock pages in memory", add users and/or groups
4. Reboot the machine

On Windows Vista you also need to make sure that the application is run as Administrator (by right-clicking on the application or the shell and choosing "Run as administrator"). In addition, it helps to have a freshly booted machine, since the large pages can "run out" due to fragmentation of the heap.
We, too, just put the binaries in source control. We use Git, but it would apply just as well to Subversion. One suggestion I have is to use SVGs where possible, because you can see actual differences. With binaries (most other image formats), the best you can get is a version history.
A lot of the graphics type people will want something more sophisticated than subversion. While it's good for version control, they will want a content management system that allows cross-referencing of assets, tagging, thumbnails and that sort of thing (as well as versioning).
From [Rick Strahl][1]: You can chain the ?? operator so that you can do a bunch of null comparisons.

    string result = value1 ?? value2 ?? value3 ?? String.Empty;

[1]: http://www.west-wind.com/weblog/posts/236298.aspx
From [Rick Strahl][1]: You can chain the ?? operator so that you can do a bunch of null comparisons.

    string result = value1 ?? value2 ?? value3 ?? String.Empty;

[1]: http://www.west-wind.com/weblog/posts/236298.aspx
Yes, having art assets in version control is very useful. You get the ability to track history, roll back changes, and you have a single source to do backups with. Keep in mind that art assets are MUCH larger so your server needs to have lots of disk space & network bandwidth. I've had success with using [perforce][1] on very large projects (+100 GB), however we had to wrap access to the version control server with something a little more artist friendly. I've heard some good things about [Alienbrain][2] as well, it does seem to have a very slick UI. [1]: http://www.perforce.com/ [2]: http://www.softimage.com/products/alienbrain/
    import java.util.*;

    Map map = new HashMap();
Best practices for development environment and API dev?
|api|development-environment|pipeline|
My current employer uses a 3rd party hosted CRM provider and we have a fairly sophisticated integration tier between the two systems. Amongst the capabilities of the CRM provider is for developers to author business logic in a Java-like language and, on events such as the user clicking a button or submitting a new account into the system, have validation and/or business logic fire off. One of the capabilities that we make use of is for that business code running on the hosted provider to invoke web services that we host. The canonical example is a sales rep entering a new sales lead and hitting a button to ping our systems to see if we can identify that new lead based on email address, company/first/last name, etc., and if so, return back an internal GUID that represents that individual. This all works fine for us, but we've run into a wall again and again in trying to set up a sane dev environment to work against.

So while our use case is a bit nuanced, this can generally apply to any development house that builds APIs for 3rd party consumption: **what are some best practices when designing a development pipeline and environment when you're building APIs to be consumed by the outside world?**

At our office, all our devs are behind a firewall, so code in progress can't be hit by the outside world, in our case the CRM provider. We could poke holes in the firewall, but that's less than ideal from a security surface area standpoint, especially if the number of devs who need to be in a DMZ-like area is high. We currently are trying a single dev machine in the DMZ and then remoting into it as needed to do dev work, but that has created a resource scarcity issue if multiple devs need the box, let alone that they may be making potentially conflicting changes (e.g. different branches). We've considered just mocking/faking incoming requests by building fake clients for these services, but that's a pretty major overhead in building out feature sets (though it does by nature reinforce the testability of our APIs). This also doesn't get around the fact that sometimes we really do need to diagnose/debug issues coming from the real client itself, not some faked request payload.

What have others done in these types of scenarios? In this day and age of mashups, there have to be a lot of folks out there with experience developing APIs. What's worked (and what hasn't worked so well) for the folks out there?
|asp.net-mvc|
The [section on "Why am I getting no hits?"][1] in the Lucene FAQ has some suggestions you might find useful. You're using Field.Index.UN_TOKENIZED, so no Analyzer will be used for indexing (I think). If you're using an Analyzer when you're searching then that might be the root of your problem - the indexing and searching Analyzers should be the same to make sure you get the right hits. [1]: http://wiki.apache.org/lucene-java/LuceneFAQ#head-3558e5121806fb4fce80fc022d889484a9248b71
How do I reference a javascript file?
|c#|asp.net|javascript|
I'm working on a C#/ASP.NET project that has all the javascript files in a /Javascript folder. If I refer to the JS file using this syntax: src="/Javascript/jsfile.js" then the file is correctly picked up if the project is deployed to the root of the URL. However, if this "web site" is deployed to a sub-folder of the main url this won't work. So the solution could be to use relative urls - but there's a problem with that as well because the master pages reference many of the javascript files and these master pages can be used by pages in the root and in subfolders many levels deep. Does anybody have any ideas for resolving this?
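One approach that may help (a sketch, assuming WebForms and an app-relative ~/Javascript folder): resolve the application-relative path at runtime instead of hard-coding a leading slash, for example from the master page's code-behind:

    using System;
    using System.Web.UI;

    public partial class Site : MasterPage
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // "~" expands to the application root, whether that is "/" or "/subfolder".
            string url = ResolveUrl("~/Javascript/jsfile.js");

            // Emit the <script> include once per page.
            Page.ClientScript.RegisterClientScriptInclude("jsfile", url);
        }
    }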
|asp.net|javascript|
Try `sealed`:

    class B : A
    {
        protected sealed override void SpecialRender()
        {
            // do stuff
        }
    }

    class C : B
    {
        protected override void SpecialRender()
        {
            // not valid
        }
    }

Of course, I think C can get around it by being `new`.
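For what it's worth, a sketch of that `new` loophole (same hypothetical classes as above, with A assumed to declare SpecialRender as virtual): C can hide the sealed member rather than override it, but callers holding a B reference still get B's sealed version:

    class C : B
    {
        // Not an override: this declares a brand-new method that merely hides B.SpecialRender.
        protected new void SpecialRender()
        {
            // do different stuff
        }
    }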
Suggestions for migrating ASP.net app from 1.1 forward
|c#|asp.net|silverlight|
I am recently in charge of an older app written in C# using ASP.NET 1.1.

1. Are there any resources to guide me in converting the application to a newer version of the .NET Framework? _My main hesitation is that there are tons of customized DataGrids in the app as it is written now, and so much of that code would need to be rewritten to use GridViews_...
2. Is it worth trying to convert the grids in the application to use Silverlight, in an attempt to move this code into the future?
|asp.net|silverlight|
It might be that you have a plugin that is misbehaving. Try the [safemode][1] switch to see if this improves performance.

[1]: http://msdn.microsoft.com/en-us/library/xee0c8y7(VS.80).aspx
This is what I used for Excel 2003:

    Dictionary<string, string> props = new Dictionary<string, string>();
    props["Provider"] = "Microsoft.Jet.OLEDB.4.0";
    props["Data Source"] = repFile;
    props["Extended Properties"] = "Excel 8.0";

    StringBuilder sb = new StringBuilder();
    foreach (KeyValuePair<string, string> prop in props)
    {
        sb.Append(prop.Key);
        sb.Append('=');
        sb.Append(prop.Value);
        sb.Append(';');
    }
    string properties = sb.ToString();

    using (OleDbConnection conn = new OleDbConnection(properties))
    {
        conn.Open();
        DataSet ds = new DataSet();
        string columns = String.Join(",", columnNames.ToArray());
        using (OleDbDataAdapter da = new OleDbDataAdapter(
            "SELECT " + columns + " FROM [" + worksheet + "$]", conn))
        {
            DataTable dt = new DataTable(tableName);
            da.Fill(dt);
            ds.Tables.Add(dt);
        }
    }
Martin's answer is correct on Windows Server 2003:

> You will have to assign the "Lock pages in memory" privilege to any user that runs your application. This includes administrators.
>
> 1. Select Control Panel -> Administrative Tools -> Local Security Policy
> 2. Select Local Policies -> User Rights Assignment
> 3. Double click "Lock pages in memory", add users and/or groups
> 4. Reboot the machine

On Windows Vista you also need to make sure that the application is run as Administrator (by right-clicking on the application or the shell and choosing "Run as administrator"). In addition, it helps to have a freshly booted machine, since the large pages can "run out" due to fragmentation of the heap.
One good example is that apparently doing a WMI query for the motherboard manufacturer, and if it returns "Microsoft", you're in a VM. Though I believe this is only for VMware. There are likely different ways to tell for each VM host software. This article here <http://blogs.technet.com/jhoward/archive/2005/07/26/407958.aspx> has some good suggestions and links to a couple of ways to detect if you are in a VM (VMware and VirtualPC at least).
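A rough C# sketch of that kind of WMI check (requires a reference to System.Management.dll; the manufacturer/model strings matched here are assumptions, so verify them against the VM products you actually care about):

    using System;
    using System.Management;

    static class VmDetection
    {
        public static bool LooksLikeKnownVm()
        {
            ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                "SELECT Manufacturer, Model FROM Win32_ComputerSystem");

            foreach (ManagementObject mo in searcher.Get())
            {
                string maker = Convert.ToString(mo["Manufacturer"]);
                string model = Convert.ToString(mo["Model"]);

                // Hypothetical fingerprints; adjust for the hosts you need to detect.
                if (maker.IndexOf("Microsoft", StringComparison.OrdinalIgnoreCase) >= 0 ||
                    model.IndexOf("VMware", StringComparison.OrdinalIgnoreCase) >= 0 ||
                    model.IndexOf("Virtual", StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    return true;
                }
            }
            return false;
        }
    }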
Visual Studio 2005/2008: How to keep first curly brace on same line?
|visual-studio|
Trying to get my CSS / C# functions to look like this:

    body {
        color:#222;
    }

instead of this:

    body
    {
        color:#222;
    }

when I auto-format the code.
How do you build a multi-language web site?
|database|web-applications|jakarta-ee|multilingual|
A friend of mine is now building a web application with J2EE and Struts, and it's going to be prepared to display pages in several languages. I was told that the best way to support a multi-language site is to use a properties file where you store all the strings of your pages, something like:

    welcome.english = "Welcome!"
    welcome.spanish = "¡Bienvenido!"
    ...

This solution is OK, but what happens if your site displays news or something like that (a blog)? I mean, content that is not static, that is updated often... The people who maintain the site would have to write every new entry in each supported language and store each version of the entry in the database. The application loads only the entries in the user's chosen language. How do you design the database to support this kind of implementation? Thanks.
A more empirical approach is to check for known VM device drivers. You could write WMI queries to locate, say, the VMware display adapter, disk drive, network adapter, etc. This would be suitable if you knew you only had to worry about known VM host types in your environment. Here's [an example of doing this in Perl][1], which could be ported to the language of your choice. [1]: http://talhatariq.wordpress.com/2006/05/14/detecting-virtualization-2/
Login Integration in PHP
|php|authentication|integration|
On my host, I currently have 2 WordPress applications, 1 phpBB forum and 1 MediaWiki installed. Is there a way to merge the logins so that all applications share the same credentials? For instance, I want to register only in my phpBB and then access all the other applications with that username and password. Even if you don't know a unified way, what other login integrations do you know of? Pros and cons of each?
When I log into a remote machine using ssh X11 forwarding, Vista pops up a box complaining about a process that died unexpectedly. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it?

* * *

Specifically, the message reads:

    sh.exe has stopped working

So it's not ssh itself that died, but some sub-process. The problem details textbox reads:

    Problem signature:
    Problem Event Name: APPCRASH
    Application Name: sh.exe
    Application Version: 0.0.0.0
    Application Timestamp: 48a031a1
    Fault Module Name: comctl32.dll_unloaded
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 4549bcb0
    Exception Code: c0000005
    Exception Offset: 73dc5b17
    OS Version: 6.0.6000.2.0.0.768.3
    Locale ID: 1033
    Additional Information 1: fc4d
    Additional Information 2: d203a7335117760e7b4d2cf9dc2925f9
    Additional Information 3: 1bc1
    Additional Information 4: 7bc0b00964c4a1bd48f87b2415df3372

    Read our privacy statement:
    http://go.microsoft.com/fwlink/?linkid=50163&clcid=0x0409

I notice the problem occurs when I use the **-Y** option to enable X11 forwarding in an X terminal under Vista. The dialog box that pops up doesn't automatically gain focus, so pressing Enter serves no purpose. I have to wait for the box to appear, grab it with the mouse, and dismiss it. Even forcing the error to receive focus would be a step in the right direction.

* * *

Per [DrPizza](#35944) I have sent an [email](http://cygwin.com/ml/cygwin/2008-08/msg00880.html) to the Cygwin mailing list. The trimmed down subject line represents my repeated attempts to bypass an over-aggressive spam filter and highlights the need for something like StackOverflow.
@Alexandru & Jeremy: thanks for your help. You both get upvotes.

@Jeremy: Using your method I got the following error:

> sed: -e expression #1, char 8: unterminated `s' command

If you can make it work I'd accept your answer. (Pasting my solution doesn't count.)