|unit-testing|wcf|
We have a whole bunch of DLLs that give us access to our database and other applications and services. We've wrapped these DLLs with a thin WCF service layer which our clients then consume. I'm a little unsure on how to write unit tests that only test the WCF service layer. Should I just write unit tests for the DLLs, and integration tests for the WCF services? I'd appreciate any wisdom... I know that if my unit tests actually go to the database they won't actually be true unit tests. I also understand that I don't really need to test the WCF service host in a unit test. So, I'm confused about exactly what to test and how. Thanks in advance!
The maintainability index is not so much a fixed value you look at; it's more an indication that code is hard to understand, test, and/or debug. I usually try to keep high-level code (basically anything except the real plumbing code) above 80, where 90+ would be good. It adds a competitive element to programming as maintainably as possible for me. The code analysis tool really shines in the area of dependencies and the number of branches within a method, though. More branches mean harder testing, which makes the code more error-prone. Dependencies, same thing. In other people's code I use the maintainability index to spot possibly bad parts in the code, so I know where to review it. Also, methods/classes with a high number of lines are an indication of poor code to me (unless it can't be avoided; again, the plumbing work). In the end, I think it mainly depends on how often your code will change. Code that's expected to change a lot has to score higher in maintainability than your typical 'write once' code.
C# Linq Grouping
|c#|linq|
I'm experimenting with Linq and am having trouble figuring out grouping. I've gone through several tutorials, but for some reason I can't figure this out. As an example, say I have a table (SiteStats) with multiple website IDs that stores a count of how many visitors by type have accessed each site in total and for the past 30 days:

    SiteID    VisitorType    Last30    Total
         1              1        10       100
         1              2        40       140
         2              1        20       180

etc. In SQL, I can easily get the counts for SiteID 1 with the following:

    SELECT SiteID, SUM(Last30) AS Last30Sum
    FROM Sites
    WHERE SiteID = 1
    GROUP BY SiteID

and should get a row like:

    SiteID    Last30Total
         1             50

However, I'm not sure how to get this result using Linq. I've tried:

    var statsRecord = from ss in db.SiteStats
                      where ss.SiteId == siteId
                      group ss by ss.SiteId into ss
                      select ss;

but I'm not able to get back the total with something like `statsRecord.Last30`. Can someone please let me know where I'm going wrong? Any help is appreciated.
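For comparison, a minimal sketch of how the grouped sum could be expressed (assuming the same `db.SiteStats` context and `siteId` variable as above):

    // Group by SiteId and project the aggregate out of the group,
    // rather than returning the group itself.
    var stats = from ss in db.SiteStats
                where ss.SiteId == siteId
                group ss by ss.SiteId into g
                select new { SiteId = g.Key, Last30Sum = g.Sum(x => x.Last30) };

The key difference from the attempt above is that the `select` projects an aggregate (`g.Sum`) out of the group instead of selecting the group directly.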
How do you use XML::Parser with Style => 'Objects'
|xml|perl|
The manual page for XML::Parser::Style::Objects is horrible. A simple hello-world-style program would really be helpful. I really wanted to do something like this (not real code, of course):

    use XML::Parser;

    my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode');
    my $tree = $p->parsefile('foo.xml');
    $tree->doSomething();

    MyNode::doSomething() {
        my $self = shift;
        print "This is a normal node";
        for $kid ($self->Kids) {
            $kid->doSomething();
        }
    }

    MyNode::special::doSomething() {
        my $self = shift;
        print "This is a special node";
    }
Hibernate has a [filter mechanism][1] that may work for you. The filters will rewrite the queries Hibernate generates to include an additional clause to limit the rows returned. I'm not aware of anything in Hibernate to mask/hide columns. Your database may also have support for this functionality. Oracle, for example, has the [Virtual Private Database][2] (VPD), which will rewrite your queries at the database level. This solution has the added benefit that any external program (e.g. reporting tools) that goes against your db will have your security restrictions enforced. VPD also has support to mask restricted columns with NULLs. Unfortunately, the above solutions have not been adequate to support the security requirements for the types of projects I typically work on. There is usually some sort of context that cannot be easily expressed in the above solutions. For example, users can view data that they have created, or that has been marked as public, or that belongs to a project which they manage. We typically create query/finder/DAO objects where we pass in the values required to enforce the security and then create the query accordingly. I hope this helps.

[1]: http://in.relation.to/Bloggers/Hibernate3Filters
[2]: http://www.devshed.com/c/a/Oracle/RowLevel-Security-with-Virtual-Private-Database/
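For illustration, a minimal sketch of how such a filter might be wired up in Hibernate 3 (the entity, filter, and parameter names are invented for the example):

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class FilteredDocumentDao {
        // Assumed mapping (hbm.xml), names illustrative:
        //   <filter-def name="ownerFilter"><filter-param name="userId" type="long"/></filter-def>
        //   <class name="Document" table="documents">
        //       <filter name="ownerFilter" condition="owner_id = :userId"/>
        //   </class>
        public List findVisibleDocuments(SessionFactory sessionFactory, long currentUserId) {
            Session session = sessionFactory.openSession();
            // Once enabled, every query against Document in this session gets the
            // extra restriction appended to the SQL Hibernate generates.
            session.enableFilter("ownerFilter").setParameter("userId", currentUserId);
            return session.createQuery("from Document").list();
        }
    }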
|c#|asp.net|session|hacking|yui|
I'm using the Yahoo Uploader, part of the Yahoo UI Library, on my ASP.Net website to allow users to upload files. For those unfamiliar, the uploader works by using a Flash applet to give me more control over the FileOpen dialog. I can specify a filter for file types, allow multiple files to be selected, etc. It's great, but it has the following documented limitation:

> Because of a known Flash bug, the Uploader running in Firefox in Windows does not send the correct cookies with the upload; instead of sending Firefox cookies, it sends Internet Explorer’s cookies for the respective domain. As a workaround, we suggest either using a cookieless upload method or appending document.cookie to the upload request.

So if a user is using Firefox, I can't rely on cookies to persist their session when they upload a file. I need their session because I need to know who they are! As a workaround, I'm using the Application object thusly:

    Guid UploadID = Guid.NewGuid();
    Application.Add(UploadID.ToString(), User);

So I'm creating a unique ID and using it as a key to store the Page.User object in the Application scope. I include that ID as a variable in the POST when the file is uploaded. Then, in the handler that accepts the file upload, I grab the User object thusly:

    IPrincipal User = (IPrincipal)Application[Request.Form["uploadid"]];

So this actually works, but it has two glaring drawbacks:

* If IIS, the app pool, or even just the application is restarted between the time the user visits the upload page and actually uploads a file, their "uploadid" is deleted from application scope and the upload fails because I can't authenticate them.
* If I ever scale to a web farm (possibly even a web garden) scenario, this will completely break. I might not be worried, except I do plan on scaling this app in the future.

Does anyone have a better way? Is there a way for me to pass the actual ASP.Net session ID in a POST variable, then use that ID at the other end to retrieve the session?

**Edit**: I know I can get the session ID through Session.SessionID, and I know how to use YUI to post it to the next page. What I don't know is how to use that SessionID to grab the session from the state server. Yes, I'm using a state server to store the sessions, so they persist application/IIS restarts, and will work in a web farm scenario.
You can BASE64 encode the content of the image directly into the SRC attribute; however, I believe only Firefox will parse this back into an image. What I typically do is create a very lightweight HttpHandler to serve the images:

    using System;
    using System.Web;

    namespace Example
    {
        public class GetImage : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string id = context.Request.QueryString["id"];
                if (id != null)
                {
                    byte[] blob = GetBlobFromDataBase(id);
                    context.Response.Clear();
                    context.Response.ContentType = "image/pjpeg";
                    context.Response.BinaryWrite(blob);
                    context.Response.End();
                }
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }
    }

You can reference this directly in your img tag:

    <img src="GetImage.ashx?id=111"/>

Or, you could even create a server control that does it for you:

    using System;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    namespace Example.WebControl
    {
        [ToolboxData("<{0}:DatabaseImage runat=server></{0}:DatabaseImage>")]
        public class DatabaseImage : Control
        {
            public int DatabaseId
            {
                get
                {
                    if (ViewState["DatabaseId" + this.ID] == null)
                        return 0;
                    else
                        return (int)ViewState["DatabaseId" + this.ID];
                }
                set { ViewState["DatabaseId" + this.ID] = value; }
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                output.Write("<img src='GetImage.ashx?id=" + this.DatabaseId + "'/>");
                base.RenderContents(output);
            }
        }
    }

This could be used like:

    <cc:DatabaseImage id="db1" DatabaseId="123" runat="server" />

And of course, you could set the DatabaseId in the codebehind as needed.
This is relatively easy:

    git checkout -b fake_master master   # fake_master now points to the same commit as master
    git branch -D master                 # get rid of incorrect master
    git checkout -b master real_master   # master now points to your actual master
    git checkout master                  # optional -- switch on to your master branch
Generally, no. The stream format for Java serialization is defined [in this document][1], but you need access to the original class definitions (and a Java runtime to load them into) to turn the stream data back into something approaching the original objects. For example, classes may define writeObject() and readObject() methods to customise their own serialized form. (**edit:** lubos hasko suggests having a little java program to deserialize the objects in front of Python, but the problem is that for this to work, your "little java program" needs to load the same versions of all the same classes that it might deserialize. Which is tricky if you're receiving log messages from one app, and really tricky if you're multiplexing more than one log stream. Either way, it's not going to be a little program any more. **edit2:** I could be wrong here, I don't know what gets serialized. If it's just log4j classes you should be fine. On the other hand, it's possible to log arbitrary exceptions, and if they get put in the stream as well my point stands.) It would be much easier to customise the log4j network adapter and replace the raw serialization with some more easily-deserialized form (for example you could use XStream to turn the object into an XML representation) [1]: http://java.sun.com/j2se/1.4/pdf/serial-spec.pdf
You'll need a command line tool to zip up the files. I recommend [7-Zip][1] which is free and easy to use. The self-contained command line version (7za.exe) is the most portable choice. Here's a two-line batch file that would zip the log files and delete them afterwards:

    7za.exe a -tzip ex%1-logs.zip %2\ex%1*.log
    del %2\ex%1*.log

The first parameter is the 4 digit year-and-month, and the second parameter is the path to the directory containing your logs. For example: `ziplogs.bat 0808 c:\logs`

It's possible to get more elaborate (i.e. searching the filenames to determine which months to archive). You might want to check out the Windows [FINDSTR][2] command for searching input text with regular expressions.

[1]: http://www.7-zip.org/download.html
[2]: http://technet.microsoft.com/en-us/library/bb490907.aspx
logrotate is probably the best solution. Use the file '/etc/logrotate.conf' to change the settings for all your logs. You can change 'weekly' to 'daily' so the logs are rotated every day. Also, you might want to add 'compress' so the archives are compressed. If you don't care about the old logs, you can set 'rotate 4' to something lower.
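As a sketch, a per-application entry along these lines (the path and counts are just examples) could also go in /etc/logrotate.d/ instead of changing the global defaults:

    /var/log/myapp/*.log {
        daily          # rotate every day instead of weekly
        compress       # gzip the rotated archives
        rotate 4       # keep only the last 4 rotations
        missingok      # don't complain if the log is absent
    }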
If you are a one-man show and strictly a Microsoft shop, then [SourceGear Vault][1] is definitely a prime candidate for switching to. Features:

- Free for a single user, great for you
- It uses SQL Server for its backend, therefore data reliability is huge
- It has atomic check-ins; all files checked in at the same time are arranged in a group and are called a changeset.
- Visual Studio integration.
- Has a tool for importing from SourceSafe, therefore you can keep your history
- The client communicates with the server over HTTP, therefore accessing the source remotely outside the office can be set up very easily and performs well, because only the deltas of the changes being submitted and received are transferred. You can use SSL to secure the connection.

I would definitely consider this as an option.

[1]: http://www.sourcegear.com/vault/index.html
Because Subversion (at least version 1.4 and below) doesn't keep track of what has been merged. For Subversion, merging is basically the same as any commit, while other version control systems like Git remember what has been merged.
For any CSS3-enabled browser you should be able to get away with

    input:checked + label { color: white; }

for styling your labels. Unfortunately, that probably rules out some of the more popular browsers...
According to the link Matt posted, the DefaultValue attribute doesn't set the default value of the property; it just lets the form designer know that the property *has* a default value. If you change a property from the default value, it is shown as bold in the properties window. You can't set a default value using automatic properties - you'll have to do it the old-fashioned way:

    class MyClass
    {
        Color lineColor = SystemColors.InactiveBorder;

        [DefaultValue(typeof(Color), "InactiveBorder")]
        public Color LineColor
        {
            get { return lineColor; }
            set { lineColor = value; }
        }
    }
Use of parameterized SQL has other benefits: it reduces CPU overhead (as well as other resources) in Oracle by reducing the amount of work Oracle requires to parse the statement. If you do not use parameters (we call them bind variables in Oracle), then "select * from foo where bar='cat'" and "select * from foo where bar='dog'" are treated as separate statements, whereas "select * from foo where bar=:b1" is the same statement, meaning things like syntax and the validity of referenced objects do not need to be checked again. There are occasional problems that arise when using bind variables, which usually manifest themselves in not getting the most efficient SQL execution plan, but there are workarounds for this, and these problems really depend on the predicates you are using, indexing, and data skew.
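As an illustration, a minimal ADO.NET-style sketch of the bind-variable version (connection setup is omitted and the table/column names come from the example above):

    using System.Data;

    static IDataReader QueryByBar(IDbConnection conn, string barValue)
    {
        IDbCommand cmd = conn.CreateCommand();
        cmd.CommandText = "select * from foo where bar = :b1"; // one shared statement text

        IDbDataParameter p = cmd.CreateParameter();
        p.ParameterName = ":b1";
        p.Value = barValue; // "cat" or "dog" -- Oracle parses the statement only once either way
        cmd.Parameters.Add(p);

        return cmd.ExecuteReader();
    }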
ASP.NET Merge: Virtual path 'obal.asax' is not allowed
|performance|
Reading [this question][1] I found this as (note the quotation marks) "code" to solve the problem (that's Perl, by the way):

    100,{)..3%!'Fizz'*\5%!'Buzz'*+\or}%n*

Obviously this is an intellectual example without real implications (I hope never to see that in real code in my life) but, when you have to make the choice, when do you sacrifice code readability for performance? Do you apply just common sense? Do you do it only as a last resort? What are your strategies?

Edit: I'm sorry; seeing the answers, I might have expressed the question badly (English is not my native language). I don't mean performance vs. readability only **after** you've written the code; I ask about before you write it as well. Sometimes you can foresee a future performance improvement by choosing a more obscure design, or by adding properties that will make your class harder to follow. You may decide to use multiple threads, or just a single one, because you expect the scalability such threads may give you, even when that will make the code much more difficult to understand.

[1]: http://stackoverflow.com/questions/437/what-is-your-solution-to-the-fizzbuzz-problem
Check out the [Dynamic Linq Library][1] from ScottGu's blog. [1]: http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
It would probably be more work than it's worth, but you can use static C#/VB variables in a SQL CLR UDF, so I think you'd be able to do what you want by simply incrementing this variable every time the UDF is called. The static variable would be lost whenever the appdomain unloaded, of course. So if you need continuity of your ID from one day to the next, you'd need a way, on first access of NextId, to poll all of the tables that use this ID, to find the highest value.
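A sketch of what that could look like (names are invented; note that a writable static field requires the assembly to be deployed with a permission set that allows it, and the counter resets when the AppDomain unloads):

    using System.Data.SqlTypes;
    using System.Threading;
    using Microsoft.SqlServer.Server;

    public class SequenceUdf
    {
        // Lost on AppDomain unload, as noted above.
        private static int counter;

        [SqlFunction]
        public static SqlInt32 NextId()
        {
            // Interlocked keeps concurrent callers from getting duplicate values.
            return new SqlInt32(Interlocked.Increment(ref counter));
        }
    }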
To answer a non-pointed question, my answer would be "probably NUnit just stays out of your face."

**Disclaimer**: I've no actual experience with the MS version of xUnit; however, I hear of problems like 'You need to install the gigantic IDE just to run your tests on a separate machine', which is a complete no-no. Other than that, MS has this way of contorting the right path for a newbie via some kind of IDE bell/whistle that runs counter to the whole idea. Like generating tests from classes, one I remember from a year or so back... that defeats the whole point of *test-driving*: your design is supposed to emerge from tiny steps of RGR: write a test, make it pass, refactor. If you use that tool, it robs you of the entire experience. I'll stop with my sermon... now :)
Check [getrusage][1], I think that should solve your problem. [1]: http://linux.die.net/man/2/getrusage
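A minimal C sketch of what that looks like (this prints the process's own user CPU time so far):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rusage usage;
        if (getrusage(RUSAGE_SELF, &usage) == 0) {
            printf("user CPU time: %ld.%06ld s\n",
                   (long)usage.ru_utime.tv_sec,
                   (long)usage.ru_utime.tv_usec);
        }
        return 0;
    }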
If you are a one-man team, then your only option is really WiiWare. At $2000 for the kit, you picked the right console. That's a tiny fraction of the cost of a 360 or PS3 dev kit. You do have to have your own business. You also have to get your game rated by the [ESRB](http://www.esrb.org/) which will put you back another $2500. Your game also has to be really good. In the end you could spend all the money and time and have Nintendo refuse to publish your game for any reason whatsoever.
Try:

    #include <unistd.h>

    int main() {
        nice(-10);
    }

Note that you must be running as superuser for this to work. (For more info, type 'man 2 nice' at a prompt.)
Try:

    #include <sys/time.h>
    #include <sys/resource.h>

    int main() {
        setpriority(PRIO_PROCESS, 0, -20);
    }

Note that you must be running as superuser for this to work. (For more info, type 'man setpriority' at a prompt.)
Cannot access a disposed object - How to fix?
|.net|winforms|vb.net|
In a VB.NET WinForms project I get an exception "Cannot access a disposed object" when closing a form. It occurs very rarely and I cannot recreate it on demand. The stack trace looks like this:

    Cannot access a disposed object. Object name: 'dbiSchedule'.
    at System.Windows.Forms.Control.CreateHandle()
    at System.Windows.Forms.Control.get_Handle()
    at System.Windows.Forms.Control.PointToScreen(Point p)
    at Dbi.WinControl.Schedule.dbiSchedule.a(Boolean A_0)
    at Dbi.WinControl.Schedule.dbiSchedule.a(Object A_0, EventArgs A_1)
    at System.Windows.Forms.Timer.OnTick(EventArgs e)
    at System.Windows.Forms.Timer.TimerNativeWindow.WndProc(Message& m)
    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

The dbiSchedule is a schedule control from Dbi-tech. There is a timer on the form that updates the schedule on the screen every few minutes.

Any ideas what is causing the exception and how I might go about fixing it? Or even just being able to recreate it on demand?
Start on `master`, create a branch called `in-progress`, then reset `master` to an earlier commit.

    $ git branch in-progress
    $ git reset --hard HEAD^
Kieron's answer is the best one so far. If you have problems because the first program is buffering its output, you can use the unbuffer program:

    unbuffer <command> | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'

It's installed by default on most Linux systems. If you need to build it yourself, it is part of the expect package: <http://expect.nist.gov>
"Silent" Printing in a Web Application
|printing|internet|
I'm working on a web application that needs to print silently, that is, without user involvement. What's the best way to accomplish this? It doesn't look like it can be done strictly with JavaScript, nor with Flash and/or AIR. The closest I've seen involves a Java applet. I can understand why it would be a Bad Idea for just any website to be able to do this. This specific instance is for an internal application, and it's perfectly acceptable if the user needs to add the URL to a trusted-site list, install an add-on, etc.
The best way I have found is to use the VB function `FileSystem.DeleteFile`.

    Microsoft.VisualBasic.FileIO.FileSystem.DeleteFile(file.FullName,
        Microsoft.VisualBasic.FileIO.UIOption.OnlyErrorDialogs,
        Microsoft.VisualBasic.FileIO.RecycleOption.SendToRecycleBin);

It requires adding `Microsoft.VisualBasic` as a reference, but this is part of the .NET framework and so isn't an extra dependency. Alternate solutions require a P/Invoke to [SHFileOperation][1], as well as defining all the various structures/constants. Including `Microsoft.VisualBasic` is much neater by comparison.

[1]: http://msdn.microsoft.com/en-us/library/bb762164.aspx "SHFileOperation"
[annotate][1], available via that link or as `annotate-output` in the Debian `devscripts` package.

    $ echo -e "a\nb\nc" > lines
    $ annotate-output cat lines
    17:00:47 I: Started cat lines
    17:00:47 O: a
    17:00:47 O: b
    17:00:47 O: c
    17:00:47 I: Finished with exitcode 0

[1]: http://jeroen.a-eskwadraat.nl/sw/annotate/annotate
Is the dialog shown using ShowModal or just Show? You should probably set the PopupMode property of your dialog correctly; pmAuto would probably be your best choice. Also see if you need to set the PopupParent property.
|asp.net|msbuild|merge|
I am doing a Web Deployment of my website and I have the merge assemblies property set to true. For some reason I get the following error:

    aspnet_merge : error occurred: An error occurred when merging assemblies: The relative virtual path 'obal.asax' is not allowed here.

It seems to have something to do with the Global.asax, but I'm not really sure why it's getting truncated. My code compiles fine locally; it's only the merge that is messing up. Any ideas?
Yes, it is possible. The [manual](http://research.microsoft.com/fsharp/manual/spec2.aspx#_Toc207785549) has this example:

    > List.map (fun x -> x % 2 = 0) [1 .. 5];;
    val it : bool list = [false; true; false; true; false]
I would use a GridView (or DataGrid, if you are using an older version of ASP.NET).

    <asp:GridView ID="gvBooks" runat="server" AutoGenerateColumns="False">
        <Columns>
            <asp:BoundField HeaderText="Published" DataField="Published" />
            <asp:BoundField HeaderText="Title" DataField="Title" />
            <asp:BoundField HeaderText="Author" DataField="Author" />
            <asp:BoundField HeaderText="Price" DataField="Price" />
        </Columns>
    </asp:GridView>

With some code-behind:

    Private Sub gvBooksRowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBooks.RowDataBound
        Select Case e.Row.RowType
            Case DataControlRowType.DataRow
                ' Your code here
        End Select
    End Sub

You can bind it in a similar way. The RowDataBound event is what you need.
Try the kind of recurrences that can give variously simple or chaotic series based on the part of their phase spaces you explore: the simplest I can think of is the logistic map x(n+1) = r * x(n) * ( 1 - x(n) ). With r approx. 3.57 you get chaotic results that depend on the initial point. If you graph this versus time you can get lots of different series just by manipulating that parameter r. If you were to graph it as x(n+1) v. x(n) without connecting dots, you see a simple parabola take shape over time. This is one of the most basic functions from chaos theory and trying more interesting polynomials, graphing them as x(n+1) v. x(n) and watching a shape form, and then graphing x(n) v. n is a fun and interesting way to create series. Graphing x(n+1) v. x(n) makes it quickly obvious if you're only visiting a small number of points. Deeper recurrences become more interesting as well, and using different values of x(0) to check on sensitivity to initial conditions is also of interest. But for simplicity, control by a single parameter, and ability to find something to read about your recurrence, it'll be hard to beat the logistic map. I recommend: <http://en.wikipedia.org/wiki/Logistic_map>. It has a nice description of what to expect from different values of r.
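A small Python sketch of the iteration described above (the parameter values are the ones mentioned; x0 = 0.4 is an arbitrary starting point):

    # Generate n terms of the logistic map x(n+1) = r * x(n) * (1 - x(n)).
    def logistic_series(r, x0, n):
        xs = [x0]
        for _ in range(n - 1):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    series = logistic_series(3.57, 0.4, 200)  # r ~ 3.57: chaotic regime

    # Plotting series[i+1] against series[i] (dots, not lines) traces out the parabola;
    # plotting series[i] against i gives the chaotic-looking time series.
    pairs = list(zip(series, series[1:]))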
How about `<[^a](.|\n)+?>`
Pass functions in F#
|f#|
Is it possible to pass a reference to a function to another function in F#? Specifically, I'd like to pass lambda functions like

    foo (fun x -> x ** 3)
Check out the [Dynamic Linq Library][1] from ScottGu's blog:

> For example, below is a standard type-safe LINQ to SQL VB query that retrieves data from a Northwind database and displays it in a ASP.NET GridView control:
>
> ![alt text][2]
>
> Using the LINQ DynamicQuery library I could re-write the above query expression instead like so:
>
> ![alt text][3]
>
> Notice how the conditional-where clause and sort-orderby clause now take string expressions instead of code expressions. Because they are late-bound strings I can dynamically construct them. For example: I could provide UI to an end-user business analyst using my application that enables them to construct queries on their own (including arbitrary conditional clauses).

[1]: http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
[2]: http://www.scottgu.com/blogposts/dynquery/step1.png
[3]: http://www.scottgu.com/blogposts/dynquery/step2.png
See the documentation for [XMLReader.setContentHandler()](http://www.saxproject.org/apidoc/org/xml/sax/XMLReader.html#setContentHandler(org.xml.sax.ContentHandler)), it says:

> Applications may register a new or different handler in the middle of a parse, and the SAX parser must begin using the new handler immediately.

Thus, you should be able to create a `SelectorContentHandler` that consumes events until the first `startElement` event, based on that changes the `ContentHandler` on the XML reader, and passes the first start element event on to the new content handler. You just have to pass the `XMLReader` to the `SelectorContentHandler` in the constructor. If you need **all** the events to be passed to the vocabulary-specific content handler, `SelectorContentHandler` has to cache the events and then pass them on, but in most cases this is not needed.

On a side note, I've lately used [XOM](http://xom.nu/) in almost all my projects to handle XML, and thus far performance hasn't been the issue.
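A rough Java sketch of that idea (the class and helper names are my own; the dispatch logic is left open):

    import org.xml.sax.Attributes;
    import org.xml.sax.ContentHandler;
    import org.xml.sax.SAXException;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.DefaultHandler;

    public class SelectorContentHandler extends DefaultHandler {
        private final XMLReader reader;

        public SelectorContentHandler(XMLReader reader) {
            this.reader = reader;
        }

        @Override
        public void startElement(String uri, String localName, String qName,
                                 Attributes atts) throws SAXException {
            // Pick the vocabulary-specific handler based on the root element...
            ContentHandler delegate = chooseHandlerFor(uri, localName);
            // ...switch the reader over to it mid-parse...
            reader.setContentHandler(delegate);
            // ...and replay this first event so the delegate also sees the root element.
            delegate.startElement(uri, localName, qName, atts);
        }

        // Hypothetical dispatch; real logic depends on the vocabularies involved.
        private ContentHandler chooseHandlerFor(String uri, String localName) {
            throw new UnsupportedOperationException("application-specific");
        }
    }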
We are currently in the process of doing this at my day job. We are actually making the switch over in about a month. I am a main part of the migration and a big part of why we are getting off of SourceSafe. To help in the migration, I used the [Visual Studio® Team System 2008 Team Foundation Server and Team Suite VPC Image][1]. It was very useful. Right off the bat, the image contains a full working TFS installation for you to play and demo with. It also includes Hands on Labs, and one of the labs is running the VSS -> TFS migration tool. If you have an MSDN subscription, once you have played with the image, the next step would be to install the TFS Small Team edition that comes with your subscription.

One thing to note is to make sure you get the latest Service Packs for Visual Studio 2008 and the .NET Framework installed on the image. The service packs fixed some annoying bugs and definitely increased the usability of the system. We have a fairly large SourceSafe database with about 90+ projects and the migration tool took about 32 hours to complete. First I made a backup of our SourceSafe database for testing. Then I ran the migration on the test SourceSafe database. Afterwards, I checked the source tree in TFS and everything transferred fine. We kept all the history for our source files from VSS, which was great. No need to keep that stinking VSS database around after we go live.

We are taking the migration in steps. First the source control, letting our developers get used to using it. Then after that we will migrate the QA and Business Analysts over to use the Work Item tracking features. My advice is to take the migration in steps. Don't do too much at one time. Give time for the people who will be using the system to train up.

[1]: http://www.microsoft.com/downloads/details.aspx?FamilyID=c7a809d8-8c9f-439f-8147-948bc6957812&displaylang=en
Assume all users are malicious. Sanitize all input as soon as possible. Full stop.
As far as I can see, [Changing the error message][1] has no way of changing the message of a specific error. Plus, the manual makes it look like this is a function belonging to Zend_Form, but I get "method not found" when using it on an instance of Zend_Form. An example of the usage would be really great.

[1]: http://framework.zend.com/manual/en/zend.form.forms.html#zend.form.forms.validation.errors/
Perhaps the community can come up with some good text-based CAPTCHAs? We can then come up with a good list based on those with the most votes. [What is a good text-based CAPTCHA?][1] [1]: http://beta.stackoverflow.com/questions/34873/what-is-a-good-text-based-captcha
I sanitize my data right before I do any processing on it. I may need to take the first and last name fields and concatenate them into a third field that gets inserted into the database. I'm going to sanitize the input before I even do the concatenation so I don't get any kind of processing or insertion errors. The sooner the better. Even using JavaScript on the front end (in a web setup) is ideal because that will occur without any data going to the server to begin with.

The scary part is that you might even want to start sanitizing data coming out of your database as well. The recent surge of ASPRox SQL injection attacks that has been going around is doubly lethal because it will infect all database tables in a given database. If your database is hosted somewhere where there are multiple accounts being hosted in the same database, your data becomes corrupted because of somebody else's mistake, but now you've joined the ranks of hosting malware to your visitors due to no initial fault of your own.

Sure this makes for a whole lot of work up front, but if the data is critical, then it is a worthy investment.
Random data in Unit Tests?
|unit-testing|
I have a coworker who writes unit tests for objects which fill their fields with random data. His reason is that it gives a wider range of testing, since it will test a lot of different values, whereas a normal test only uses a single static value.

I've given him a number of different reasons against this, the main ones being:

- random values mean the test isn't truly repeatable (which also means that if the test can randomly fail, it can do so on the build server and break the build)
- if it's a random value and the test fails, we need to a) fix the object and b) force ourselves to test for that value every time, so we know it works, but since it's random we don't know what the value was

Another coworker added:

- if I am testing an exception, random values will not ensure that the test ends up in the expected state
- random data is used for flushing out a system and load testing, not for unit tests

Can anyone else add additional reasons I can give him to get him to stop doing this? (Or alternately, is this an acceptable method of writing unit tests, and my other coworker and I are wrong?)
|sql|mysql|performance|
I am having serious performance problems (queries up to 55 seconds) while running a MySQL query since I added a GROUP_CONCAT clause. My query looks like:

    select ... group_concat(distinct category.name) ....
    from page
    left outer join page_category on page.id = page_category.page_id
    left outer join category on page_category.category_id = category.id
    ....
    group by page.id

As mentioned in the query, among others, my application has three tables: page, category and page_category. A page can be associated with none or multiple categories. Currently page, page_category and category have 9,460, 20,241 and 10 entries, respectively. Can anyone help me to improve my query to avoid such performance problems?
You should not use "this" unless you absolutely must. There IS a penalty associated with unnecessary verbosity. You should strive for code that is exactly as long as it needs to be, and no longer.
I was missing an index on the *page_category.page_id* field. That solved the problem.
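In case it helps anyone with the same symptom, the fix amounts to something like this (the index name is arbitrary):

    CREATE INDEX idx_page_category_page_id ON page_category (page_id);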
Improving performance with OPC tags
|performance|automation|
I am working with a PC-based automation software package called Think'n'Do, created by <a href="http://www.phoenixcontact.com">Phoenix Contact</a>. It does real-time processing (read inputs, run control logic, write outputs), all done within a maximum of 50ms. We have an OPC server that is reading/writing tags from a PLC every 10ms. There is a long delay between writing a tag to the PLC and reading back the written value (Think'n'Do (50ms) > OPC Server (10ms) > PLC (10ms) > OPC Server (10ms) > Think'n'Do (50ms)): that round trip takes up to 6 seconds to complete, when by my math it should only take 130ms. Any ideas of where to look, or why it might be taking so much longer, would be helpful.
@palmsey Pretty much, but the variation on that pattern that I've seen most often gets rid of `ObjectAID` et al. `ParentID` becomes both the PK and the FK to `Parents`. That gets you something like:

* `Parents`
  * `ParentID`
* `ObjectA`
  * `ParentID` (FK and PK)
  * `ColumnFromA NOT NULL`
* `ObjectB`
  * `ParentID` (FK and PK)
  * `ColumnFromB NOT NULL`

`Comments` would remain the same. Then you just need to constrain ID generation so that you don't accidentally wind up with an `ObjectA` row and an `ObjectB` row that both point to the same `Parents` row; the easiest way to do that is to use the same sequence (or whatever) that you're using for `Parents` for `ObjectA` and `ObjectB`.

You also see a lot of schemas with something like:

* `Parents`
  * `ID`
  * `SubclassDiscriminator`
  * `ColumnFromA (nullable)`
  * `ColumnFromB (nullable)`

and `Comments` would remain unchanged. But now you can't enforce all of your business constraints (the subclasses' properties are all nullable) without writing triggers or doing it at a different layer.
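A dialect-agnostic DDL sketch of the first variant (column types and the `Comments` columns are placeholders, since the original table isn't shown):

    CREATE TABLE Parents (
        ParentID INT PRIMARY KEY
    );

    CREATE TABLE ObjectA (
        ParentID    INT PRIMARY KEY REFERENCES Parents (ParentID),
        ColumnFromA VARCHAR(100) NOT NULL
    );

    CREATE TABLE ObjectB (
        ParentID    INT PRIMARY KEY REFERENCES Parents (ParentID),
        ColumnFromB VARCHAR(100) NOT NULL
    );

    CREATE TABLE Comments (
        CommentID INT PRIMARY KEY,
        ParentID  INT NOT NULL REFERENCES Parents (ParentID),
        Body      VARCHAR(4000) NOT NULL
    );

Note the shared primary key: each subclass row borrows its identity from `Parents`, which is what makes the single FK from `Comments` possible.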
To expand on what stjohnroe has said, it looks like the reason you are getting different results is because two different tables with the same name exist on different schemas. By adding the myDB username to the front of the query you now access the table with the data you are expecting. (Since you say the data doesn't belong on "myDB", this probably means the app/proc that is writing the data is writing to the wrong table too.) The resolution is:

1. If the table really doesn't belong on "myDB" then drop it for tidiness' sake (now you may get 904 table-not-found errors when you run your code)
2. Create a synonym to the schema and table you really want to access (e.g. `CREATE SYNONYM myTable FOR aschema.myTable;`)
3. Don't forget to grant access rights from the schema that owns the table (e.g. `GRANT SELECT,INSERT,DELETE ON myTable TO myDB;` (here myDB refers to the user/schema))
AFAIK there is no elegant way to do what you are looking to do; VS will always end up copying it. To be honest, I think it may not be a great idea. Obviously you want to share the lowest common code, but a whole MasterPage? Sounds like you could be asking for trouble, since one minor change could have such an impact on one or more applications. I would suggest instead separating out the good bits of functionality into components/controls and deploying them.
Add the following code to the `<head>`:

    <!--[if lte IE 6]>
    <style type="text/css">
    html, body {
        height: 100%;
        overflow: auto;
    }
    .ie6fixed {
        position: absolute;
    }
    </style>
    <![endif]-->

Add the `ie6fixed` CSS class to whatever you want to be `position: fixed;`
[Dynamically Composing Expression Predicates][1] [1]: http://www.albahari.com/nutshell/predicatebuilder.html
I've been working on a similar script here: <http://github.com/ghewgill/vim-scmdiff> (in fact, they may have the same ancestry). I haven't used scmdiff with cvs, but it should do a diff against the branch you have checked out. You can also specify that you want to diff against a particular revision (with :D _revision_ ). Hopefully this helps, and feel free to contribute if you've got improvements! @braklet: Thanks for the changes. I was uncomfortable with C-d as a key binding choice so C-h might be doable. However, it would be super annoying if one's key mapping mapped Backspace to C-h. Any suggestions? And, I wonder why the cvs commands didn't work in relative dirs for you. That's odd.
1) In Linux this is a function of your desktop environment rather than the OS itself.
2) GNOME and KDE have different methods to accomplish this.
3) There's nothing stopping you from doing it both ways.
I would suggest mocking out your calls to the database. Mocks are basically objects that look like the object you are trying to call a method on, in the sense that they have the same properties, methods, etc. available to the caller. But instead of performing whatever action they are programmed to do when a particular method is called, they skip that altogether and just return a result. That result is typically defined by you ahead of time.

In order to set up your objects for mocking, you probably need to use some sort of inversion of control/dependency injection pattern, as in the following pseudo-code:

    class Bar
    {
        private FooDataProvider _dataProvider;

        public instantiate(FooDataProvider dataProvider)
        {
            _dataProvider = dataProvider;
        }

        public getAllFoos()
        {
            # instead of calling Foo.GetAll() here, we are introducing an extra layer of abstraction
            return _dataProvider.GetAllFoos();
        }
    }

    class FooDataProvider
    {
        public Foo[] GetAllFoos()
        {
            return Foo.GetAll();
        }
    }

Now in your unit test, you create a mock of FooDataProvider, which allows you to call the method GetAllFoos without having to actually hit the database.

    class BarTests
    {
        public TestGetAllFoos()
        {
            # here we set up our mock FooDataProvider
            mockRepository = MockingFramework.new()
            mockFooDataProvider = mockRepository.CreateMockOfType(FooDataProvider)

            # create a new array of Foo objects
            testFooArray = new Foo[] {Foo.new(), Foo.new(), Foo.new()}

            # the next statement will cause testFooArray to be returned every time we call
            # FooDataProvider.GetAllFoos, instead of calling to the database and returning
            # whatever is in there.
            # ExpectCallTo and Returns are methods provided by our imaginary mocking framework
            ExpectCallTo(mockFooDataProvider.GetAllFoos).Returns(testFooArray)

            # now begins our actual unit test
            testBar = new Bar(mockFooDataProvider)
            baz = testBar.GetAllFoos()

            # baz should now equal the testFooArray object we created earlier
            Assert.AreEqual(3, baz.length)
        }
    }

A common mocking scenario, in a nutshell. Of course you will still probably want to unit test your actual database calls too, for which you will need to hit the database.
|unit-testing|tdd|mocking|
I prefer PHPUnit now, but when I started out I used SimpleTest as I didn't always have access to the command line. SimpleTest is nice, but the only thing it really has over PHPUnit, in my opinion, is the web runner. The reasons I like PHPUnit are that it integrates with other PHP developer tools such as [phing][1], [phpUnderControl][2], and [Xinc][3]. As of version 3.0 it [has mocking][4] support, is being actively developed, and the documentation is excellent. Really the only way to answer this question for yourself is to try both out for a time, and see which fits your style better. [1]: http://phing.info "Phing" [2]: http://www.phpundercontrol.org/about.html "phpUnderControl" [3]: http://code.google.com/p/xinc/ "Xinc" [4]: http://www.phpunit.de/pocket_guide/3.2/en/mock-objects.html "PHPUnit Mock Objects"
Use the [flock](http://perldoc.perl.org/functions/flock.html) Luke. **Edit:** [This](http://www.perlmonks.org/?node_id=7058) is a good explanation.
flock creates Unix-style file locks and is available on most OSes Perl runs on. However, flock's locks are advisory only.

edit: emphasized that flock is portable
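For reference, a minimal Perl sketch of taking an exclusive advisory lock before writing (the filename is arbitrary):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    open(my $fh, '>>', 'data.txt') or die "Cannot open: $!";
    flock($fh, LOCK_EX)            or die "Cannot lock: $!";
    print {$fh} "safely appended line\n";
    flock($fh, LOCK_UN);  # the lock is also released when $fh is closed
    close($fh);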
Thanks for the feedback Nick. I've pretty much gotten the profile and connection management working. The trick is figuring out which parts of the Native Wifi API are **not** supported on XP. Fortunately, the [Managed Wifi API][1] has connect/disconnect notification events that do work on XP ([NetworkChange][2] also gives similar change events). [1]: http://www.codeplex.com/managedwifi [2]: http://msdn.microsoft.com/en-us/library/system.net.networkinformation.networkchange.aspx
Running SVN under apache really isn't that hard. And you can use [mod_auth_sspi][1] to integrate with active directory. [1]: http://www.deadbeef.com/index.php/mod_auth_sspi
@Martin That works for local forms that open in InfoPath. Nathan was asking about web-enabled forms. ActiveX controls are disabled for web forms, as evidenced by the informational label at the bottom of the design controls when the form compatibility has been set to the web.

Now, I will admit that I know nothing about the HTML tags to play audio in a browser, but I have something else that might work. I had an InfoPath form that I needed to dynamically load an image into for a web-enabled form. Similar to the ActiveX issue, the Picture control was also disabled. What I did was put some managed code behind the form and execute the following when the form loaded:

    public void FormEvents_Loading(object sender, LoadingEventArgs e)
    {
        string imgPath = "http://yoursite/yourimage.jpeg";

        XPathNodeIterator xpni = MainDataSource.CreateNavigator()
            .SelectSingleNode("/my:FormName/my:RichTextControlName", NamespaceManager)
            .SelectChildren(XPathNodeType.All);

        xpni.Current.InnerXml = "<img xmlns=\"http://www.w3.org/1999/xhtml\" src=\""
            + imgPath + "\" width=\"200px\" height=\"55px\" />";
    }

I don't see why you couldn't take the same approach and load audio rather than an image.
I have to be honest, I am kind of thinking out loud here... But could it not be done with an applet of some sort (be it Java or whatever) that is given trusted permissions (such as those within the Intranet zone)? It may be worth investigating what permissions can be given to each zone. Following a Google search, I think you definitely have a challenge; so far, most of the articles I have seen involve printing to [printers connected to the server](http://aspalliance.com/462). If it's internal, would it be possible to route printing from the server to department/user printers or something?
We use Firebug's console.log, console.time and console.timeEnd (I think) a lot. Firebug also has a built-in profiler.