I have some UI in VB 2005 that looks great in XP Style, but goes hideous in Classic Style.
Any ideas about how to detect which mode the user is in and re-format the forms on the fly?
Thanks in Advance!
-Chris
|
|vb.net|windows-xp|appearance|
|
I have some UI in VB 2005 that looks great in XP Style, but goes hideous in Classic Style.
Any ideas about how to detect which mode the user is in and re-format the forms on the fly?
Thanks in Advance!
-Chris
---
Post Answer Edit:
Thanks Daniel, looks like this will work. I'm using the first solution you posted with the GetCurrentThemeName() function.
I'm doing the following:
**Function Declaration:**
<pre><code>Private Declare Unicode Function GetCurrentThemeName Lib "uxtheme" ( _
    ByVal stringThemeName As System.Text.StringBuilder, ByVal lengthThemeName As Integer, _
    ByVal stringColorName As System.Text.StringBuilder, ByVal lengthColorName As Integer, _
    ByVal stringSizeName As System.Text.StringBuilder, ByVal lengthSizeName As Integer) As Int32
</code></pre>
***Code Body:***
<pre><code>
Dim stringThemeName As New System.Text.StringBuilder(260)
Dim stringColorName As New System.Text.StringBuilder(260)
Dim stringSizeName As New System.Text.StringBuilder(260)
GetCurrentThemeName(stringThemeName, 260, stringColorName, 260, stringSizeName, 260)
MsgBox(stringThemeName.ToString)
</code></pre>
The message box comes up empty when I'm in the Windows Classic style/theme, and shows "C:\WINDOWS\resources\Themes\luna\luna.msstyles" when in the Windows XP style/theme. I'll have to do a little more checking to see what happens if the user sets a theme other than these two, but it shouldn't be a big issue.
|
For learning PowerShell, there are a number of great resources
1. Technet Virtual Labs ([Introduction to Windows PowerShell][1] and [Writing Scripts with Windows PowerShell][2])
2. PowerShellCommunity.org - Forums, blogs, script repository
3. #powershell on irc.freenode.net
4. PowerShell podcasts - PowerScripting.net and Get-Scripting.blogspot.com
For IDE-style environments, you have PowerShell Analyzer (free) and PowerGUI (free), PowerShell Plus (commercial), PrimalScript (commercial), and Admin Script Editor (commercial).
I use PowerShell for everything that I can. Right now, I'm looking at [Psake][3], a PowerShell-based build script environment. I use it for managing my Active Directory, Hyper-V, Twitter, some keyboard automation (hosting PowerShell in a WinForms app to grab keystrokes), and a ton of other stuff. Another cool project I have to check out is [PSExpect][4] for testing. I also use it for database access - monitoring changes made to rows in a database by applications. It is also integrated into my network monitoring solution.
I am also looking to use PowerShell as a scripting engine for a project I am working on.
If you are just learning PowerShell, I would focus on V1. As you get more comfortable, take a look at the CTP, but too much can change between the CTP and what is actually released as V2 for it to be your learning tool.
Good luck!
[1]: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032314395&EventCategory=3
[2]: http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032320637&culture=en-us
[3]: http://codebetter.com/blogs/james.kovacs/archive/2008/06/27/introducing-psake.aspx
[4]: http://codeplex.com/psexpect
|
What is the best method for executing FTP commands from a SQL Server stored procedure? We currently use something like this:
EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1'
The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues.
|
What is best practice for FTP from a SQL Server 2005 stored procedure?
|
Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache:
<ul>
<li><b>Database access:</b> cache the data by first tuning your RDBMS correctly, then by using a layer that detects repeated queries for the same data (with <a href="http://adodb.sourceforge.net/">AdoDB</a>, for example).</li>
<li><b>Calculations extracted from loops in the code,</b> so you don't compute the same value multiple times.</li>
<li><b>The compiled PHP code,</b> using an extension like <a href="http://pecl.php.net/package/APC">APC</a>. This way you don't have to compile the same PHP code for every request.</li>
<li><b>The page sent to the user:</b> make sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or make dynamic pages completely static (with a batch process that generates .html pages); or use a proxy cache like <a href="http://www.squid-cache.org/">Squid</a>.</li>
</ul>
From my experience, I'd bet that your code can be improved a lot before we start talking about caching. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like <a href="http://xdebug.org/">XDebug</a>.
Also verify how well you are writing your SQL queries and how well your tables are indexed. Then check your code again, looking for opportunities to apply the rule "read many times but write once".
Use a simple tool like <a href="http://developer.yahoo.com/yslow/">YSlow</a> to find other simple things to improve. Then check your code again, looking for opportunities to move logic to the browser (via JavaScript).
And the next time, consider making caching an integral part of your architecture.
|
Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache:
<ul>
<li><b>Database access:</b> cache the data by first tuning your RDBMS correctly, then by using a layer that detects repeated queries for the same data (with <a href="http://adodb.sourceforge.net/">AdoDB</a>, for example).</li>
<li><b>Calculations extracted from loops in the code,</b> so you don't compute the same value multiple times.</li>
<li><b>The compiled PHP code,</b> using an extension like <a href="http://pecl.php.net/package/APC">APC</a>. This way you don't have to compile the same PHP code for every request.</li>
<li><b>The page sent to the user:</b> make sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or make dynamic pages completely static (with a batch process that generates .html pages); or use a proxy cache like <a href="http://www.squid-cache.org/">Squid</a>.</li>
<li><b>Prefetching:</b> by this I mean all those opportunities you have to improve the user experience just by doing work while the user isn't looking. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, or precomputing results by storing complex computations in the database.</li>
</ul>
From my experience, I'd bet that your code can be improved a lot before we start talking about caching. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like <a href="http://xdebug.org/">XDebug</a>.
Also verify how well you are writing your SQL queries and how well your tables are indexed. Then check your code again, looking for opportunities to apply the rule "read many times but write just once".
Use a simple tool like <a href="http://developer.yahoo.com/yslow/">YSlow</a> to find other simple things to improve. Then check your code again, looking for opportunities to move logic to the browser (via JavaScript).
And the next time, consider making caching an integral part of your architecture.
|
One of the IIS developers has an excellent walkthrough here:
http://blogs.iis.net/bills/archive/2006/10/31/PHP-on-IIS.aspx
However, for the love of god why?
|
>Updating all rows of the customer table because you forgot to add the where clause.
That was exactly what I did :|. I had updated the password column for all users to a sample string I had typed into the console. The worst part of it was that I was on the production server, checking out some queries, when I did this. My seniors then had to revert to an old backup and had to field calls from some really disgruntled customers. Of course, there was also the time when I used the DELETE statement, which I don't even want to talk about ;-)
|
Using an ocx in a console application
|
|visual-c++|console|c++|
|
I want to quickly test an OCX. How do I drop that OCX into a console application? I have found some tutorials on CodeProject, but they are incomplete.
|
|c++|visual-c++|console|activex|
|
An interface is used for polymorphism, which applies to objects, not types. Therefore (as already noted) it makes no sense to have a static interface member.
|
I'm not using it, but I've seen demos of it. It's very nice.
Boo has a lot of interesting extensibility points in parsing and interpreting the language itself that make it ideal for writing frameworks like Specter. The end result is much nicer looking than you'd be able to get with languages like C#.
Unfortunately, the fact that Boo isn't "in the box" and can't simply be something you check into your source tree and use really holds it back here. It's a much heavier adoption cost than just picking a framework like NSpec.
|
Another alternative would be (if your RDBMS supports it) to use columns of type array. While this breaks the normalization rules, it can be useful in situations like this. One database I know of that has arrays is PostgreSQL.
|
Perhaps a code example would help. I'm going to use C#, but you should be able to follow along.
Let's pretend we have an interface called IPayable:

    public interface IPayable
    {
        void Pay(double amount);
    }

Now, we have two concrete classes that implement this interface:

    public class BusinessAccount : IPayable
    {
        public void Pay(double amount)
        {
            // Logic
        }
    }

    public class CustomerAccount : IPayable
    {
        public void Pay(double amount)
        {
            // Logic
        }
    }

Now, let's pretend we have a collection of various accounts. To do this we will use a generic list of the type IPayable:

    List<IPayable> accountsToPay = new List<IPayable>();
    accountsToPay.Add(new CustomerAccount());
    accountsToPay.Add(new BusinessAccount());

Now, we want to pay $50.00 to all those accounts:

    foreach (IPayable account in accountsToPay)
    {
        account.Pay(50.00);
    }

So now you see how interfaces are incredibly useful.
They are used on instantiated objects only, not on static classes.
If you had made Pay static, there would be no way, when looping through the IPayables in accountsToPay, to figure out whether it should call Pay on BusinessAccount or CustomerAccount.
|
As much as I despise Crystal Reports (we describe digging deep into it as descending the seven layers of Crystal hell), it seems to be the best/most flexible tool for the job. I hope someone comes along and knocks them off the block, though.
Microsoft Reporting Services is an alternative, but didn't have the features we needed.
|
If you want to search for text (i.e. what is being said) inside an audio stream you would have to process it with some kind of speech recognition algorithm and store the text as meta data associated with the files. For video you could also do text recognition for text inside the video. <a href="http://www.evernote.com">Evernote</a> already does this for text inside image files, but has no support for audio as far as I know.
Something similar is possible when using audio to search for audio. I don't know the details of these algorithms, but I'm guessing they involve some kind of frequency analysis. <a href="http://www.shazam.com/music/portal">Shazam</a> is using this kind of technology to identify songs based on audio clips.
Here are some Wikipedia articles that may be useful:
* <a href="http://en.wikipedia.org/wiki/Speech_recognition">Speech recognition</a>
* <a href="http://en.wikipedia.org/wiki/Fast_Fourier_transform">Fast Fourier transform</a>
* <a href="http://en.wikipedia.org/wiki/Frequency_analysis">Frequency analysis</a>
* <a href="http://en.wikipedia.org/wiki/Optical_character_recognition">Optical character recognition (OCR)</a>
|
If you want to search for text (i.e. what is being said) inside an audio stream you would have to process it with some kind of speech recognition algorithm and store the text as meta data associated with the files. For video you could also do text recognition for text inside the video. <a href="http://www.evernote.com">Evernote</a> already does this for text inside image files, but has no support for audio as far as I know.
Something similar is possible when using audio to search for audio. I don't know the details of these algorithms, but I'm guessing they involve some kind of frequency analysis. <a href="http://www.shazam.com/music/portal">Shazam</a> is using this kind of technology to identify songs based on audio clips.
Here are some Wikipedia articles that may be useful:
* <a href="http://en.wikipedia.org/wiki/Speech_recognition">Speech recognition</a>
* <a href="http://en.wikipedia.org/wiki/Fast_Fourier_transform">Fast Fourier transform</a>
* <a href="http://en.wikipedia.org/wiki/Frequency_spectrum">Frequency analysis (frequency spectrum)</a>
* <a href="http://en.wikipedia.org/wiki/Optical_character_recognition">Optical character recognition (OCR)</a>
|
ActiveReports and DevExpress' reporting tools are both pretty good. The ReportViewer control works too (the price is right), but I find it more difficult to use. And SSRS reports can be embedded into your ASP.Net apps as well.
|
I would suggest taking a look at MS SSRS (Microsoft SQL Server Reporting Services).
|
The key reason for separating internal business objects from the data contracts/message contracts is that you don't want internal changes to your app to necessarily change the service contract. If you're creating versioned web services (with more than one version of the implemented interfaces), you often have a single version of your app's business objects alongside more than one version of the data contract/message contract objects.
In addition, in complex Enterprise Integration situations you often have a canonical data format (Data and Message contracts) which is shared by a number of applications, which forces each application to map the canonical data format to its internal object model.
If you want a tool to help with the nitty-gritty of separating data contracts/message contracts and so on, check out Microsoft's Web Services Software Factory (http://msdn.microsoft.com/en-us/library/cc487895.aspx), which has some good recipes for solving the WCF plumbing.
In regard to exceptions, WCF automatically wraps all exceptions in FaultExceptions, which are serialized as wire-format faults.
It's also possible to throw generic FaultExceptions, which allow you to specify additional details to be included with the serialized fault. Since the faults thrown by a web service operation are part of its contract, it's a good idea to declare the faults on the operation declaration:

    [FaultContract(typeof(AuthenticationFault))]
    [FaultContract(typeof(AuthorizationFault))]
    StoreLocationResponse StoreLocation(StoreLocationRequest request);

Both the AuthenticationFault and AuthorizationFault types represent the additional details to be serialized and sent over the wire, and can be thrown as follows:

    throw new FaultException<AuthenticationFault>(new AuthenticationFault());

If you want more details then shout; I've been living and breathing this stuff for so long I'm almost making a living doing it ;)
|
In order to be able to use an extension in Explorer, the "bitness" of the extension needs to match the bitness of the operating system. This is because (at least under Windows) you can't load a 32-bit DLL into a 64-bit process -- or vice versa. If there's no 64-bit version of HgTortoise, then you can't use it with Explorer on a 64-bit Windows OS.
|
I've come across this problem once.
I can't really tell you why, but I don't think cron executes with the same environment variables as the user does.
I actually had to export the correct path for my programs to execute well.
Is gpg at least trying to execute?
Or are the files you are trying to encrypt actually in the current directory when the cron job executes?
Maybe try running `whereis gpg` and `echo $PATH` in your script to see if it's included... Worked for me.
|
I would use a multidimensional array so that each element in the array is a grid reference to a square on the board.
Thus:

    board = array(A = array(1,2,3,4,5,6,7,8),
                  B = array(1,2,3,4,5,6,7,8),
                  ...
                 )

Then *board[A][1]* is the board square A1.
In reality you would use numbers, not letters, to help keep the maths for where pieces are allowed to move simple.
|
Do you mean why doesn't the language support multithreading, or why don't the JavaScript engines in browsers support multithreading?
The answer to the first question is that JavaScript in the browser is meant to run in a sandbox, in a machine/OS-independent way; adding multithreading support would complicate the language and tie it too closely to the OS.
|
VS.NET defaults to private class
|
|c#|visual-studio|
|
Why does Visual Studio declare new classes as private in C#? I almost always switch them over to public, am I the crazy one?
|
int[8][8]
0 = no piece
1 = king
2 = queen
3 = rook
4 = knight
5 = bishop
6 = pawn
Use positive ints for white and negative ints for black.
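As a quick illustration of this scheme (Python here just for sketching; the constant and function names are mine, not part of the answer):

```python
# Piece codes as above: 0 = no piece, 1 = king, 2 = queen, 3 = rook,
# 4 = knight, 5 = bishop, 6 = pawn; positive = white, negative = black.
EMPTY, KING, QUEEN, ROOK, KNIGHT, BISHOP, PAWN = range(7)

def initial_board():
    back_rank = [ROOK, KNIGHT, BISHOP, QUEEN, KING, BISHOP, KNIGHT, ROOK]
    board = [[EMPTY] * 8 for _ in range(8)]
    board[0] = [-piece for piece in back_rank]  # black back rank
    board[1] = [-PAWN] * 8                      # black pawns
    board[6] = [PAWN] * 8                       # white pawns
    board[7] = list(back_rank)                  # white back rank
    return board
```

A nice property of the signed encoding is that `abs(board[r][c])` gives the piece type while the sign gives the side.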
|
C++, from which C# is derived, specifies that the default class access level is private. C# carries this forward, for better or worse.
|
You can pick up a lot from [Pro JavaScript Techniques](http://www.amazon.com/Pro-JavaScript-Techniques-John-Resig/dp/1590597273/ref=pd_bbs_sr_6?ie=UTF8&s=books&qid=1220371851&sr=8-6), and I'm looking forward to Resig's forthcoming [Secrets of the JavaScript Ninja](http://jsninja.com/).
|
No, I always have to slap that "public" keyword on the front of the class too, so you are not alone. I guess the template designers thought it was a good idea to start with the very basics. You can edit these templates in your Visual Studio install if it really annoys you that much, but I haven't gotten to that point yet.
|
Yes, it's unsafe.
There's no easy way of allowing it. You could have a domain whitelist that allowed YouTube, Hulu, etc. through, but whitelisting is inherently a pain in the ass - you'd be constantly updating.
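For what such a whitelist check looks like in practice, here is a minimal sketch (Python for illustration; the host list and function name are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical whitelist - in practice this is the part you'd be constantly updating.
ALLOWED_EMBED_HOSTS = {"www.youtube.com", "youtube.com", "www.hulu.com", "hulu.com"}

def is_allowed_embed(url):
    """Allow an embed only if its host is explicitly whitelisted."""
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_EMBED_HOSTS
```

Anything not on the list (including malformed URLs) is rejected by default, which is the safe failure mode here.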
|
Configuring sendmail behind a firewall
|
|configuration|firewall|sendmail|
|
I'm setting up a server which is on a network behind a firewall and I want programs on this computer to be able to use sendmail to send emails to any email address. We have an SMTP server running on this network (let's call it mailrelay.example.com) which is how we're supposed to get outgoing emails through the firewall.
So how do I configure sendmail to send all mail through mailrelay.example.com? Googling hasn't given me the answer yet, and has only revealed that sendmail configuration is extremely complex and annoying.
|
|configuration|firewall|sendmail|smarthost|
|
Propagation of Oracle Transactions Between C++ and Java
|
|java|oracle-database|transactions|jta|jts|
|
We have an existing C++ application that we are going to gradually replace with a new Java-based system. Until we have completely reimplemented everything in Java we expect the C++ and Java to have to communicate with each other (RMI, SOAP, messaging, etc - we haven't decided).
Now my manager thinks we'll need the Java and C++ sides to participate in the same Oracle DB transaction. This is related to, but different from, the usual distributed transaction problem of having a single process coordinate two transactional resources, such as a DB and a message queue.
I think propagating a transaction across processes is a terrible idea from a performance and stability point-of-view, but I am still going to be asked for a solution.
I am familiar with XA transactions and I've done some work with the JBoss Transaction Manager, but my googling hasn't turned up anything good on propagating an XA transaction between 2 processes.
We are using Spring on the Java side and their documentation explicitly states they do not provide any help with transaction propagation.
We are not planning on using a traditional JEE server (for example: IBM Websphere), which may have support for propagation (not that I can find any definitive documentation).
Any help or pointers on solutions is greatly appreciated.
|
How are you setting focus? I haven't tried the specific scenario you've suggested, but here's how I set focus to my controls:
    Public Sub SetFocus(ByVal ctrl As Control)
        Dim sb As New System.Text.StringBuilder
        Dim p As Control
        p = ctrl.Parent
        While (Not (p.GetType() Is GetType(System.Web.UI.HtmlControls.HtmlForm)))
            p = p.Parent
        End While
        With sb
            .Append("<script language='JavaScript'>")
            .Append("function SetFocus()")
            .Append("{")
            .Append("document.")
            .Append(p.ClientID)
            .Append("['")
            .Append(ctrl.UniqueID)
            .Append("'].focus();")
            .Append("}")
            .Append("window.onload = SetFocus;")
            .Append("")
            .Append("</script")
            .Append(">")
        End With
        ctrl.Page.RegisterClientScriptBlock("SetFocus", sb.ToString())
    End Sub
So, I'm not sure what method you're using, but if it's different than mine, give that a shot and see if you still have a problem or not.
|
Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache:
<ul>
<li><b>Database access:</b> cache the data by first tuning your RDBMS correctly, then by using a layer that detects repeated queries for the same data (with <a href="http://adodb.sourceforge.net/">AdoDB</a>, for example).</li>
<li><b>Calculations extracted from loops in the code,</b> so you don't compute the same value multiple times. <i>This covers your third option: storing results in the session for the user.</i></li>
<li><b>The compiled PHP code,</b> using an extension like <a href="http://pecl.php.net/package/APC">APC</a>. This way you don't have to compile the same PHP code for every request.</li>
<li><b>The page sent to the user:</b> make sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or make dynamic pages completely static (with a batch process that generates .html pages); or use a proxy cache like <a href="http://www.squid-cache.org/">Squid</a>.</li>
<li><b>Prefetching:</b> by this I mean all those opportunities you have to improve the user experience just by doing work while the user isn't looking. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, or precomputing results by storing complex computations in the database.</li>
</ul>
From my experience, I'd bet that your code can be improved a lot before we start talking about caching. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like <a href="http://xdebug.org/">XDebug</a>.
Also verify how well you are writing your SQL queries and how well your tables are indexed. Then check your code again, looking for opportunities to apply the rule "read many times but write just once".
Use a simple tool like <a href="http://developer.yahoo.com/yslow/">YSlow</a> to find other simple things to improve. Then check your code again, looking for opportunities to move logic to the browser (via JavaScript).
And the next time, consider making caching an integral part of your architecture.
|
Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache:
<ul>
<li><b>Database access:</b> cache the data by first tuning your RDBMS correctly, then by using a layer that detects repeated queries for the same data (with <a href="http://adodb.sourceforge.net/">AdoDB</a>, for example).</li>
<li><b>Calculations extracted from loops in the code,</b> so you don't compute the same value multiple times. <i>This covers your third option: storing results in the session for the user.</i></li>
<li><b>The compiled PHP code,</b> using an extension like <a href="http://pecl.php.net/package/APC">APC</a>. This way you don't have to compile the same PHP code for every request.</li>
<li><b>The page sent to the user:</b> make sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or make dynamic pages completely static (with a batch process that generates .html pages); or use a proxy cache like <a href="http://www.squid-cache.org/">Squid</a>.</li>
<li><b>Prefetching:</b> by this I mean all those opportunities you have to improve the user experience just by doing work while the user isn't looking. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, or precomputing results by storing complex computations in the database.</li>
</ul>
From my experience, I'd bet that your code can be improved a lot before we start talking about caching. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like <a href="http://xdebug.org/">XDebug</a>.
Also verify how well you are writing your SQL queries and how well your tables are indexed. Then check your code again, looking for opportunities to apply the rule "read many times but write just once".
Use a simple tool like <a href="http://developer.yahoo.com/yslow/">YSlow</a> to find other simple things to improve. Then check your code again, looking for opportunities to move logic to the browser (via JavaScript).
|
SQL Server - Does column order matter?
|
|sql-server|performance|optimization|
|
In terms of performance and optimizations:
- When constructing a table in SQL Server, does it matter what order I put the columns in?
- Does it matter if my primary key is the first column?
- When constructing a multi-field index, does it matter if the columns are adjacent?
- Using ALTER TABLE syntax, is it possible to specify in what position I want to add a column?
- If not, how can I move a column to a different position?
|
I would say the answer to all those questions is no, although my experience with MS SQL only goes as far as SQL 2000. It might be a different story in SQL 2005.
|
No to the first three, because the index will hold the data, and no to the last one as well.
|
How do you swap DIVs on mouseover? (jquery?)
|
|javascript|jquery|html|
|
|javascript|html|css|jquery|
|
This must be the second most simple rollover effect, yet I can't find any simple solution.
**Wanted:** I have a list of items and a corresponding list of slides (DIVs). After loading, the first list item should be selected (bold) and the first slide should be visible. When the user hovers over another list item, that list item should be selected instead and the corresponding slide shown.
The following code works, but *is awful*. How can I get this behavior in an elegant way? jquery has dozens of animated and complicated rollover effects, but I didn't come up with a clean way for this effect.
<script type="text/javascript">
function switchTo(id) {
document.getElementById('slide1').style.display=(id==1)?'block':'none';
document.getElementById('slide2').style.display=(id==2)?'block':'none';
document.getElementById('slide3').style.display=(id==3)?'block':'none';
document.getElementById('slide4').style.display=(id==4)?'block':'none';
document.getElementById('switch1').style.fontWeight=(id==1)?'bold':'normal';
document.getElementById('switch2').style.fontWeight=(id==2)?'bold':'normal';
document.getElementById('switch3').style.fontWeight=(id==3)?'bold':'normal';
document.getElementById('switch4').style.fontWeight=(id==4)?'bold':'normal';
}
</script>
<ul id="switches">
<li id="switch1" onmouseover="switchTo(1);" style="font-weight:bold;">First slide</li>
<li id="switch2" onmouseover="switchTo(2);">Second slide</li>
<li id="switch3" onmouseover="switchTo(3);">Third slide</li>
<li id="switch4" onmouseover="switchTo(4);">Fourth slide</li>
</ul>
<div id="slides">
<div id="slide1">Well well.</div>
<div id="slide2" style="display:none;">Oh no!</div>
<div id="slide3" style="display:none;">You again?</div>
<div id="slide4" style="display:none;">I'm gone!</div>
</div>
|
GetHashCode() should always be in sync with Equals(), and Equals() isn't necessarily guaranteed to check for <em>all</em> of the fields in your object (there's certain situations where you want that to not be the case).
Furthermore, GetHashCode() isn't guaranteed to return unique values for all possible object states. It's conceivable (though unlikely) that two object states could result in the same HashCode (which does, after all, only have an int's worth of possible states; see <a href="http://en.wikipedia.org/wiki/Pigeonhole_principle">the Pigeonhole Principle</a> for more details).
If you can ensure that Equals() checks all of the appropriate fields, then you could possibly clone the object to record its state and later check the recorded state against the current one with Equals() to see if it has changed.
BTW: Your mention of serialization gave me an idea. You could serialize the object, record the result, and then, when you want to check whether the object has changed, repeat the process and compare the serialized values. That would let you check for state changes without having to make any code changes to your object. However, this isn't a great solution, because:
1. It's probably very inefficient
1. It's prone to serialization changes in the object; you might get false positives on the object state change.
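The serialize-and-compare idea can be sketched in a few lines (Python for illustration; the class and function names are mine, and the same pattern applies to .NET serialization):

```python
import pickle

def snapshot(obj):
    # Serialize the object's state; for simple data-holder objects, equal
    # state yields equal bytes (not guaranteed for arbitrary object graphs).
    return pickle.dumps(vars(obj), protocol=pickle.HIGHEST_PROTOCOL)

class Order:
    def __init__(self, item, quantity):
        self.item = item
        self.quantity = quantity

order = Order("widget", 3)
before = snapshot(order)
order.quantity = 4                    # mutate the object
changed = snapshot(order) != before   # the snapshots now differ
```

As noted above, this trades efficiency for convenience, and serialization-format changes can produce false positives.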
|
You might want to check out these few options, which may be required by a Ruby on Rails environment, in which case they should be compiled. Just make sure the directory corresponds to your current settings.

    --with-openssl-dir=/usr --with-readline-dir=/usr --with-zlib-dir=/usr

|
If you're talking about the shortest distance between two real cities on a real spherical planet, like Earth, you want the [great circle distance](http://en.wikipedia.org/wiki/Great-circle_distance).
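The great-circle distance is usually computed with the haversine formula; a minimal sketch (Python for illustration, Earth radius assumed to be 6371 km):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (latitude, longitude) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```

The haversine form is preferred over the naive spherical law of cosines because it stays numerically stable for nearby points.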
|
I'm tackling exactly this problem, and I had completely spaced [iCalendar][1] ([rfc 2445][2]) up until reading this thread, so I have no idea how well this will or won't integrate with that. Anyway the design I've come up with so far looks something like this:
- You can't possibly store all the instances of a recurring event, at least not before they occur, so I simply have one table that stores the first instance of the event as an actual date, an optional expiration, and nullable repeat_unit and repeat_increment fields to describe the repetition. For single instances the repetition fields are null; otherwise the units will be 'day', 'week', 'month', or 'year', and the increment is simply the multiple of units to add to the start date for the next occurrence.
- Storing past events only seems advantageous if you need to establish relationships with other entities in your model, and even then it's not necessary to have an explicit "event instance" table in every case. If the other entities already have date/time "instance" data then a foreign key to the event (or join table for a many-to-many) would most likely be sufficient.
- To do "change this instance"/"change all future instances", I was planning on just duplicating the events and expiring the stale ones. So to change a single instance, you'd expire the old event at its last occurrence, make a copy for the new, unique occurrence with the changes and without any repetition, and another copy of the original at the following occurrence that repeats into the future. Changing all future instances is similar: you would just expire the original and make a new copy with the changes and repetition details.
The two problems I see with this design so far are:
1. It makes MWF-type events hard to represent. It's possible, but forces the user to create three separate events that repeat weekly on M, W, F individually, and any changes they want to make will have to be done on each one separately as well. These kinds of events aren't particularly useful in my app, but it does leave a wart on the model that makes it less universal than I'd like.
2. By copying the events to make changes, you break the association between them, which could be useful in some scenarios (or, maybe it would just be occasionally problematic.) The event table could theoretically contain a "copied_from" id field to track where an event originated, but I haven't fully thought through how useful something like that would be. For one thing, parent/child hierarchical relationships are a pain to query from SQL, so the benefits would need to be pretty heavy to outweigh the cost for querying that data. You could use a [nested-set][3] instead, I suppose.
Lastly I think it's *possible* to compute events for a given timespan using straight SQL, but I haven't worked out the exact details and I think the queries usually end up being too cumbersome to be worthwhile. However for the sake of argument, you can use the following expression to compute the difference in months between a given month and year and an event's start date:
(:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12))
Building on the last example, you could use MOD to determine whether the difference in months is the correct multiple:
MOD((:month + (:year * 12)) - (MONTH(occursOn) + (YEAR(occursOn) * 12)), repeatIncrement) = 0
Anyway this isn't perfect (it doesn't ignore expired events, doesn't factor in start / end times for the event, etc), so it's only meant as a motivating example. Generally speaking though I think most queries will end up being too complicated. You're probably better off querying for events that occur during a given range, or don't expire before the range, and computing the instances themselves in code rather than SQL. If you *really* want the database to do the processing then a stored procedure would probably make your life a lot easier.
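To make the month arithmetic concrete, the same check is easy to state in ordinary code. A Python sketch (the parameter names mirror the SQL above):

```python
def occurs_in_month(start_year, start_month, repeat_increment, year, month):
    """True if a monthly-repeating event starting in (start_year, start_month)
    has an occurrence in (year, month): the difference in months must be a
    non-negative multiple of repeat_increment."""
    diff = (month + year * 12) - (start_month + start_year * 12)
    return diff >= 0 and diff % repeat_increment == 0

# An event starting 2008-01 that repeats every 3 months
# occurs in 2008-01, 2008-04, 2008-07, ...
```

As with the SQL version, this ignores expiration and day-of-month details; it only answers whether the month lines up.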
[1]: http://en.wikipedia.org/wiki/ICalendar
[2]: http://tools.ietf.org/html/rfc2445
[3]: http://dev.mysql.com/tech-resources/articles/hierarchical-data.html
|
Programmatically or manually?
Manually, I prefer [AdExplorer][1], which is a nice Active Directory browser. You just connect to your domain controller and then you can look up the user and see all the details. Of course, you need permissions on the domain controller; I'm not sure which ones.
Programmatically, it depends on your language of course. On .NET, the [System.DirectoryServices][2] namespace is your friend. (I don't have any code examples here unfortunately)
For Active Directory, I'm not really an expert apart from how to query it, but here are two links I found useful:
[http://www.computerperformance.co.uk/Logon/LDAP_attributes_active_directory.htm][3]
[http://en.wikipedia.org/wiki/Active_Directory][4] (General stuff about the Structure of AD)
[1]: http://technet.microsoft.com/en-us/sysinternals/bb963907.aspx
[2]: http://msdn.microsoft.com/en-us/library/system.directoryservices.aspx
[3]: http://www.computerperformance.co.uk/Logon/LDAP_attributes_active_directory.htm
[4]: http://en.wikipedia.org/wiki/Active_Directory
|
- Loading assemblies from the GAC means less overhead, plus the security of knowing your application will always load the correct version of a .NET library.
- You shouldn't ngen assemblies that are outside of the GAC, because there will be almost no performance gain, and in many cases even a loss in performance.
- You're already using the GAC, because all standard .NET assemblies are actually in the GAC and ngened (during installation).
- Using the GAC for your own libraries adds complexity to deployment; I would try to avoid it at all costs.
|
- Loading assemblies from the GAC means less overhead, plus the security of knowing your application will always load the correct version of a .NET library.
- You shouldn't ngen assemblies that are outside of the GAC, because there will be almost no performance gain, and in many cases even a loss in performance.
- You're already using the GAC, because all standard .NET assemblies are actually in the GAC and ngened (during installation).
- Using the GAC for your own libraries adds complexity to deployment; I would try to avoid it at all costs.
So to sum it all up: start simple, and if you later see major performance gains from putting your assemblies into the GAC and NGENing them, go for it; otherwise don't bother. The GAC is more suitable for frameworks where the library is expected to be shared among multiple applications; in 99% of cases, you don't need it.
|
- Loading assemblies from the GAC means less overhead, plus the security of knowing your application will always load the correct version of a .NET library.
- You shouldn't ngen assemblies that are outside of the GAC, because there will be almost no performance gain, and in many cases even a loss in performance.
- You're already using the GAC, because all standard .NET assemblies are actually in the GAC and ngened (during installation).
- Using the GAC for your own libraries adds complexity to deployment; I would try to avoid it at all costs.
- Your users need to be logged in as administrators during installation if you want to put something into the GAC, which is quite a problem for many types of applications.
So to sum it all up: start simple, and if you later see major performance gains from putting your assemblies into the GAC and NGENing them, go for it; otherwise don't bother. The GAC is more suitable for frameworks where the library is expected to be shared among multiple applications; in 99% of cases, you don't need it.
|
Atom feed documents and Atom entry documents are used as the representation format for RESTful web services that follow the Atom Publishing Protocol (AtomPub).
I personally have used syndication feeds to expose a sub-set of the Windows Event Log information so that I could subscribe and be notified of critical events on a server.
|
I'll second the recommendation for [Capistrano][1], though if you're looking for a GUI-based solution you could try the [Webistrano][2] front end. Clean, ssh-based, sane deployment and rollback semantics and easy scripting and extensibility via ruby.
[1]:http://www.capify.org
[2]:http://blog.innerewut.de/webistrano/
|
I tend to have fewer exception types, although it's not really the OO way to do it. Instead I put an enum in my custom exceptions, which classifies the exception. Most of the time I have a custom base exception, which holds a couple of members that can be overridden or customized in derived exception types.
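The idea is language-neutral; sketched here in Python for brevity, with purely illustrative names:

```python
from enum import Enum

class ErrorKind(Enum):
    VALIDATION = 1
    NOT_FOUND = 2
    CONFLICT = 3

class AppError(Exception):
    """Base exception carrying a classification, instead of one subclass per case."""
    def __init__(self, kind, message):
        super().__init__(message)
        self.kind = kind

# Callers that care about the distinction branch on the enum;
# everyone else just catches the base type.
try:
    raise AppError(ErrorKind.NOT_FOUND, "user 42 does not exist")
except AppError as e:
    handled = (e.kind is ErrorKind.NOT_FOUND)
```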
A couple of months ago I [blogged][1] about the idea of how to internationalize Exceptions. It includes some of the ideas mentioned above.
[1]: http://dlinsin.blogspot.com/2008/04/how-to-internationalize-exceptions-ii.html
|
On a freelance job I did, we set up three separate environments.
* A dev server that ran continuous builds using CruiseControl. Any check-in would trigger a build. QA testing was done here.
* A test server where user acceptance testing was done.
* Production.
The workflow was as follows:
1. Developer checks in changes to SourceControl.
2. CruiseControl builds and deploys the build to Dev.
3. Dev is QA'ed
4. After passing QA, a robocopy script is run that deploys the Dev build to Test.
5. Test is UAT'ed
6. After Test passes, a robocopy script is run that deploys Test to PRD.
|
For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive.
For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users.
I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters.
Don't accept when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They're just fishing, and they need to actually spend as much time THINKING about THEIR requirements as YOU would have to spend BUILDING their solution.
I've spent too much time building "the system that can report everything", only for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quickly as possible and then spend time "systemising" the most important reports.
|
For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive.
For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users.
I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters.
Don't accept when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They are just fishing, and they need to actually spend as much time THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution.
I've spent too much time building "the system that can report everything", only for it to become out of date or out of favour before it was finished. Much better to get the quick wins out of the way as quickly as possible and then spend time "systemising" the most important reports.
|
What to use for Messaging with C#
|
|c#|messaging|
|
So my company stores a lot of data in a FoxPro database, and to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously, for a snappier user experience. I started looking at [ActiveMQ][1] but don't know how well C# will hook into it. I'd like to hear what you all think.
[1]: http://activemq.apache.org/
|
So my company stores a lot of data in a FoxPro database, and to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously, for a snappier user experience. I started looking at [ActiveMQ][1] but don't know how well C# will hook into it. I'd like to hear what you all think.
[1]: http://activemq.apache.org/
edit: It is going to be a web application. Anything touching this FoxPro database is kinda slow (probably because the person who set it up 10 years ago messed it all to hell; some of the table files are incredibly large). We replicate the FoxPro data to SQL nightly, and most of our data reads are OK being a day old, so we are focusing on the writes. Plus, the writes affect a critical part of the user experience (purchasing), so we store them in SQL and then just send a message to have them put into FoxPro when it can be done. I wish we could just get rid of the FoxPro database; unfortunately the company doesn't want to get rid of a very old piece of software they bought that depends on it.
|
|c#|messaging|activemq|
|
If you need to do FTP from within the database, then I would go with a .NET assembly as Kevin suggested. That would provide the most control over the process, plus you would be able to log meaningful error messages to a table for reporting.
Another option would be to write a command line app that reads the database for commands to run. You could then define a scheduled task to call that command line app every minute, or whatever the polling period needs to be. That would be more secure than enabling CLR support on the database server.
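A rough sketch of that polling pattern (Python, with a hypothetical `ftp_commands` table; SQLite stands in here for the real database, and `run_ftp` is whatever actually performs the transfer):

```python
import sqlite3

def process_pending(conn, run_ftp):
    """Read queued FTP commands from the database, run each one, and record
    the outcome. Meant to be invoked by a scheduled task each polling period."""
    rows = conn.execute(
        "SELECT id, command FROM ftp_commands WHERE status = 'pending'"
    ).fetchall()
    for cmd_id, command in rows:
        try:
            run_ftp(command)        # perform the transfer
            status = 'done'
        except Exception:
            status = 'error'        # a real app would also log the error message
        conn.execute("UPDATE ftp_commands SET status = ? WHERE id = ?",
                     (status, cmd_id))
    conn.commit()
```

Recording the outcome in the same table also gives you the meaningful error reporting mentioned above for free.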
|
Sorry if this isn't what you are asking for...
Have you considered some sort of cache behind the scenes that acts a bit like the "bucket system" used with asynchronous sockets in C/C++ with Winsock? Basically, it works by accepting requests and sending an immediate response back to the web app, and when it finally gets around to finding your record, it updates it in the app via AJAX or any other technology of your choice. Since I'm not a C# programmer I can't provide a specific example. Hope this helps!
|
@Greg: The Value property concatenates all the text contents of any child nodes. So if the body element contains only text it works, but if it contains XHTML I get all the text concatenated together but none of the tags.
|
@Greg: It appears you've edited your answer to be a completely different answer. To which my answer is yes, I could do this using System.Xml but was hoping to get my feet wet with LINQ to XML.
I'll leave my original reply below in case anyone else wonders why I can't just use the XElement's .Value property to get what I need:
@Greg: The Value property concatenates all the text contents of any child nodes. So if the body element contains only text it works, but if it contains XHTML I get all the text concatenated together but none of the tags.
|
NCover with the nunit2 task in NAnt
|
|nant|nunit|task|ncover|
|
Is there any chance of getting this to work? I want my tests to be run by the nunit2 task in NAnt. In addition, I want to run NCover without running the tests again.
|
Have you had a look at [InputEx][1]?
[1]: http://javascript.neyric.com/inputex/
|
In .NET 3.0+ you can replace your ItemDataBound handler and the asp:Literal by doing something like this:
<ItemTemplate>
<tr>
<td><%# Eval("published") %></td>
...
where "published" is the name of a field in the data you have bound to the repeater
|
In .NET 3.0+ you can replace your ItemDataBound handler and the asp:Literal by doing something like this:
<ItemTemplate>
<tr>
<td><%# Eval("published") %></td>
...
where "published" is the name of a field in the data you have bound to the repeater
Edit:
@[Alassek][1]: I think the performance hit of reflection is often over-emphasized. Obviously you need to benchmark performance of your app, but the hit of the Eval is likely measured in milliseconds. Unless your app is serving many concurrent hits, this probably isn't an issue, and the simplicity of the code using Eval, along with it being a good separation of the presentation, make it a good solution.
[1]: http://stackoverflow.com/questions/43803/how-do-i-best-populate-an-html-table-in-aspnet#43894
|
I know it doesn't answer the question, but I have always written my own, or rather written it once and tweaked it for other apps. When I store the questions in the DB I store what input type it is, then on the form I dynamically create the appropriate control depending on which input type the question needs and add that control to a dynamically created table cell as I go.
If you choose to do that, just remember when processing the form that the controls don't exist on postback; you need to recreate them.
It is not too bad to write, if you have the time. My current form module is running a few dozen forms from the one module.
|
Views in separate assemblies in ASP.NET MVC
|
|c#|mvc|plugins|
|
I'm trying to create a web application where I want to be able to plug in separate assemblies. I'm using MVC preview 4 combined with Unity for dependency injection, which I use to create the controllers from my plugin assemblies. I'm using WebForms (default ASPX) as my view engine.
If I want to use a view, I'm stuck with the ones that are defined in the core project, because of the dynamic compilation of the ASPX part. I'm looking for a proper way to enclose ASPX files in a different assembly, without having to go through the whole deployment step. Am I missing something obvious? Or should I resort to creating my views programmatically?
|
|c#|mvc|plugins|asp.net-mvc|
|
|c#|asp.net|asp.net-mvc|mvc|plugins|
|
I'm trying to create a web application where I want to be able to plug in separate assemblies. I'm using MVC preview 4 combined with Unity for dependency injection, which I use to create the controllers from my plugin assemblies. I'm using WebForms (default ASPX) as my view engine.
If I want to use a view, I'm stuck with the ones that are defined in the core project, because of the dynamic compilation of the ASPX part. I'm looking for a proper way to enclose ASPX files in a different assembly, without having to go through the whole deployment step. Am I missing something obvious? Or should I resort to creating my views programmatically?
----------
Update: I changed the accepted answer. Even though Dale's answer is very thorough, I went for the solution with a different virtual path provider. It works like a charm, and takes only about 20 lines of code altogether, I think.
|
You should really talk them into allowing SQLite. It is super simple to set up, and operates like Access would (as a file sitting next to the app on the same server).
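For instance, in Python the whole "setup" is opening a file (the file name here is arbitrary):

```python
import sqlite3

# The entire "installation" is a file next to the app, much like an Access .mdb;
# there is no server process to configure.
conn = sqlite3.connect("app_data.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
name = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
conn.commit()
conn.close()
```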
|
Both mechanisms have value. Any decent test framework will catch the standard assert() anyway, so a test run that causes the assert to fail will result in a failed test.
I typically have a series of asserts at the start of each C++ method with a comment '// preconditions'; it's just a sanity check on the state I expect the object to have when the method is called. These dovetail nicely into any TDD framework, because they not only work at runtime when you're testing functionality, but they also work at test time.
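The pattern carries over directly to other languages. A Python sketch of the same idea (the method and fields here are made up for illustration):

```python
def withdraw(account, amount):
    # preconditions: sanity checks on the state we expect when called
    assert account is not None
    assert amount > 0
    assert account["balance"] >= amount

    account["balance"] -= amount
    return account["balance"]
```

Under a test runner, a violated precondition fires as an ordinary AssertionError, so it shows up as a failed test rather than a silent bad state.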
|
My answer is let your developers choose the tools they are best with.
|
Thanks adeel825 & Michael Stum.
My problem is, though, that I'm in a big corporation and do not have access to log in as the domain admin or to view the Active Directory, so I guess my solution is to try and get that level of access.
Thanks guys!
|
I know at least [yahoo!'s Rich Text Editor][1] will let you use the included spell checker in FireFox.
I also tested FCKeditor, but that requires the user to install additional plugins on their computer.
[1]: http://developer.yahoo.com/yui/editor/
|
I know at least [yahoo!'s Rich Text Editor][1] will let you use the included spell checker in FireFox.
I also tested FCKeditor, but that requires the users to install additional plugins on their computer.
[1]: http://developer.yahoo.com/yui/editor/
|
You need to go to the Active Directory Users Snap In after logging in as a domain admin on the machine:
1. Go to Start --> Run and type in mmc.
2. In the MMC console, go to File --> Add/Remove Snap-In.
3. Click Add, select Active Directory Users and Computers, and select Add.
4. Hit Close and then hit OK.
From here you can expand the domain tree and search (by right-clicking on the domain name).
When you search for someone, you can select the columns from View --> Choose Columns. This should help you search for the person or group you are looking for.
|
You need to go to the Active Directory Users Snap In after logging in as a domain admin on the machine:
1. Go to Start --> Run and type in mmc.
2. In the MMC console, go to File --> Add/Remove Snap-In.
3. Click Add, select Active Directory Users and Computers, and select Add.
4. Hit Close and then hit OK.
From here you can expand the domain tree and search (by right-clicking on the domain name).
You may not need special privileges to view the contents of the Active Directory domain, especially if you are logged in on that domain. It is worth a shot to see how far you can get.
When you search for someone, you can select the columns from View --> Choose Columns. This should help you search for the person or group you are looking for.
|