Q: How to convert Decimal to Double in C#? I want to assign the decimal variable "trans" to the double variable "this.Opacity". decimal trans = trackBar1.Value / 5000; this.Opacity = trans; When I build the app it gives the following error: Cannot implicitly convert type decimal to double

A: Why are you dividing by 5000? Just set the TrackBar's Minimum and Maximum values between 0 and 100 and then divide the Value by 100.0 for the Opacity percentage. The minimum of 20 in the example below prevents the form from becoming completely invisible: private void Form1_Load(object sender, System.EventArgs e) { TrackBar1.Minimum = 20; TrackBar1.Maximum = 100; TrackBar1.LargeChange = 10; TrackBar1.SmallChange = 1; TrackBar1.TickFrequency = 5; } private void TrackBar1_Scroll(object sender, System.EventArgs e) { this.Opacity = TrackBar1.Value / 100.0; }

A: You have two problems. Firstly, Opacity requires a double, not a decimal value. The compiler is telling you that while there is a conversion between decimal and double, it is an explicit conversion that you need to specify in order for it to work. Secondly, TrackBar.Value is an integer value, and dividing an int by an int results in an int no matter what type of variable you assign it to. In this case there is an implicit cast from int to decimal or double, because there is no loss of precision in that cast, so the compiler doesn't complain. But the value you get is always 0, since trackBar.Value is always less than 5000. The solution is to change your code to use double (the native type for Opacity) and do floating-point arithmetic, either by explicitly making the constant a double (which will promote the arithmetic), or by casting trackBar.Value to double (which will do the same thing), or both. You don't need the intermediate variable unless it is used elsewhere; my guess is the compiler would optimize it away anyway. this.Opacity = (double)trackBar.Value / 5000.0;

A: In my opinion, it is desirable to be as explicit as possible. This adds clarity to the code and aids your fellow programmers who may eventually read it. In addition to (or instead of) appending a .0 to the number, you can use decimal.ToDouble(). Here are some examples: // Example 1 decimal transparency = trackBar1.Value / 5000m; this.Opacity = decimal.ToDouble(transparency); // Example 2 - with inline temp this.Opacity = decimal.ToDouble(trackBar1.Value / 5000m);

A: It sounds like this.Opacity is a double value, and the compiler doesn't like you trying to cram a decimal value into it.

A: You should use 5000.0 instead of 5000.

A: The Opacity property is of double type: double trans = trackBar1.Value / 5000.0; this.Opacity = trans; or simply: this.Opacity = trackBar1.Value / 5000.0; or: this.Opacity = trackBar1.Value / 5000d; Notice that I am using 5000.0 (or 5000d) to force a double division; trackBar1.Value is an integer, and otherwise it would perform an integer division and the result would be an integer.

A: Assuming you are using WinForms, Form.Opacity is of type double, so you should use: double trans = trackBar1.Value / 5000.0; this.Opacity = trans; Unless you need the value elsewhere, it's simpler to write: this.Opacity = trackBar1.Value / 5000.0; The reason the control didn't work when you changed your code to simply be a double was because you had: double trans = trackBar1.Value / 5000; which interpreted the 5000 as an integer and, because trackBar1.Value is also an integer, your trans value was always zero.
By explicitly making the numeric literal a floating-point value (adding the .0), the compiler can interpret it as a double and perform the proper calculation.

A: Since Opacity is a double value, I would just use a double from the outset and not cast at all, but be sure to use a double when dividing so you don't lose any precision: Opacity = trackBar1.Value / 5000.0;

A: An explicit cast to double like this isn't necessary: double trans = (double) trackBar1.Value / 5000.0; Identifying the constant as 5000.0 (or as 5000d) is sufficient: double trans = trackBar1.Value / 5000.0; double trans = trackBar1.Value / 5000d;

A: The best solution is: this.Opacity = decimal.ToDouble(trackBar1.Value / 5000m);

A: Fact: the Double type represents a wider range of possible values than Decimal. Casting as Double: decimal trans = trackBar1.Value / 5000m; this.Opacity = (double) trans; Type conversion: decimal trans = trackBar1.Value / 5000m; this.Opacity = decimal.ToDouble(trans); No explicit cast/conversion: in this case, adding 'd' at the end of the constant (5000d) or ".0" (5000.0) will identify the required type. When there is no constant in the operation, multiply the integer value by 1.0 (or 1d) to promote the arithmetic to double.

A: Try the following code: /* Decimal value */ decimal d1 = 3234.3434m; /* Convert to double */ double r1 = decimal.ToDouble(d1);

A: A more generic answer for the generic question "Decimal vs Double?": Decimal is for monetary calculations, to preserve precision. Double is for scientific calculations that are not affected by small differences. Since Double is a type that is native to the CPU (its internal representation is stored in base 2), calculations made with Double perform better than those made with Decimal (which is represented in base 10 internally).

A: Your code worked fine in VB.NET because it implicitly does any casts, while C# has both implicit and explicit ones. In C# the conversion from decimal to double is explicit because you lose accuracy. For instance, 1.1 can't be accurately expressed as a double, but can as a decimal (see "Floating point numbers - more inaccurate than you think" for the reason why). In VB the conversion was added for you by the compiler: decimal trans = trackBar1.Value / 5000m; this.Opacity = (double) trans; That (double) has to be explicitly stated in C#, but can be implied by VB's more 'forgiving' compiler.
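To see the integer-division pitfall described above in isolation, here is a minimal console sketch; trackBarValue is a hypothetical stand-in for trackBar1.Value:

using System;

class OpacityDivisionDemo
{
    static void Main()
    {
        int trackBarValue = 2500; // hypothetical stand-in for trackBar1.Value

        // int / int performs integer division: the fractional part is
        // discarded, so this prints 0 even though 2500 / 5000 is 0.5.
        Console.WriteLine(trackBarValue / 5000);

        // Making either operand a double (5000.0 or 5000d) forces
        // floating-point division, so this prints 0.5.
        Console.WriteLine(trackBarValue / 5000.0);
    }
}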
{ "language": "en", "url": "https://stackoverflow.com/questions/4", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "794" }
Q: Why did the width collapse in the percentage width child element in an absolutely positioned parent on Internet Explorer 7? I have an absolutely positioned div containing several children, one of which is a relatively positioned div. When I use a percentage-based width on the child div, it collapses to 0 width on IE7, but not on Firefox or Safari. If I use a pixel width, it works. If the parent is relatively positioned, the percentage width on the child works.
* Is there something I'm missing here?
* Is there an easy fix for this besides the pixel-based width on the child?
* Is there an area of the CSS specification that covers this?

A: The div needs to have a defined width: <div id="parent" style="width:230px;"> <div id="child1"></div> <div id="child2"></div> </div>

A: Here is some sample code. I think this is what you are looking for. The following code displays exactly the same in Firefox 3 (Mac) and IE7. #absdiv { position: absolute; left: 100px; top: 100px; width: 80%; height: 60%; background: #999; } #pctchild { width: 60%; height: 40%; background: #CCC; } #reldiv { position: relative; left: 20px; top: 20px; height: 25px; width: 40%; background: red; } <div id="absdiv"> <div id="reldiv"></div> <div id="pctchild"></div> </div>

A: The parent <div> tag doesn't have any width specified. Give it one; the child <div> widths can then be percentage or pixel values, but whatever they are, they resolve against that parent width: <div id="MainDiv" style="width:60%;"> <div id="Div1"> ... </div> <div id="Div2"> ... </div> ... </div>

A: IE prior to 8 has a temporal aspect to its box model that most notably creates a problem with percentage-based widths. In your case, an absolutely positioned div by default has no width. Its width will be worked out based on the pixel width of its content and will be calculated after the contents are rendered. So at the point IE encounters and renders your relatively positioned div, its parent has a width of 0, which is why the child itself collapses to 0. If you would like a more in-depth discussion of this along with lots of working examples, have a gander here.

A: Why doesn't the percentage width child in an absolutely positioned parent work in IE7? Because it's Internet Explorer. Is there something I'm missing here? Only the need to raise your co-workers' / clients' awareness that IE is not good. Is there an easy fix besides the pixel-based width on the child? Use em units, as they are more useful when creating liquid layouts: you can use them for padding and margins as well as font sizes, so your white space grows and shrinks proportionally to your text if it is resized (which is really what you need). I don't think percentages give finer control than ems; there's nothing to stop you specifying in hundredths of an em (0.01em), and the browser will interpret it as it sees fit. Is there an area of the CSS specification that covers this? None; as far as I remember, ems and percentages were intended for font sizes alone back in CSS 1.0.

A: I think this has something to do with the way the hasLayout property is implemented in the older browser. Have you tried your code in IE8 to see if it works there, too? IE8 has a debugger (F12) and can also run in IE7 mode.

A: The parent div needs to have a defined width, either in pixels or as a percentage; in Internet Explorer 7, child percentage widths only work correctly when the parent has one.
{ "language": "en", "url": "https://stackoverflow.com/questions/6", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "319" }
Q: How do I calculate someone's age based on a DateTime type birthday? Given a DateTime representing a person's birthday, how do I calculate their age in years?

A: My suggestion: int age = (int) ((DateTime.Now - bday).TotalDays/365.242199); That seems to have the year changing on the right date. (I spot tested up to age 107.)

A: Another function, not by me but found on the web and refined a bit: public static int GetAge(DateTime birthDate) { DateTime n = DateTime.Now; // To avoid a race condition around midnight int age = n.Year - birthDate.Year; if (n.Month < birthDate.Month || (n.Month == birthDate.Month && n.Day < birthDate.Day)) age--; return age; } Just two things that come to mind: What about people from countries that do not use the Gregorian calendar? DateTime.Now is in the server-specific culture, I think. I have absolutely zero knowledge about actually working with Asian calendars and I do not know if there is an easy way to convert dates between calendars, but just in case you're wondering about those Chinese guys from the year 4660 :-)

A: I used ScArcher2's solution for an accurate Year calculation of a person's age, but I needed to take it further and calculate their Months and Days along with the Years. public static Dictionary<string,int> CurrentAgeInYearsMonthsDays(DateTime? ndtBirthDate, DateTime? ndtReferralDate) { //---------------------------------------------------------------------- // Can't determine age if we don't have dates. //---------------------------------------------------------------------- if (ndtBirthDate == null) return null; if (ndtReferralDate == null) return null; DateTime dtBirthDate = Convert.ToDateTime(ndtBirthDate); DateTime dtReferralDate = Convert.ToDateTime(ndtReferralDate); //---------------------------------------------------------------------- // Create our Variables //---------------------------------------------------------------------- Dictionary<string, int> dYMD = new Dictionary<string,int>(); int iNowDate, iBirthDate, iYears, iMonths, iDays; string sDif = ""; //---------------------------------------------------------------------- // Store off current date/time and DOB into local variables //---------------------------------------------------------------------- iNowDate = int.Parse(dtReferralDate.ToString("yyyyMMdd")); iBirthDate = int.Parse(dtBirthDate.ToString("yyyyMMdd")); //---------------------------------------------------------------------- // Calculate Years //---------------------------------------------------------------------- sDif = (iNowDate - iBirthDate).ToString(); iYears = int.Parse(sDif.Substring(0, sDif.Length - 4)); //---------------------------------------------------------------------- // Store Years in Return Value //---------------------------------------------------------------------- dYMD.Add("Years", iYears); //---------------------------------------------------------------------- // Calculate Months //---------------------------------------------------------------------- if (dtBirthDate.Month > dtReferralDate.Month) iMonths = 12 - dtBirthDate.Month + dtReferralDate.Month - 1; else iMonths = dtReferralDate.Month - dtBirthDate.Month; //---------------------------------------------------------------------- // Store Months in Return Value //---------------------------------------------------------------------- dYMD.Add("Months", iMonths); //---------------------------------------------------------------------- // Calculate Remaining Days
//---------------------------------------------------------------------- if (dtBirthDate.Day > dtReferralDate.Day) //Logic: Figure out the days in the month previous to the current month, or the admitted month. // Subtract the birthday from the total days, which will give us how many days the person has lived since their birthdate day the previous month. // Then take the referral date and simply add the number of days the person has lived this month. //If the referral date is January, we need to go back to the previous year's December to get the days in that month. if (dtReferralDate.Month == 1) iDays = DateTime.DaysInMonth(dtReferralDate.Year - 1, 12) - dtBirthDate.Day + dtReferralDate.Day; else iDays = DateTime.DaysInMonth(dtReferralDate.Year, dtReferralDate.Month - 1) - dtBirthDate.Day + dtReferralDate.Day; else iDays = dtReferralDate.Day - dtBirthDate.Day; //---------------------------------------------------------------------- // Store Days in Return Value //---------------------------------------------------------------------- dYMD.Add("Days", iDays); return dYMD; }

A: This is simple and appears to be accurate for my needs. I am making an assumption, for the purpose of leap years, that regardless of when the person chooses to celebrate the birthday, they are not technically a year older until 365 days have passed since their last birthday (i.e., a 28th of February does not make them a year older). DateTime now = DateTime.Today; DateTime birthday = new DateTime(1991, 02, 03); //3rd Feb int age = now.Year - birthday.Year; if (now.Month < birthday.Month || (now.Month == birthday.Month && now.Day < birthday.Day)) //not had bday this year yet age--; return age;

A: I've made one small change to Mark Soen's answer: I've rewritten the third line so that the expression can be parsed a bit more easily. public int AgeInYears(DateTime bday) { DateTime now = DateTime.Today; int age = now.Year - bday.Year; if (bday.AddYears(age) > now) age--; return age; } I've also made it into a function for the sake of clarity.

A: === Common Saying (from months to years old) === If it is just for common use, here is the code for your information: DateTime today = DateTime.Today; DateTime bday = DateTime.Parse("2016-2-14"); int age = today.Year - bday.Year; var unit = ""; if (bday > today.AddYears(-age)) { age--; } if (age == 0) // Under one year old { age = today.Month - bday.Month; age = age <= 0 ? (12 + age) : age; // The next year before the birthday age = today.Day - bday.Day >= 0 ?
age : --age; /* before the birthday day */ unit = "month"; } else { unit = "year"; } if (age > 1) { unit = unit + "s"; } The test results are as below: The birthday: 2016-2-14 2016-2-15 => age=0, unit=month; 2016-5-13 => age=2, unit=months; 2016-5-14 => age=3, unit=months; 2016-6-13 => age=3, unit=months; 2016-6-15 => age=4, unit=months; 2017-1-13 => age=10, unit=months; 2017-1-14 => age=11, unit=months; 2017-2-13 => age=11, unit=months; 2017-2-14 => age=1, unit=year; 2017-2-15 => age=1, unit=year; 2017-3-13 => age=1, unit=year; 2018-1-13 => age=1, unit=year; 2018-1-14 => age=1, unit=year; 2018-2-13 => age=1, unit=year; 2018-2-14 => age=2, unit=years;

A: private int GetYearDiff(DateTime start, DateTime end) { int diff = end.Year - start.Year; if (end.DayOfYear < start.DayOfYear) { diff -= 1; } return diff; } [Fact] public void GetYearDiff_WhenCalls_ShouldReturnCorrectYearDiff() { //arrange var now = DateTime.Now; //act //assert Assert.Equal(24, GetYearDiff(new DateTime(1992, 7, 9), now)); // passed Assert.Equal(24, GetYearDiff(new DateTime(1992, now.Month, now.Day), now)); // passed Assert.Equal(23, GetYearDiff(new DateTime(1992, 12, 9), now)); // passed }

A: There are 2 main problems to solve: 1. Calculate the exact age - in years, months, days, etc. 2. Calculate the generally perceived age - people usually do not care how old they exactly are, they just care when their birthday in the current year is. The solution for 1 is obvious: DateTime birth = DateTime.Parse("1.1.2000"); DateTime today = DateTime.Today; //we usually don't care about birth time TimeSpan age = today - birth; //.NET FCL should guarantee this as precise double ageInDays = age.TotalDays; //total number of days ... also precise double daysInYear = 365.2425; //statistical value for 400 years double ageInYears = ageInDays / daysInYear; //can be shifted ... not so precise The solution for 2 is the one which is not so precise in determining total age, but is perceived as precise by people. People also usually use it when they calculate their age "manually": DateTime birth = DateTime.Parse("1.1.2000"); DateTime today = DateTime.Today; int age = today.Year - birth.Year; //people perceive their age in years if (today.Month < birth.Month || ((today.Month == birth.Month) && (today.Day < birth.Day))) { age--; //birthday in current year not yet reached, we are 1 year younger ;) //+ no birthday for 29.2. guys ... sorry, just wrong date for birth }
Notes to 2:
* This is my preferred solution
* We cannot use DateTime.DayOfYear or TimeSpans, as they shift the number of days in leap years
* I have put in a few more lines for readability
Just one more note ... I would create 2 static overloaded methods for it, one for universal usage, the second for ease of use: public static int GetAge(DateTime birthDay, DateTime today) { //chosen solution method body } public static int GetAge(DateTime birthDay) { return GetAge(birthDay, DateTime.Now); }

A: Wow, I had to give my answer here... There are so many answers for such a simple question.
private int CalcularIdade(DateTime dtNascimento) { var nHoje = Convert.ToInt32(DateTime.Today.ToString("yyyyMMdd")); var nAniversario = Convert.ToInt32(dtNascimento.ToString("yyyyMMdd")); double diff = (nHoje - nAniversario) / 10000; var ret = Convert.ToInt32(Math.Truncate(diff)); return ret; } A: The best way that I know of because of leap years and everything is: DateTime birthDate = new DateTime(2000,3,1); int age = (int)Math.Floor((DateTime.Now - birthDate).TotalDays / 365.25D); A: Here's a one-liner: int age = new DateTime(DateTime.Now.Subtract(birthday).Ticks).Year-1; A: It can be this simple: int age = DateTime.Now.AddTicks(0 - dob.Ticks).Year - 1; A: This is the easiest way to answer this in a single line. DateTime Dob = DateTime.Parse("1985-04-24"); int Age = DateTime.MinValue.AddDays(DateTime.Now.Subtract(Dob).TotalHours/24 - 1).Year - 1; This also works for leap years. A: This is the version we use here. It works, and it's fairly simple. It's the same idea as Jeff's but I think it's a little clearer because it separates out the logic for subtracting one, so it's a little easier to understand. public static int GetAge(this DateTime dateOfBirth, DateTime dateAsAt) { return dateAsAt.Year - dateOfBirth.Year - (dateOfBirth.DayOfYear < dateAsAt.DayOfYear ? 0 : 1); } You could expand the ternary operator to make it even clearer, if you think that sort of thing is unclear. Obviously this is done as an extension method on DateTime, but clearly you can grab that one line of code that does the work and put it anywhere. Here we have another overload of the Extension method that passes in DateTime.Now, just for completeness. A: Here is a test snippet: DateTime bDay = new DateTime(2000, 2, 29); DateTime now = new DateTime(2009, 2, 28); MessageBox.Show(string.Format("Test {0} {1} {2}", CalculateAgeWrong1(bDay, now), // outputs 9 CalculateAgeWrong2(bDay, now), // outputs 9 CalculateAgeCorrect(bDay, now), // outputs 8 CalculateAgeCorrect2(bDay, now))); // outputs 8 Here you have the methods: public int CalculateAgeWrong1(DateTime birthDate, DateTime now) { return new DateTime(now.Subtract(birthDate).Ticks).Year - 1; } public int CalculateAgeWrong2(DateTime birthDate, DateTime now) { int age = now.Year - birthDate.Year; if (now < birthDate.AddYears(age)) age--; return age; } public int CalculateAgeCorrect(DateTime birthDate, DateTime now) { int age = now.Year - birthDate.Year; if (now.Month < birthDate.Month || (now.Month == birthDate.Month && now.Day < birthDate.Day)) age--; return age; } public int CalculateAgeCorrect2(DateTime birthDate, DateTime now) { int age = now.Year - birthDate.Year; // For leap years we need this if (birthDate > now.AddYears(-age)) age--; // Don't use: // if (birthDate.AddYears(age) > now) // age--; return age; } A: This may work: public override bool IsValid(DateTime value) { _dateOfBirth = value; var yearsOld = (double) (DateTime.Now.Subtract(_dateOfBirth).TotalDays/365); if (yearsOld > 18) return true; return false; } A: Here's a little code sample for C# I knocked up, be careful around the edge cases specifically leap years, not all the above solutions take them into account. Pushing the answer out as a DateTime can cause problems as you could end up trying to put too many days into a specific month e.g. 30 days in Feb. 
public string LoopAge(DateTime myDOB, DateTime FutureDate) { int years = 0; int months = 0; int days = 0; DateTime tmpMyDOB = new DateTime(myDOB.Year, myDOB.Month, 1); DateTime tmpFutureDate = new DateTime(FutureDate.Year, FutureDate.Month, 1); while (tmpMyDOB.AddYears(years).AddMonths(months) < tmpFutureDate) { months++; if (months > 12) { years++; months = months - 12; } } if (FutureDate.Day >= myDOB.Day) { days = days + FutureDate.Day - myDOB.Day; } else { months--; if (months < 0) { years--; months = months + 12; } days = days + (DateTime.DaysInMonth(FutureDate.AddMonths(-1).Year, FutureDate.AddMonths(-1).Month) + FutureDate.Day) - myDOB.Day; } //add an extra day if the dob is a leap day if (DateTime.IsLeapYear(myDOB.Year) && myDOB.Month == 2 && myDOB.Day == 29) { //but only if the future date is less than 1st March if(FutureDate >= new DateTime(FutureDate.Year, 3,1)) days++; } return "Years: " + years + " Months: " + months + " Days: " + days; } A: Here's a DateTime extender that adds the age calculation to the DateTime object. public static class AgeExtender { public static int GetAge(this DateTime dt) { int d = int.Parse(dt.ToString("yyyyMMdd")); int t = int.Parse(DateTime.Today.ToString("yyyyMMdd")); return (t-d)/10000; } } A: public string GetAge(this DateTime birthdate, string ageStrinFormat = null) { var date = DateTime.Now.AddMonths(-birthdate.Month).AddDays(-birthdate.Day); return string.Format(ageStrinFormat ?? "{0}/{1}/{2}", (date.Year - birthdate.Year), date.Month, date.Day); } A: This gives "more detail" to this question. Maybe this is what you're looking for DateTime birth = new DateTime(1974, 8, 29); DateTime today = DateTime.Now; TimeSpan span = today - birth; DateTime age = DateTime.MinValue + span; // Make adjustment due to MinValue equalling 1/1/1 int years = age.Year - 1; int months = age.Month - 1; int days = age.Day - 1; // Print out not only how many years old they are but give months and days as well Console.Write("{0} years, {1} months, {2} days", years, months, days); A: I use this: public static class DateTimeExtensions { public static int Age(this DateTime birthDate) { return Age(birthDate, DateTime.Now); } public static int Age(this DateTime birthDate, DateTime offsetDate) { int result=0; result = offsetDate.Year - birthDate.Year; if (offsetDate.DayOfYear < birthDate.DayOfYear) { result--; } return result; } } A: Here's yet another answer: public static int AgeInYears(DateTime birthday, DateTime today) { return ((today.Year - birthday.Year) * 372 + (today.Month - birthday.Month) * 31 + (today.Day - birthday.Day)) / 372; } This has been extensively unit-tested. It does look a bit "magic". The number 372 is the number of days there would be in a year if every month had 31 days. The explanation of why it works (lifted from here) is: Let's set Yn = DateTime.Now.Year, Yb = birthday.Year, Mn = DateTime.Now.Month, Mb = birthday.Month, Dn = DateTime.Now.Day, Db = birthday.Day age = Yn - Yb + (31*(Mn - Mb) + (Dn - Db)) / 372 We know that what we need is either Yn-Yb if the date has already been reached, Yn-Yb-1 if it has not. 
a) If Mn<Mb, we have -341 <= 31*(Mn-Mb) <= -31 and -30 <= Dn-Db <= 30, so -371 <= 31*(Mn - Mb) + (Dn - Db) <= -1. With integer division, (31*(Mn - Mb) + (Dn - Db)) / 372 = -1.
b) If Mn=Mb and Dn<Db, we have 31*(Mn - Mb) = 0 and -30 <= Dn-Db <= -1. With integer division, again (31*(Mn - Mb) + (Dn - Db)) / 372 = -1.
c) If Mn>Mb, we have 31 <= 31*(Mn-Mb) <= 341 and -30 <= Dn-Db <= 30, so 1 <= 31*(Mn - Mb) + (Dn - Db) <= 371. With integer division, (31*(Mn - Mb) + (Dn - Db)) / 372 = 0.
d) If Mn=Mb and Dn>Db, we have 31*(Mn - Mb) = 0 and 1 <= Dn-Db <= 30. With integer division, again (31*(Mn - Mb) + (Dn - Db)) / 372 = 0.
e) If Mn=Mb and Dn=Db, we have 31*(Mn - Mb) + Dn-Db = 0, and therefore (31*(Mn - Mb) + (Dn - Db)) / 372 = 0.

A: I have created a SQL Server User Defined Function to calculate someone's age, given their birthdate. This is useful when you need it as part of a query: using System; using System.Data; using System.Data.Sql; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; public partial class UserDefinedFunctions { [SqlFunction(DataAccess = DataAccessKind.Read)] public static SqlInt32 CalculateAge(string strBirthDate) { DateTime dtBirthDate = Convert.ToDateTime(strBirthDate); DateTime dtToday = DateTime.Now; // get the difference in years int years = dtToday.Year - dtBirthDate.Year; // subtract another year if we're before the // birth day in the current year if (dtToday.Month < dtBirthDate.Month || (dtToday.Month == dtBirthDate.Month && dtToday.Day < dtBirthDate.Day)) years = years - 1; int intCustomerAge = years; return intCustomerAge; } };

A: I think the TimeSpan has most of what we need in it, without having to resort to 365.25 (or any other approximation). Expanding on Aug's example: DateTime myBD = new DateTime(1980, 10, 10); TimeSpan difference = DateTime.Now.Subtract(myBD); DateTime age = DateTime.MinValue + difference; /* TimeSpan itself has no Years or Months members, so map the span onto a DateTime (MinValue is 1/1/0001, hence the -1 adjustments) */ textBox1.Text = (age.Year - 1) + " years " + (age.Month - 1) + " months " + (age.Day - 1) + " days";

A: I want to add Hebrew calendar calculations (any other System.Globalization calendar can be used in the same way), using rewritten functions from this thread: Public Shared Function CalculateAge(BirthDate As DateTime) As Integer Dim HebCal As New System.Globalization.HebrewCalendar() Dim now = DateTime.Now() Dim iAge = HebCal.GetYear(now) - HebCal.GetYear(BirthDate) Dim iNowMonth = HebCal.GetMonth(now), iBirthMonth = HebCal.GetMonth(BirthDate) If iNowMonth < iBirthMonth Or (iNowMonth = iBirthMonth AndAlso HebCal.GetDayOfMonth(now) < HebCal.GetDayOfMonth(BirthDate)) Then iAge -= 1 Return iAge End Function

A: Try this solution, it's working: int age = (Int32.Parse(DateTime.Today.ToString("yyyyMMdd")) - Int32.Parse(birthday.ToString("yyyyMMdd"))) / 10000;

A: Here is a function that is serving me well. No calculations, very simple. public static string ToAge(this DateTime dob, DateTime?
toDate = null) { if (!toDate.HasValue) toDate = DateTime.Now; var now = toDate.Value; if (now.CompareTo(dob) < 0) return "Future date"; int years = now.Year - dob.Year; int months = now.Month - dob.Month; int days = now.Day - dob.Day; if (days < 0) { months--; days = DateTime.DaysInMonth(dob.Year, dob.Month) - dob.Day + now.Day; } if (months < 0) { years--; months = 12 + months; } return string.Format("{0} year(s), {1} month(s), {2} days(s)", years, months, days); } And here is a unit test: [Test] public void ToAgeTests() { var date = new DateTime(2000, 1, 1); Assert.AreEqual("0 year(s), 0 month(s), 1 days(s)", new DateTime(1999, 12, 31).ToAge(date)); Assert.AreEqual("0 year(s), 0 month(s), 0 days(s)", new DateTime(2000, 1, 1).ToAge(date)); Assert.AreEqual("1 year(s), 0 month(s), 0 days(s)", new DateTime(1999, 1, 1).ToAge(date)); Assert.AreEqual("0 year(s), 11 month(s), 0 days(s)", new DateTime(1999, 2, 1).ToAge(date)); Assert.AreEqual("0 year(s), 10 month(s), 25 days(s)", new DateTime(1999, 2, 4).ToAge(date)); Assert.AreEqual("0 year(s), 10 month(s), 1 days(s)", new DateTime(1999, 2, 28).ToAge(date)); date = new DateTime(2000, 2, 15); Assert.AreEqual("0 year(s), 0 month(s), 28 days(s)", new DateTime(2000, 1, 18).ToAge(date)); } A: I have used the following for this issue. I know it's not very elegant, but it's working. DateTime zeroTime = new DateTime(1, 1, 1); var date1 = new DateTime(1983, 03, 04); var date2 = DateTime.Now; var dif = date2 - date1; int years = (zeroTime + dif).Year - 1; Log.DebugFormat("Years -->{0}", years); A: I often count on my fingers. I need to look at a calendar to work out when things change. So that's what I'd do in my code: int AgeNow(DateTime birthday) { return AgeAt(DateTime.Now, birthday); } int AgeAt(DateTime now, DateTime birthday) { return AgeAt(now, birthday, CultureInfo.CurrentCulture.Calendar); } int AgeAt(DateTime now, DateTime birthday, Calendar calendar) { // My age has increased on the morning of my // birthday even though I was born in the evening. now = now.Date; birthday = birthday.Date; var age = 0; if (now <= birthday) return age; // I am zero now if I am to be born tomorrow. while (calendar.AddYears(birthday, age + 1) <= now) { age++; } return age; } Running this through in LINQPad gives this: PASSED: someone born on 28 February 1964 is age 4 on 28 February 1968 PASSED: someone born on 29 February 1964 is age 3 on 28 February 1968 PASSED: someone born on 31 December 2016 is age 0 on 01 January 2017 Code in LINQPad is here. A: Just use: (DateTime.Now - myDate).TotalHours / 8766.0 The current date - myDate = TimeSpan, get total hours and divide in the total hours per year and get exactly the age/months/days... A: I would strongly recommend using a NuGet package called AgeCalculator since there are many things to consider when calculating age (leap years, time component etc) and only two lines of code does not cut it. The library gives you more than just a year. It even takes into consideration the time component at the calculation so you get an accurate age with years, months, days and time components. It is more advanced giving an option to consider Feb 29 in a leap year as Feb 28 in a non-leap year. A: I've spent some time working on this and came up with this to calculate someone's age in years, months and days. 
I've tested against the Feb 29th problem and leap years and it seems to work; I'd appreciate any feedback: public void LoopAge(DateTime myDOB, DateTime FutureDate) { int years = 0; int months = 0; int days = 0; DateTime tmpMyDOB = new DateTime(myDOB.Year, myDOB.Month, 1); DateTime tmpFutureDate = new DateTime(FutureDate.Year, FutureDate.Month, 1); while (tmpMyDOB.AddYears(years).AddMonths(months) < tmpFutureDate) { months++; if (months > 12) { years++; months = months - 12; } } if (FutureDate.Day >= myDOB.Day) { days = days + FutureDate.Day - myDOB.Day; } else { months--; if (months < 0) { years--; months = months + 12; } days += DateTime.DaysInMonth( FutureDate.AddMonths(-1).Year, FutureDate.AddMonths(-1).Month ) + FutureDate.Day - myDOB.Day; } //add an extra day if the dob is a leap day if (DateTime.IsLeapYear(myDOB.Year) && myDOB.Month == 2 && myDOB.Day == 29) { //but only if the future date is less than 1st March if (FutureDate >= new DateTime(FutureDate.Year, 3, 1)) days++; } }

A: An easy-to-understand and simple solution. // Save today's date. var today = DateTime.Today; // Calculate the age. var age = today.Year - birthdate.Year; // Go back to the year in which the person was born in case of a leap year if (birthdate.Date > today.AddYears(-age)) age--; However, this assumes you are looking for the western idea of the age and not using East Asian reckoning.

A: Keeping it simple (and possibly stupid:)). DateTime birth = new DateTime(1975, 09, 27, 01, 00, 00, 00); TimeSpan ts = DateTime.Now - birth; Console.WriteLine("You are approximately " + ts.TotalSeconds.ToString() + " seconds old.");

A: The simplest way I've ever found is this. It works correctly for the US and Western European locales. Can't speak to other locales, especially places like China. 4 extra compares, at most, following the initial computation of age. public int AgeInYears(DateTime birthDate, DateTime referenceDate) { Debug.Assert(referenceDate >= birthDate, "birth date must be on or prior to the reference date"); DateTime birth = birthDate.Date; DateTime reference = referenceDate.Date; int years = (reference.Year - birth.Year); // // an offset of -1 is applied if the birth date has // not yet occurred in the current year. // if (reference.Month > birth.Month) { /* birthday month already passed */ } else if (reference.Month < birth.Month) --years; else // in birth month { if (reference.Day < birth.Day) --years; } return years; } I was looking over the answers to this and noticed that nobody has made reference to the regulatory/legal implications of leap day births. For instance, per Wikipedia, if you were born on February 29th, your non-leap-year birthday varies by jurisdiction:
* In the United Kingdom and Hong Kong: it's the ordinal day of the year, so the next day, March 1st, is your birthday.
* In New Zealand: it's the previous day, February 28th, for the purposes of driver licencing, and March 1st for other purposes.
* Taiwan: it's February 28th.
And as near as I can tell, in the US, the statutes are silent on the matter, leaving it up to the common law and to how various regulatory bodies define things in their regulations.
To that end, an improvement: public enum LeapDayRule { OrdinalDay = 1, LastDayOfMonth = 2, } static int ComputeAgeInYears(DateTime birth, DateTime reference, LeapDayRule ruleInEffect) { bool isLeapYearBirthday = CultureInfo.CurrentCulture.Calendar.IsLeapDay(birth.Year, birth.Month, birth.Day); DateTime cutoff; if (isLeapYearBirthday && !DateTime.IsLeapYear(reference.Year)) { switch (ruleInEffect) { case LeapDayRule.OrdinalDay: cutoff = new DateTime(reference.Year, 1, 1) .AddDays(birth.DayOfYear - 1); break; case LeapDayRule.LastDayOfMonth: cutoff = new DateTime(reference.Year, birth.Month, 1) .AddMonths(1) .AddDays(-1); break; default: throw new InvalidOperationException(); } } else { cutoff = new DateTime(reference.Year, birth.Month, birth.Day); } int age = (reference.Year - birth.Year) + (reference >= cutoff ? 0 : -1); return age < 0 ? 0 : age; } It should be noted that this code assumes:
* A western (European) reckoning of age, and
* A calendar, like the Gregorian calendar, that inserts a single leap day at the end of a month.

A: Do we need to consider people who are younger than 1 year? In Chinese culture we describe small babies' ages as 2 months or 4 weeks. Below is my implementation; it is not as simple as I imagined, especially for dealing with dates like 2/28. public static string HowOld(DateTime birthday, DateTime now) { if (now < birthday) throw new ArgumentOutOfRangeException("birthday must be less than now."); TimeSpan diff = now - birthday; int diffDays = (int)diff.TotalDays; if (diffDays > 7)//year, month and week { int age = now.Year - birthday.Year; if (birthday > now.AddYears(-age)) age--; if (age > 0) { return age + (age > 1 ? " years" : " year"); } else {// month and week DateTime d = birthday; int diffMonth = 1; while (d.AddMonths(diffMonth) <= now) { diffMonth++; } age = diffMonth - 1; if (age == 1 && d.Day > now.Day) age--; if (age > 0) { return age + (age > 1 ? " months" : " month"); } else { age = diffDays / 7; return age + (age > 1 ? " weeks" : " week"); } } } else if (diffDays > 0) { int age = diffDays; return age + (age > 1 ? " days" : " day"); } else { return "just born"; } } This implementation has passed the test cases below. [TestMethod] public void TestAge() { string age = HowOld(new DateTime(2011, 1, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("1 year", age); age = HowOld(new DateTime(2011, 11, 30), new DateTime(2012, 11, 30)); Assert.AreEqual("1 year", age); age = HowOld(new DateTime(2001, 1, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("11 years", age); age = HowOld(new DateTime(2012, 1, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("10 months", age); age = HowOld(new DateTime(2011, 12, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("11 months", age); age = HowOld(new DateTime(2012, 10, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("1 month", age); age = HowOld(new DateTime(2008, 2, 28), new DateTime(2009, 2, 28)); Assert.AreEqual("1 year", age); age = HowOld(new DateTime(2008, 3, 28), new DateTime(2009, 2, 28)); Assert.AreEqual("11 months", age); age = HowOld(new DateTime(2008, 3, 28), new DateTime(2009, 3, 28)); Assert.AreEqual("1 year", age); age = HowOld(new DateTime(2009, 1, 28), new DateTime(2009, 2, 28)); Assert.AreEqual("1 month", age); age = HowOld(new DateTime(2009, 2, 1), new DateTime(2009, 3, 1)); Assert.AreEqual("1 month", age); // NOTE.
// new DateTime(2008, 1, 31).AddMonths(1) == new DateTime(2009, 2, 28); // new DateTime(2008, 1, 28).AddMonths(1) == new DateTime(2009, 2, 28); age = HowOld(new DateTime(2009, 1, 31), new DateTime(2009, 2, 28)); Assert.AreEqual("4 weeks", age); age = HowOld(new DateTime(2009, 2, 1), new DateTime(2009, 2, 28)); Assert.AreEqual("3 weeks", age); age = HowOld(new DateTime(2009, 2, 1), new DateTime(2009, 3, 1)); Assert.AreEqual("1 month", age); age = HowOld(new DateTime(2012, 11, 5), new DateTime(2012, 11, 30)); Assert.AreEqual("3 weeks", age); age = HowOld(new DateTime(2012, 11, 1), new DateTime(2012, 11, 30)); Assert.AreEqual("4 weeks", age); age = HowOld(new DateTime(2012, 11, 20), new DateTime(2012, 11, 30)); Assert.AreEqual("1 week", age); age = HowOld(new DateTime(2012, 11, 25), new DateTime(2012, 11, 30)); Assert.AreEqual("5 days", age); age = HowOld(new DateTime(2012, 11, 29), new DateTime(2012, 11, 30)); Assert.AreEqual("1 day", age); age = HowOld(new DateTime(2012, 11, 30), new DateTime(2012, 11, 30)); Assert.AreEqual("just born", age); age = HowOld(new DateTime(2000, 2, 29), new DateTime(2009, 2, 28)); Assert.AreEqual("8 years", age); age = HowOld(new DateTime(2000, 2, 29), new DateTime(2009, 3, 1)); Assert.AreEqual("9 years", age); Exception e = null; try { age = HowOld(new DateTime(2012, 12, 1), new DateTime(2012, 11, 30)); } catch (ArgumentOutOfRangeException ex) { e = ex; } Assert.IsTrue(e != null); } Hope it's helpful. A: This is not a direct answer, but more of a philosophical reasoning about the problem at hand from a quasi-scientific point of view. I would argue that the question does not specify the unit nor culture in which to measure age, most answers seem to assume an integer annual representation. The SI-unit for time is second, ergo the correct generic answer should be (of course assuming normalized DateTime and taking no regard whatsoever to relativistic effects): var lifeInSeconds = (DateTime.Now.Ticks - then.Ticks)/TickFactor; In the Christian way of calculating age in years: var then = ... // Then, in this case the birthday var now = DateTime.UtcNow; int age = now.Year - then.Year; if (now.AddYears(-age) < then) age--; In finance there is a similar problem when calculating something often referred to as the Day Count Fraction, which roughly is a number of years for a given period. And the age issue is really a time measuring issue. Example for the actual/actual (counting all days "correctly") convention: DateTime start, end = .... // Whatever, assume start is before end double startYearContribution = 1 - (double) start.DayOfYear / (double) (DateTime.IsLeapYear(start.Year) ? 366 : 365); double endYearContribution = (double)end.DayOfYear / (double)(DateTime.IsLeapYear(end.Year) ? 366 : 365); double middleContribution = (double) (end.Year - start.Year - 1); double DCF = startYearContribution + endYearContribution + middleContribution; Another quite common way to measure time generally is by "serializing" (the dude who named this date convention must seriously have been trippin'): DateTime start, end = .... 
// Whatever, assume start is before end int days = (end - start).Days; I wonder how long we have to go before a relativistic age in seconds becomes more useful than the rough approximation of earth-around-sun cycles during one's lifetime so far :) Or in other words, when a period must be given a location or a function representing motion for itself to be valid :)

A: I've created an Age struct, which looks like this: public struct Age : IEquatable<Age>, IComparable<Age> { private readonly int _years; private readonly int _months; private readonly int _days; public int Years { get { return _years; } } public int Months { get { return _months; } } public int Days { get { return _days; } } public Age( int years, int months, int days ) : this() { _years = years; _months = months; _days = days; } public static Age CalculateAge( DateTime dateOfBirth, DateTime date ) { // Here is some logic that resembles Mike's solution, although it // also takes into account months & days. // Omitted for brevity. return new Age(years, months, days); } // Omitted Equality, Comparable, GetHashCode functionality for brevity. }

A: Here is a very simple and easy-to-follow example. private int CalculateAge() { //get birthdate DateTime dtBirth = Convert.ToDateTime(BirthDatePicker.Value); int byear = dtBirth.Year; int bmonth = dtBirth.Month; int bday = dtBirth.Day; DateTime dtToday = DateTime.Now; int tYear = dtToday.Year; int tmonth = dtToday.Month; int tday = dtToday.Day; int age = tYear - byear; if (bmonth > tmonth) age--; else if (bmonth == tmonth && bday > tday) { age--; } return age; }

A: With fewer conversions and UtcNow, this code can take care of someone born on Feb 29 in a leap year: public int GetAge(DateTime DateOfBirth) { var Now = DateTime.UtcNow; return Now.Year - DateOfBirth.Year - ( ( Now.Month > DateOfBirth.Month || (Now.Month == DateOfBirth.Month && Now.Day >= DateOfBirth.Day) ) ? 0 : 1 ); }

A: Just because I don't think the top answer is that clear: public static int GetAgeByLoop(DateTime birthday) { var age = -1; for (var date = birthday; date < DateTime.Today; date = date.AddYears(1)) { age++; } return age; }

A: I would simply do this: DateTime birthDay = new DateTime(1990, 05, 23); TimeSpan age = DateTime.Now - birthDay; This way you can calculate the exact age of a person, down to the millisecond if you want.

A: Simple code: var birthYear = 1993; var age = DateTime.Now.AddYears(-birthYear).Year;

A: Here is the simplest way to calculate someone's age. In order for the code to work, you need a DateTime object called BirthDate containing the birthday. C# // get the difference in years int years = DateTime.Now.Year - BirthDate.Year; // subtract another year if we're before the // birth day in the current year if (DateTime.Now.Month < BirthDate.Month || (DateTime.Now.Month == BirthDate.Month && DateTime.Now.Day < BirthDate.Day)) years--; VB.NET ' get the difference in years Dim years As Integer = DateTime.Now.Year - BirthDate.Year ' subtract another year if we're before the ' birth day in the current year If DateTime.Now.Month < BirthDate.Month Or (DateTime.Now.Month = BirthDate.Month And DateTime.Now.Day < BirthDate.Day) Then years = years - 1 End If

A: var birthDate = ... // DOB var resultDate = DateTime.Now - birthDate; Using resultDate you can apply whatever TimeSpan properties you want to display it.
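As a brief illustration of that last answer, and of why so many answers in this thread fall back to calendar arithmetic, here is a sketch with a hypothetical date of birth:

using System;

class ResultDateDemo
{
    static void Main()
    {
        var birthDate = new DateTime(1990, 5, 23); // hypothetical DOB
        var resultDate = DateTime.Now - birthDate; // a TimeSpan

        // TimeSpan exposes Days, Hours, TotalDays, and so on...
        Console.WriteLine(resultDate.TotalDays);

        // ...but it has no Years or Months members, so whole years can only
        // be approximated from days, with the leap-year caveats noted above.
        Console.WriteLine((int)(resultDate.TotalDays / 365.2425));
    }
}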
A: TimeSpan diff = DateTime.Now - birthdayDateTime; string age = String.Format("{0:%y} years, {0:%M} months, {0:%d} days old", diff); I'm not sure how exactly you'd like it returned to you, so I just made a readable string.

A: Here is a solution. DateTime dateOfBirth = new DateTime(2000, 4, 18); DateTime currentDate = DateTime.Now; int ageInYears = 0; int ageInMonths = 0; int ageInDays = 0; ageInDays = currentDate.Day - dateOfBirth.Day; ageInMonths = currentDate.Month - dateOfBirth.Month; ageInYears = currentDate.Year - dateOfBirth.Year; if (ageInDays < 0) { ageInDays += DateTime.DaysInMonth(currentDate.Year, currentDate.Month); ageInMonths--; if (ageInMonths < 0) { ageInMonths += 12; ageInYears--; } } if (ageInMonths < 0) { ageInMonths += 12; ageInYears--; } Console.WriteLine("{0}, {1}, {2}", ageInYears, ageInMonths, ageInDays);

A: This is one of the most accurate answers, able to resolve a birthday on the 29th of Feb against the 28th of Feb of any non-leap year. public int GetAge(DateTime birthDate) { int age = DateTime.Now.Year - birthDate.Year; if (birthDate.DayOfYear > DateTime.Now.DayOfYear) age--; return age; }

A: I have a customized method to calculate age, plus a bonus validation message just in case it helps: public void GetAge(DateTime dob, DateTime now, out int years, out int months, out int days) { years = 0; months = 0; days = 0; DateTime tmpdob = new DateTime(dob.Year, dob.Month, 1); DateTime tmpnow = new DateTime(now.Year, now.Month, 1); while (tmpdob.AddYears(years).AddMonths(months) < tmpnow) { months++; if (months > 12) { years++; months = months - 12; } } if (now.Day >= dob.Day) days = days + now.Day - dob.Day; else { months--; if (months < 0) { years--; months = months + 12; } days += DateTime.DaysInMonth(now.AddMonths(-1).Year, now.AddMonths(-1).Month) + now.Day - dob.Day; } if (DateTime.IsLeapYear(dob.Year) && dob.Month == 2 && dob.Day == 29 && now >= new DateTime(now.Year, 3, 1)) days++; } private string ValidateDate(DateTime dob) //This method will validate the date { int Years = 0; int Months = 0; int Days = 0; GetAge(dob, DateTime.Now, out Years, out Months, out Days); if (Years < 18) return Years + " is too young. Please try again on your 18th birthday."; else if (Years >= 65) return Years + " is too old. Date of Birth must not be 65 or older."; else return null; //Denotes validation passed } Call the method and pass in the date value (MM/dd/yyyy if the server is set to a USA locale). Replace the label with a message box or any container to display: DateTime dob = DateTime.Parse("03/10/1982"); string message = ValidateDate(dob); lbldatemessage.Visible = !string.IsNullOrWhiteSpace(message); lbldatemessage.Text = message ?? ""; //If message is null, default to an empty string Remember you can format the message any way you like.

A: How about this solution?
static string CalcAge(DateTime birthDay) { DateTime currentDate = DateTime.Now; int approximateAge = currentDate.Year - birthDay.Year; int daysToNextBirthDay = (birthDay.Month * 30 + birthDay.Day) - (currentDate.Month * 30 + currentDate.Day); if (approximateAge == 0 || approximateAge == 1) { int month = Math.Abs(daysToNextBirthDay / 30); int days = Math.Abs(daysToNextBirthDay % 30); if (month == 0) return "Your age is: " + daysToNextBirthDay + " days"; return "Your age is: " + month + " months and " + days + " days"; } if (daysToNextBirthDay > 0) return "Your age is: " + --approximateAge + " Years"; return "Your age is: " + approximateAge + " Years"; }

A: private int GetAge(int _year, int _month, int _day) { DateTime yourBirthDate = new DateTime(_year, _month, _day); DateTime todaysDateTime = DateTime.Today; int noOfYears = todaysDateTime.Year - yourBirthDate.Year; if (DateTime.Now.Month < yourBirthDate.Month || (DateTime.Now.Month == yourBirthDate.Month && DateTime.Now.Day < yourBirthDate.Day)) { noOfYears--; } return noOfYears; }

A: This classic question is deserving of a Noda Time solution. static int GetAge(LocalDate dateOfBirth) { Instant now = SystemClock.Instance.Now; // The target time zone is important. // It should align with the *current physical location* of the person // you are talking about. When the whereabouts of that person are unknown, // then you use the time zone of the person who is *asking* for the age. // The time zone of birth is irrelevant! DateTimeZone zone = DateTimeZoneProviders.Tzdb["America/New_York"]; LocalDate today = now.InZone(zone).Date; Period period = Period.Between(dateOfBirth, today, PeriodUnits.Years); return (int) period.Years; } Usage: LocalDate dateOfBirth = new LocalDate(1976, 8, 27); int age = GetAge(dateOfBirth); You might also be interested in the following improvements:
* Passing in the clock as an IClock, instead of using SystemClock.Instance, would improve testability.
* The target time zone will likely change, so you'd want a DateTimeZone parameter as well.
See also my blog post on this subject: Handling Birthdays, and Other Anniversaries

A: The simple answer to this is to apply AddYears as shown below, because this is the only native method that adds years to the 29th of Feb. of leap years and obtains the correct result of the 28th of Feb. for common years. Some feel that the 1st of Mar. is the birthday of leaplings, but neither .Net nor any official rule supports this, nor does common logic explain why some born in February should have 75% of their birthdays in another month. Further, an Age method lends itself to being added as an extension to DateTime. By this you can obtain the age in the simplest possible way: int age = birthDate.Age(); public static class DateTimeExtensions { /// <summary> /// Calculates the age in years of the current System.DateTime object today. /// </summary> /// <param name="birthDate">The date of birth</param> /// <returns>Age in years today. 0 is returned for a future date of birth.</returns> public static int Age(this DateTime birthDate) { return Age(birthDate, DateTime.Today); } /// <summary> /// Calculates the age in years of the current System.DateTime object on a later date. /// </summary> /// <param name="birthDate">The date of birth</param> /// <param name="laterDate">The date on which to calculate the age.</param> /// <returns>Age in years on a later day.
0 is returned as minimum.</returns> public static int Age(this DateTime birthDate, DateTime laterDate) { int age; age = laterDate.Year - birthDate.Year; if (age > 0) { age -= Convert.ToInt32(laterDate.Date < birthDate.Date.AddYears(age)); } else { age = 0; } return age; } } Now, run this test: class Program { static void Main(string[] args) { RunTest(); } private static void RunTest() { DateTime birthDate = new DateTime(2000, 2, 28); DateTime laterDate = new DateTime(2011, 2, 27); string iso = "yyyy-MM-dd"; for (int i = 0; i < 3; i++) { for (int j = 0; j < 3; j++) { Console.WriteLine("Birth date: " + birthDate.AddDays(i).ToString(iso) + " Later date: " + laterDate.AddDays(j).ToString(iso) + " Age: " + birthDate.AddDays(i).Age(laterDate.AddDays(j)).ToString()); } } Console.ReadKey(); } } The critical date example is this: Birth date: 2000-02-29 Later date: 2011-02-28 Age: 11 Output: { Birth date: 2000-02-28 Later date: 2011-02-27 Age: 10 Birth date: 2000-02-28 Later date: 2011-02-28 Age: 11 Birth date: 2000-02-28 Later date: 2011-03-01 Age: 11 Birth date: 2000-02-29 Later date: 2011-02-27 Age: 10 Birth date: 2000-02-29 Later date: 2011-02-28 Age: 11 Birth date: 2000-02-29 Later date: 2011-03-01 Age: 11 Birth date: 2000-03-01 Later date: 2011-02-27 Age: 10 Birth date: 2000-03-01 Later date: 2011-02-28 Age: 10 Birth date: 2000-03-01 Later date: 2011-03-01 Age: 11 } And for the later date 2012-02-28: { Birth date: 2000-02-28 Later date: 2012-02-28 Age: 12 Birth date: 2000-02-28 Later date: 2012-02-29 Age: 12 Birth date: 2000-02-28 Later date: 2012-03-01 Age: 12 Birth date: 2000-02-29 Later date: 2012-02-28 Age: 11 Birth date: 2000-02-29 Later date: 2012-02-29 Age: 12 Birth date: 2000-02-29 Later date: 2012-03-01 Age: 12 Birth date: 2000-03-01 Later date: 2012-02-28 Age: 11 Birth date: 2000-03-01 Later date: 2012-02-29 Age: 11 Birth date: 2000-03-01 Later date: 2012-03-01 Age: 12 } A: This is a strange way to do it, but if you format the date to yyyymmdd and subtract the date of birth from the current date then drop the last 4 digits you've got the age :) I don't know C#, but I believe this will work in any language. 20080814 - 19800703 = 280111 Drop the last 4 digits = 28. C# Code: int now = int.Parse(DateTime.Now.ToString("yyyyMMdd")); int dob = int.Parse(dateOfBirth.ToString("yyyyMMdd")); int age = (now - dob) / 10000; Or alternatively without all the type conversion in the form of an extension method. 
Error checking omitted: public static Int32 GetAge(this DateTime dateOfBirth) { var today = DateTime.Today; var a = (today.Year * 100 + today.Month) * 100 + today.Day; var b = (dateOfBirth.Year * 100 + dateOfBirth.Month) * 100 + dateOfBirth.Day; return (a - b) / 10000; } A: The following approach (extract from Time Period Library for .NET class DateDiff) considers the calendar of the culture info: // ---------------------------------------------------------------------- private static int YearDiff( DateTime date1, DateTime date2 ) { return YearDiff( date1, date2, DateTimeFormatInfo.CurrentInfo.Calendar ); } // YearDiff // ---------------------------------------------------------------------- private static int YearDiff( DateTime date1, DateTime date2, Calendar calendar ) { if ( date1.Equals( date2 ) ) { return 0; } int year1 = calendar.GetYear( date1 ); int month1 = calendar.GetMonth( date1 ); int year2 = calendar.GetYear( date2 ); int month2 = calendar.GetMonth( date2 ); // find the the day to compare int compareDay = date2.Day; int compareDaysPerMonth = calendar.GetDaysInMonth( year1, month1 ); if ( compareDay > compareDaysPerMonth ) { compareDay = compareDaysPerMonth; } // build the compare date DateTime compareDate = new DateTime( year1, month2, compareDay, date2.Hour, date2.Minute, date2.Second, date2.Millisecond ); if ( date2 > date1 ) { if ( compareDate < date1 ) { compareDate = compareDate.AddYears( 1 ); } } else { if ( compareDate > date1 ) { compareDate = compareDate.AddYears( -1 ); } } return year2 - calendar.GetYear( compareDate ); } // YearDiff Usage: // ---------------------------------------------------------------------- public void CalculateAgeSamples() { PrintAge( new DateTime( 2000, 02, 29 ), new DateTime( 2009, 02, 28 ) ); // > Birthdate=29.02.2000, Age at 28.02.2009 is 8 years PrintAge( new DateTime( 2000, 02, 29 ), new DateTime( 2012, 02, 28 ) ); // > Birthdate=29.02.2000, Age at 28.02.2012 is 11 years } // CalculateAgeSamples // ---------------------------------------------------------------------- public void PrintAge( DateTime birthDate, DateTime moment ) { Console.WriteLine( "Birthdate={0:d}, Age at {1:d} is {2} years", birthDate, moment, YearDiff( birthDate, moment ) ); } // PrintAge A: SQL version: declare @dd smalldatetime = '1980-04-01' declare @age int = YEAR(GETDATE())-YEAR(@dd) if (@dd> DATEADD(YYYY, -@age, GETDATE())) set @age = @age -1 print @age A: How come the MSDN help did not tell you that? It looks so obvious: System.DateTime birthTime = AskTheUser(myUser); // :-) System.DateTime now = System.DateTime.Now; System.TimeSpan age = now - birthTime; // As simple as that double ageInDays = age.TotalDays; // Will you convert to whatever you want yourself? 
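One caveat to the TimeSpan approach above, echoing the leap-year discussion earlier in this thread: dividing TotalDays by an average year length can come up one year short right at a birthday. A small self-contained sketch (the dates are arbitrary examples chosen to show the drift):

using System;

class DayCountDrift
{
    static void Main()
    {
        // An arbitrary example: a person's 68th birthday.
        DateTime birth = new DateTime(2000, 1, 1);
        DateTime now = new DateTime(2068, 1, 1);

        // Day-count estimate: 24837 days / 365.25 is about 67.998.
        int estimated = (int)((now - birth).TotalDays / 365.25);

        // Calendar arithmetic, as used by most answers in this thread.
        int calendar = now.Year - birth.Year;
        if (birth > now.AddYears(-calendar)) calendar--;

        Console.WriteLine(estimated); // 67 -- one short on the birthday itself
        Console.WriteLine(calendar);  // 68
    }
}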
A: Simple and readable, with a complementary overload: public static int getAge(DateTime birthDate) { var today = DateTime.Today; var age = today.Year - birthDate.Year; var monthDiff = today.Month - birthDate.Month; var dayDiff = today.Day - birthDate.Day; if (dayDiff < 0) { monthDiff--; } if (monthDiff < 0) { age--; } return age; }

A: var EndDate = new DateTime(2022, 4, 21); var StartDate = new DateTime(1986, 4, 25); Int32 Months = EndDate.Month - StartDate.Month; Int32 Years = EndDate.Year - StartDate.Year; Int32 Days = EndDate.Day - StartDate.Day; if (Days < 0) { Months = Months - 1; } if (Months < 0) { Years = Years - 1; Months = Months + 12; } string Ages = Years.ToString() + " Year(s) " + Months.ToString() + " Month(s) ";

A: Here's an answer making use of DateTimeOffset and manual math: var diff = DateTimeOffset.Now - dateOfBirth; var sinceEpoch = DateTimeOffset.UnixEpoch + diff; return sinceEpoch.Year - 1970;

A: To calculate how many years old a person is: DateTime dateOfBirth; DateTime today = DateTime.Today; int ageInYears = today.Year - dateOfBirth.Year; if (dateOfBirth > today.AddYears(-ageInYears)) ageInYears--;

A: Very simple answer: DateTime dob = new DateTime(1991, 3, 4); DateTime now = DateTime.Now; int dobDay = dob.Day, dobMonth = dob.Month; int add = -1; if (dobMonth < now.Month) { add = 0; } else if (dobMonth == now.Month) { if (dobDay <= now.Day) { add = 0; } else { add = -1; } } else { add = -1; } int age = now.Year - dob.Year + add;

A: int Age = new DateTime((DateTime.Now - BirthDate).Ticks).Year - 1; Console.WriteLine("Age {0}", Age);

A: I have no knowledge about DateTime, but all I can do is this: using System; public class Program { public static int getAge(int month, int day, int year) { DateTime today = DateTime.Today; int currentDay = today.Day; int currentYear = today.Year; int currentMonth = today.Month; int age = 0; if (currentMonth < month) { age -= 1; } else if (currentMonth == month) { if (currentDay < day) { age -= 1; } } currentYear -= year; age += currentYear; return age; } public static void Main() { int ageInYears = getAge(8, 10, 2007); Console.WriteLine(ageInYears); } } A little confusing, but looking at the code more carefully, it will all make sense.

A: var startDate = new DateTime(2015, 04, 05); //your start date var endDate = DateTime.Now; var years = 0; while (startDate < endDate) { startDate = startDate.AddYears(1); if (startDate <= endDate) { years++; } }

A: One could compute 'age' (i.e., the 'westerner' way) this way: public static int AgeInYears(this System.DateTime source, System.DateTime target) => target.Year - source.Year is int age && age > 0 && source.AddYears(age) > target ? age - 1 : age < 0 && source.AddYears(age) < target ? age + 1 : age; If the direction of time is 'negative', the age will be negative also. One can add a fraction, which represents the amount of age accumulated from the last birthday toward the next birthday: public static double AgeInTotalYears(this System.DateTime source, System.DateTime target) { var sign = (source <= target ? 1 : -1); var ageInYears = AgeInYears(source, target); // The method above. var last = source.AddYears(ageInYears); var next = source.AddYears(ageInYears + sign); var fractionalAge = (double)(target - last).Ticks / (double)(next - last).Ticks * sign; return ageInYears + fractionalAge; } The fraction is the ratio of passed time (from the last birthday) over total time (to the next birthday). Both of the methods work the same way whether going forward or backward in time.
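A short usage sketch for the two extension methods above; it assumes they are declared in a static class (say, static class AgeExtensions) so that the extension syntax works:

using System;

class AgeExtensionDemo
{
    static void Main()
    {
        var dob = new DateTime(2000, 2, 29);
        var on = new DateTime(2024, 2, 28); // the day before the 24th birthday

        Console.WriteLine(dob.AgeInYears(on));      // 23
        Console.WriteLine(dob.AgeInTotalYears(on)); // ~23.997, just shy of 24
    }
}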
A: This is a very simple approach: int Age = DateTime.Today.Year - new DateTime(2000, 1, 1).Year; (Note that it only subtracts years, so it is off by one until the birthday has passed.) A: A branchless solution: public int GetAge(DateOnly birthDate, DateOnly today) { return today.Year - birthDate.Year + (((today.Month << 5) + today.Day - ((birthDate.Month << 5) + birthDate.Day)) >> 31); } A: Don't know why nobody tried this: ushort age = (ushort)DateAndTime.DateDiff(DateInterval.Year, birthdate, DateTime.Now.Date); All it requires is using Microsoft.VisualBasic; and a reference to that assembly in the project (if not already referenced). A: Why can't it be simplified to just check the birth month and day? The first line (var year = end.Year - start.Year - 1;) assumes that the birthday has not yet occurred in the end year. Then the month and day are checked to see whether it has occurred; if so, one more year is added. There is no special treatment for the leap-year scenario: you cannot create a date (Feb 29) as the end date if it is not a leap year, so the birthday celebration is counted once the end date reaches March 1st, not Feb 28th. The function below covers this scenario as an ordinary date. static int Get_Age(DateTime start, DateTime end) { var year = end.Year - start.Year - 1; if (end.Month < start.Month) return year; else if (end.Month == start.Month) { if (end.Day >= start.Day) return ++year; return year; } else return ++year; } static void Test_Get_Age() { var start = new DateTime(2008, 4, 10); // b-date, leap year BTW var end = new DateTime(2023, 2, 1); // end date is before the b-date var result1 = Get_Age(start, end); var success1 = result1 == 14; // true end = new DateTime(2023, 4, 10); // end date is on the b-date var result2 = Get_Age(start, end); var success2 = result2 == 15; // true end = new DateTime(2023, 6, 22); // end date is after the b-date var result3 = Get_Age(start, end); var success3 = result3 == 15; // true start = new DateTime(2008, 2, 29); // b-date is on Feb 29 end = new DateTime(2023, 2, 28); // end date is before the b-date var result4 = Get_Age(start, end); var success4 = result4 == 14; // true end = new DateTime(2020, 2, 29); // end date is on the b-date, on another leap year var result5 = Get_Age(start, end); var success5 = result5 == 12; // true } A: A one-liner answer: DateTime dateOfBirth = Convert.ToDateTime("01/16/1990"); var age = ((DateTime.Now - dateOfBirth).Days) / 365; A: To calculate the age rounded to the nearest year: var ts = DateTime.Now - new DateTime(1988, 3, 19); var age = Math.Round(ts.Days / 365.0); A: Check this out: TimeSpan ts = DateTime.Now.Subtract(Birthdate); age = (byte)(ts.TotalDays / 365.25); A: I think this problem can be solved in an easier way, like this. The class can be: using System; namespace TSA { class BirthDay { double ageDay; public BirthDay(int day, int month, int year) { DateTime birthDate = new DateTime(year, month, day); ageDay = (DateTime.Now - birthDate).TotalDays; // now minus birth date, so past dates give a positive value; use DateTime.UtcNow if you prefer } internal int GetAgeYear() { return (int)Math.Truncate(ageDay / 365); } internal int GetAgeMonth() { return (int)Math.Truncate((ageDay % 365) / 30); } } } And the calls can be: BirthDay b = new BirthDay(1, 12, 1990); int year = b.GetAgeYear(); int month = b.GetAgeMonth();
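About the branchless solution above (DateOnly requires .NET 6 or later): it works by packing month and day into one comparable integer, which is possible because a day number fits in 5 bits. An illustrative walk-through with made-up dates:

// If today's packed month/day is smaller than the birthday's, the
// subtraction is negative and the arithmetic shift >> 31 smears the sign
// bit into -1; otherwise it yields 0. Adding that corrects the year count.
var birth = new DateOnly(2000, 6, 15);
var today = new DateOnly(2021, 6, 14);

int packedToday = (today.Month << 5) + today.Day;    // 6*32 + 14 = 206
int packedBirth = (birth.Month << 5) + birth.Day;    // 6*32 + 15 = 207
int correction  = (packedToday - packedBirth) >> 31; // -1: birthday not reached yet
int age = today.Year - birth.Year + correction;      // 21 + (-1) = 20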
{ "language": "en", "url": "https://stackoverflow.com/questions/9", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2199" }
Q: Calculate relative time in C# Given a specific DateTime value, how do I display relative time, like: * *2 hours ago *3 days ago *a month ago A: public static string RelativeDate(DateTime theDate) { Dictionary<long, string> thresholds = new Dictionary<long, string>(); int minute = 60; int hour = 60 * minute; int day = 24 * hour; thresholds.Add(60, "{0} seconds ago"); thresholds.Add(minute * 2, "a minute ago"); thresholds.Add(45 * minute, "{0} minutes ago"); thresholds.Add(120 * minute, "an hour ago"); thresholds.Add(day, "{0} hours ago"); thresholds.Add(day * 2, "yesterday"); thresholds.Add(day * 30, "{0} days ago"); thresholds.Add(day * 365, "{0} months ago"); thresholds.Add(long.MaxValue, "{0} years ago"); long since = (DateTime.Now.Ticks - theDate.Ticks) / 10000000; foreach (long threshold in thresholds.Keys) { if (since < threshold) { TimeSpan t = new TimeSpan((DateTime.Now.Ticks - theDate.Ticks)); return string.Format(thresholds[threshold], (t.Days > 365 ? t.Days / 365 : (t.Days > 0 ? t.Days : (t.Hours > 0 ? t.Hours : (t.Minutes > 0 ? t.Minutes : (t.Seconds > 0 ? t.Seconds : 0))))).ToString()); } } return ""; } I prefer this version for its conciseness, and ability to add in new tick points. This could be encapsulated with a Latest() extension to Timespan instead of that long 1 liner, but for the sake of brevity in posting, this will do. This fixes the an hour ago, 1 hours ago, by providing an hour until 2 hours have elapsed A: I got this answer from one of Bill Gates' blogs. I need to find it on my browser history and I'll give you the link. The Javascript code to do the same thing (as requested): function posted(t) { var now = new Date(); var diff = parseInt((now.getTime() - Date.parse(t)) / 1000); if (diff < 60) { return 'less than a minute ago'; } else if (diff < 120) { return 'about a minute ago'; } else if (diff < (2700)) { return (parseInt(diff / 60)).toString() + ' minutes ago'; } else if (diff < (5400)) { return 'about an hour ago'; } else if (diff < (86400)) { return 'about ' + (parseInt(diff / 3600)).toString() + ' hours ago'; } else if (diff < (172800)) { return '1 day ago'; } else {return (parseInt(diff / 86400)).toString() + ' days ago'; } } Basically, you work in terms of seconds. A: Java for client-side gwt usage: import java.util.Date; public class RelativeDateFormat { private static final long ONE_MINUTE = 60000L; private static final long ONE_HOUR = 3600000L; private static final long ONE_DAY = 86400000L; private static final long ONE_WEEK = 604800000L; public static String format(Date date) { long delta = new Date().getTime() - date.getTime(); if (delta < 1L * ONE_MINUTE) { return toSeconds(delta) == 1 ? "one second ago" : toSeconds(delta) + " seconds ago"; } if (delta < 2L * ONE_MINUTE) { return "one minute ago"; } if (delta < 45L * ONE_MINUTE) { return toMinutes(delta) + " minutes ago"; } if (delta < 90L * ONE_MINUTE) { return "one hour ago"; } if (delta < 24L * ONE_HOUR) { return toHours(delta) + " hours ago"; } if (delta < 48L * ONE_HOUR) { return "yesterday"; } if (delta < 30L * ONE_DAY) { return toDays(delta) + " days ago"; } if (delta < 12L * 4L * ONE_WEEK) { long months = toMonths(delta); return months <= 1 ? "one month ago" : months + " months ago"; } else { long years = toYears(delta); return years <= 1 ? 
"one year ago" : years + " years ago"; } } private static long toSeconds(long date) { return date / 1000L; } private static long toMinutes(long date) { return toSeconds(date) / 60L; } private static long toHours(long date) { return toMinutes(date) / 60L; } private static long toDays(long date) { return toHours(date) / 24L; } private static long toMonths(long date) { return toDays(date) / 30L; } private static long toYears(long date) { return toMonths(date) / 365L; } } A: I think there is already a number of answers related to this post, but one can use this which is easy to use just like plugin and also easily readable for programmers. Send your specific date, and get its value in string form: public string RelativeDateTimeCount(DateTime inputDateTime) { string outputDateTime = string.Empty; TimeSpan ts = DateTime.Now - inputDateTime; if (ts.Days > 7) { outputDateTime = inputDateTime.ToString("MMMM d, yyyy"); } else if (ts.Days > 0) { outputDateTime = ts.Days == 1 ? ("about 1 Day ago") : ("about " + ts.Days.ToString() + " Days ago"); } else if (ts.Hours > 0) { outputDateTime = ts.Hours == 1 ? ("an hour ago") : (ts.Hours.ToString() + " hours ago"); } else if (ts.Minutes > 0) { outputDateTime = ts.Minutes == 1 ? ("1 minute ago") : (ts.Minutes.ToString() + " minutes ago"); } else outputDateTime = "few seconds ago"; return outputDateTime; } A: public static string ToRelativeDate(DateTime input) { TimeSpan oSpan = DateTime.Now.Subtract(input); double TotalMinutes = oSpan.TotalMinutes; string Suffix = " ago"; if (TotalMinutes < 0.0) { TotalMinutes = Math.Abs(TotalMinutes); Suffix = " from now"; } var aValue = new SortedList<double, Func<string>>(); aValue.Add(0.75, () => "less than a minute"); aValue.Add(1.5, () => "about a minute"); aValue.Add(45, () => string.Format("{0} minutes", Math.Round(TotalMinutes))); aValue.Add(90, () => "about an hour"); aValue.Add(1440, () => string.Format("about {0} hours", Math.Round(Math.Abs(oSpan.TotalHours)))); // 60 * 24 aValue.Add(2880, () => "a day"); // 60 * 48 aValue.Add(43200, () => string.Format("{0} days", Math.Floor(Math.Abs(oSpan.TotalDays)))); // 60 * 24 * 30 aValue.Add(86400, () => "about a month"); // 60 * 24 * 60 aValue.Add(525600, () => string.Format("{0} months", Math.Floor(Math.Abs(oSpan.TotalDays / 30)))); // 60 * 24 * 365 aValue.Add(1051200, () => "about a year"); // 60 * 24 * 365 * 2 aValue.Add(double.MaxValue, () => string.Format("{0} years", Math.Floor(Math.Abs(oSpan.TotalDays / 365)))); return aValue.First(n => TotalMinutes < n.Key).Value.Invoke() + Suffix; } http://refactormycode.com/codes/493-twitter-esque-relative-dates C# 6 version: static readonly SortedList<double, Func<TimeSpan, string>> offsets = new SortedList<double, Func<TimeSpan, string>> { { 0.75, _ => "less than a minute"}, { 1.5, _ => "about a minute"}, { 45, x => $"{x.TotalMinutes:F0} minutes"}, { 90, x => "about an hour"}, { 1440, x => $"about {x.TotalHours:F0} hours"}, { 2880, x => "a day"}, { 43200, x => $"{x.TotalDays:F0} days"}, { 86400, x => "about a month"}, { 525600, x => $"{x.TotalDays / 30:F0} months"}, { 1051200, x => "about a year"}, { double.MaxValue, x => $"{x.TotalDays / 365:F0} years"} }; public static string ToRelativeDate(this DateTime input) { TimeSpan x = DateTime.Now - input; string Suffix = x.TotalMinutes > 0 ? 
" ago" : " from now"; x = new TimeSpan(Math.Abs(x.Ticks)); return offsets.First(n => x.TotalMinutes < n.Key).Value(x) + Suffix; } A: Here a rewrite from Jeffs Script for PHP: define("SECOND", 1); define("MINUTE", 60 * SECOND); define("HOUR", 60 * MINUTE); define("DAY", 24 * HOUR); define("MONTH", 30 * DAY); function relativeTime($time) { $delta = time() - $time; if ($delta < 1 * MINUTE) { return $delta == 1 ? "one second ago" : $delta . " seconds ago"; } if ($delta < 2 * MINUTE) { return "a minute ago"; } if ($delta < 45 * MINUTE) { return floor($delta / MINUTE) . " minutes ago"; } if ($delta < 90 * MINUTE) { return "an hour ago"; } if ($delta < 24 * HOUR) { return floor($delta / HOUR) . " hours ago"; } if ($delta < 48 * HOUR) { return "yesterday"; } if ($delta < 30 * DAY) { return floor($delta / DAY) . " days ago"; } if ($delta < 12 * MONTH) { $months = floor($delta / DAY / 30); return $months <= 1 ? "one month ago" : $months . " months ago"; } else { $years = floor($delta / DAY / 365); return $years <= 1 ? "one year ago" : $years . " years ago"; } } A: var ts = new TimeSpan(DateTime.Now.Ticks - dt.Ticks); A: If you want to have an output like "2 days, 4 hours and 12 minutes ago", you need a timespan: TimeSpan timeDiff = DateTime.Now-CreatedDate; Then you can access the values you like: timeDiff.Days timeDiff.Hours etc... A: Here's an implementation I added as an extension method to the DateTime class that handles both future and past dates and provides an approximation option that allows you to specify the level of detail you're looking for ("3 hour ago" vs "3 hours, 23 minutes, 12 seconds ago"): using System.Text; /// <summary> /// Compares a supplied date to the current date and generates a friendly English /// comparison ("5 days ago", "5 days from now") /// </summary> /// <param name="date">The date to convert</param> /// <param name="approximate">When off, calculate timespan down to the second. /// When on, approximate to the largest round unit of time.</param> /// <returns></returns> public static string ToRelativeDateString(this DateTime value, bool approximate) { StringBuilder sb = new StringBuilder(); string suffix = (value > DateTime.Now) ? " from now" : " ago"; TimeSpan timeSpan = new TimeSpan(Math.Abs(DateTime.Now.Subtract(value).Ticks)); if (timeSpan.Days > 0) { sb.AppendFormat("{0} {1}", timeSpan.Days, (timeSpan.Days > 1) ? "days" : "day"); if (approximate) return sb.ToString() + suffix; } if (timeSpan.Hours > 0) { sb.AppendFormat("{0}{1} {2}", (sb.Length > 0) ? ", " : string.Empty, timeSpan.Hours, (timeSpan.Hours > 1) ? "hours" : "hour"); if (approximate) return sb.ToString() + suffix; } if (timeSpan.Minutes > 0) { sb.AppendFormat("{0}{1} {2}", (sb.Length > 0) ? ", " : string.Empty, timeSpan.Minutes, (timeSpan.Minutes > 1) ? "minutes" : "minute"); if (approximate) return sb.ToString() + suffix; } if (timeSpan.Seconds > 0) { sb.AppendFormat("{0}{1} {2}", (sb.Length > 0) ? ", " : string.Empty, timeSpan.Seconds, (timeSpan.Seconds > 1) ? "seconds" : "second"); if (approximate) return sb.ToString() + suffix; } if (sb.Length == 0) return "right now"; sb.Append(suffix); return sb.ToString(); } A: I would provide some handy extensions methods for this and make the code more readable. First, couple of extension methods for Int32. 
public static class TimeSpanExtensions { public static TimeSpan Days(this int value) { return new TimeSpan(value, 0, 0, 0); } public static TimeSpan Hours(this int value) { return new TimeSpan(0, value, 0, 0); } public static TimeSpan Minutes(this int value) { return new TimeSpan(0, 0, value, 0); } public static TimeSpan Seconds(this int value) { return new TimeSpan(0, 0, 0, value); } public static TimeSpan Milliseconds(this int value) { return new TimeSpan(0, 0, 0, 0, value); } public static DateTime Ago(this TimeSpan value) { return DateTime.Now - value; } } Then, one for DateTime. public static class DateTimeExtensions { public static DateTime Ago(this DateTime dateTime, TimeSpan delta) { return dateTime - delta; } } Now, you can do something like below: var date = DateTime.Now; date.Ago(2.Days()); // 2 days ago date.Ago(7.Hours()); // 7 hours ago date.Ago(567.Milliseconds()); // 567 milliseconds ago A: There are also a package called Humanizr on Nuget, and it actually works really well, and is in the .NET Foundation. DateTime.UtcNow.AddHours(-30).Humanize() => "yesterday" DateTime.UtcNow.AddHours(-2).Humanize() => "2 hours ago" DateTime.UtcNow.AddHours(30).Humanize() => "tomorrow" DateTime.UtcNow.AddHours(2).Humanize() => "2 hours from now" TimeSpan.FromMilliseconds(1299630020).Humanize() => "2 weeks" TimeSpan.FromMilliseconds(1299630020).Humanize(3) => "2 weeks, 1 day, 1 hour" Scott Hanselman has a writeup on it on his blog A: /** * {@code date1} has to be earlier than {@code date2}. */ public static String relativize(Date date1, Date date2) { assert date2.getTime() >= date1.getTime(); long duration = date2.getTime() - date1.getTime(); long converted; if ((converted = TimeUnit.MILLISECONDS.toDays(duration)) > 0) { return String.format("%d %s ago", converted, converted == 1 ? "day" : "days"); } else if ((converted = TimeUnit.MILLISECONDS.toHours(duration)) > 0) { return String.format("%d %s ago", converted, converted == 1 ? "hour" : "hours"); } else if ((converted = TimeUnit.MILLISECONDS.toMinutes(duration)) > 0) { return String.format("%d %s ago", converted, converted == 1 ? "minute" : "minutes"); } else if ((converted = TimeUnit.MILLISECONDS.toSeconds(duration)) > 0) { return String.format("%d %s ago", converted, converted == 1 ? "second" : "seconds"); } else { return "just now"; } } A: Turkish localized version of Vincents answer. const int SECOND = 1; const int MINUTE = 60 * SECOND; const int HOUR = 60 * MINUTE; const int DAY = 24 * HOUR; const int MONTH = 30 * DAY; var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks); double delta = Math.Abs(ts.TotalSeconds); if (delta < 1 * MINUTE) return ts.Seconds + " saniye önce"; if (delta < 45 * MINUTE) return ts.Minutes + " dakika önce"; if (delta < 24 * HOUR) return ts.Hours + " saat önce"; if (delta < 48 * HOUR) return "dün"; if (delta < 30 * DAY) return ts.Days + " gün önce"; if (delta < 12 * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months + " ay önce"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years + " yıl önce"; } A: jquery.timeago plugin Jeff, because Stack Overflow uses jQuery extensively, I recommend the jquery.timeago plugin. Benefits: * *Avoid timestamps dated "1 minute ago" even though the page was opened 10 minutes ago; timeago refreshes automatically. *You can take full advantage of page and/or fragment caching in your web applications, because the timestamps aren't calculated on the server. *You get to use microformats like the cool kids. 
Just attach it to your timestamps on DOM ready: jQuery(document).ready(function() { jQuery('abbr.timeago').timeago(); }); This will turn all abbr elements with a class of timeago and an ISO 8601 timestamp in the title: <abbr class="timeago" title="2008-07-17T09:24:17Z">July 17, 2008</abbr> into something like this: <abbr class="timeago" title="July 17, 2008">4 months ago</abbr> which yields: 4 months ago. As time passes, the timestamps will automatically update. Disclaimer: I wrote this plugin, so I'm biased. A: I would recommend computing this on the client side too. Less work for the server. The following is the version that I use (from Zach Leatherman) /* * Javascript Humane Dates * Copyright (c) 2008 Dean Landolt (deanlandolt.com) * Re-write by Zach Leatherman (zachleat.com) * * Adopted from the John Resig's pretty.js * at http://ejohn.org/blog/javascript-pretty-date * and henrah's proposed modification * at http://ejohn.org/blog/javascript-pretty-date/#comment-297458 * * Licensed under the MIT license. */ function humane_date(date_str){ var time_formats = [ [60, 'just now'], [90, '1 minute'], // 60*1.5 [3600, 'minutes', 60], // 60*60, 60 [5400, '1 hour'], // 60*60*1.5 [86400, 'hours', 3600], // 60*60*24, 60*60 [129600, '1 day'], // 60*60*24*1.5 [604800, 'days', 86400], // 60*60*24*7, 60*60*24 [907200, '1 week'], // 60*60*24*7*1.5 [2628000, 'weeks', 604800], // 60*60*24*(365/12), 60*60*24*7 [3942000, '1 month'], // 60*60*24*(365/12)*1.5 [31536000, 'months', 2628000], // 60*60*24*365, 60*60*24*(365/12) [47304000, '1 year'], // 60*60*24*365*1.5 [3153600000, 'years', 31536000], // 60*60*24*365*100, 60*60*24*365 [4730400000, '1 century'] // 60*60*24*365*100*1.5 ]; var time = ('' + date_str).replace(/-/g,"/").replace(/[TZ]/g," "), dt = new Date, seconds = ((dt - new Date(time) + (dt.getTimezoneOffset() * 60000)) / 1000), token = ' ago', i = 0, format; if (seconds < 0) { seconds = Math.abs(seconds); token = ''; } while (format = time_formats[i++]) { if (seconds < format[0]) { if (format.length == 2) { return format[1] + (i > 1 ? token : ''); // Conditional so we don't return Just Now Ago } else { return Math.round(seconds / format[2]) + ' ' + format[1] + (i > 1 ? token : ''); } } } // overflow for centuries if(seconds > 4730400000) return Math.round(seconds / 4730400000) + ' centuries' + token; return date_str; }; if(typeof jQuery != 'undefined') { jQuery.fn.humane_dates = function(){ return this.each(function(){ var date = humane_date(this.title); if(date && jQuery(this).text() != date) // don't modify the dom if we don't have to jQuery(this).text(date); }); }; } A: Here's how I do it var ts = new TimeSpan(DateTime.UtcNow.Ticks - dt.Ticks); double delta = Math.Abs(ts.TotalSeconds); if (delta < 60) { return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago"; } if (delta < 60 * 2) { return "a minute ago"; } if (delta < 45 * 60) { return ts.Minutes + " minutes ago"; } if (delta < 90 * 60) { return "an hour ago"; } if (delta < 24 * 60 * 60) { return ts.Hours + " hours ago"; } if (delta < 48 * 60 * 60) { return "yesterday"; } if (delta < 30 * 24 * 60 * 60) { return ts.Days + " days ago"; } if (delta < 12 * 30 * 24 * 60 * 60) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months <= 1 ? "one month ago" : months + " months ago"; } int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years <= 1 ? "one year ago" : years + " years ago"; Suggestions? Comments? Ways to improve this algorithm? A: @jeff IMHO yours seems a little long. 
However it does seem a little more robust with support for "yesterday" and "years". But in my experience when this is used, the person is most likely to view the content in the first 30 days. It is only the really hardcore people that come after that. So, I usually elect to keep this short and simple. This is the method I am currently using in one of my websites. This returns only a relative day, hour and time. And then the user has to slap on "ago" in the output. public static string ToLongString(this TimeSpan time) { string output = String.Empty; if (time.Days > 0) output += time.Days + " days "; if ((time.Days == 0 || time.Days == 1) && time.Hours > 0) output += time.Hours + " hr "; if (time.Days == 0 && time.Minutes > 0) output += time.Minutes + " min "; if (output.Length == 0) output += time.Seconds + " sec"; return output.Trim(); } A: Surely an easy fix to get rid of the '1 hours ago' problem would be to increase the window that 'an hour ago' is valid for. Change if (delta < 5400) // 90 * 60 { return "an hour ago"; } into if (delta < 7200) // 120 * 60 { return "an hour ago"; } This means that something that occurred 110 minutes ago will read as 'an hour ago' - this may not be perfect, but I'd say it is better than the current situation of '1 hours ago'. A: public string getRelativeDateTime(DateTime date) { TimeSpan ts = DateTime.Now - date; if (ts.TotalMinutes < 1)//seconds ago return "just now"; if (ts.TotalHours < 1)//min ago return (int)ts.TotalMinutes == 1 ? "1 Minute ago" : (int)ts.TotalMinutes + " Minutes ago"; if (ts.TotalDays < 1)//hours ago return (int)ts.TotalHours == 1 ? "1 Hour ago" : (int)ts.TotalHours + " Hours ago"; if (ts.TotalDays < 7)//days ago return (int)ts.TotalDays == 1 ? "1 Day ago" : (int)ts.TotalDays + " Days ago"; if (ts.TotalDays < 30.4368)//weeks ago return (int)(ts.TotalDays / 7) == 1 ? "1 Week ago" : (int)(ts.TotalDays / 7) + " Weeks ago"; if (ts.TotalDays < 365.242)//months ago return (int)(ts.TotalDays / 30.4368) == 1 ? "1 Month ago" : (int)(ts.TotalDays / 30.4368) + " Months ago"; //years ago return (int)(ts.TotalDays / 365.242) == 1 ? "1 Year ago" : (int)(ts.TotalDays / 365.242) + " Years ago"; } Conversion values for days in a month and year were taken from Google. A: In a way you do your DateTime function over calculating relative time by either seconds to years, try something like this: using System; public class Program { public static string getRelativeTime(DateTime past) { DateTime now = DateTime.Today; string rt = ""; int time; string statement = ""; if (past.Second >= now.Second) { if (past.Second - now.Second == 1) { rt = "second ago"; } rt = "seconds ago"; time = past.Second - now.Second; statement = "" + time; return (statement + rt); } if (past.Minute >= now.Minute) { if (past.Second - now.Second == 1) { rt = "second ago"; } else { rt = "minutes ago"; } time = past.Minute - now.Minute; statement = "" + time; return (statement + rt); } // This process will go on until years } public static void Main() { DateTime before = new DateTime(1995, 8, 24); string date = getRelativeTime(before); Console.WriteLine("Windows 95 was {0}.", date); } } Not exactly working but if you modify and debug it a bit, it will likely do the job. A: A couple of years late to the party, but I had a requirement to do this for both past and future dates, so I combined Jeff's and Vincent's into this. It's a ternarytastic extravaganza! 
:) public static class DateTimeHelper { private const int SECOND = 1; private const int MINUTE = 60 * SECOND; private const int HOUR = 60 * MINUTE; private const int DAY = 24 * HOUR; private const int MONTH = 30 * DAY; /// <summary> /// Returns a friendly version of the provided DateTime, relative to now. E.g.: "2 days ago", or "in 6 months". /// </summary> /// <param name="dateTime">The DateTime to compare to Now</param> /// <returns>A friendly string</returns> public static string GetFriendlyRelativeTime(DateTime dateTime) { if (DateTime.UtcNow.Ticks == dateTime.Ticks) { return "Right now!"; } bool isFuture = (DateTime.UtcNow.Ticks < dateTime.Ticks); var ts = DateTime.UtcNow.Ticks < dateTime.Ticks ? new TimeSpan(dateTime.Ticks - DateTime.UtcNow.Ticks) : new TimeSpan(DateTime.UtcNow.Ticks - dateTime.Ticks); double delta = ts.TotalSeconds; if (delta < 1 * MINUTE) { return isFuture ? "in " + (ts.Seconds == 1 ? "one second" : ts.Seconds + " seconds") : ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago"; } if (delta < 2 * MINUTE) { return isFuture ? "in a minute" : "a minute ago"; } if (delta < 45 * MINUTE) { return isFuture ? "in " + ts.Minutes + " minutes" : ts.Minutes + " minutes ago"; } if (delta < 90 * MINUTE) { return isFuture ? "in an hour" : "an hour ago"; } if (delta < 24 * HOUR) { return isFuture ? "in " + ts.Hours + " hours" : ts.Hours + " hours ago"; } if (delta < 48 * HOUR) { return isFuture ? "tomorrow" : "yesterday"; } if (delta < 30 * DAY) { return isFuture ? "in " + ts.Days + " days" : ts.Days + " days ago"; } if (delta < 12 * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return isFuture ? "in " + (months <= 1 ? "one month" : months + " months") : months <= 1 ? "one month ago" : months + " months ago"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return isFuture ? "in " + (years <= 1 ? "one year" : years + " years") : years <= 1 ? "one year ago" : years + " years ago"; } } } A: using Fluent DateTime var dateTime1 = 2.Hours().Ago(); var dateTime2 = 3.Days().Ago(); var dateTime3 = 1.Months().Ago(); var dateTime4 = 5.Hours().FromNow(); var dateTime5 = 2.Weeks().FromNow(); var dateTime6 = 40.Seconds().FromNow(); A: Given the world and her husband appear to be posting code samples, here is what I wrote a while ago, based on a couple of these answers. I had a specific need for this code to be localisable. So I have two classes — Grammar, which specifies the localisable terms, and FuzzyDateExtensions, which holds a bunch of extension methods. I had no need to deal with future datetimes, so no attempt is made to handle them with this code. I've left some of the XMLdoc in the source, but removed most (where they'd be obvious) for brevity's sake. I've also not included every class member here: public class Grammar { /// <summary> Gets or sets the term for "just now". </summary> public string JustNow { get; set; } /// <summary> Gets or sets the term for "X minutes ago". </summary> /// <remarks> /// This is a <see cref="String.Format"/> pattern, where <c>{0}</c> /// is the number of minutes. /// </remarks> public string MinutesAgo { get; set; } public string OneHourAgo { get; set; } public string HoursAgo { get; set; } public string Yesterday { get; set; } public string DaysAgo { get; set; } public string LastMonth { get; set; } public string MonthsAgo { get; set; } public string LastYear { get; set; } public string YearsAgo { get; set; } /// <summary> Gets or sets the term for "ages ago". 
</summary> public string AgesAgo { get; set; } /// <summary> /// Gets or sets the threshold beyond which the fuzzy date should be /// considered "ages ago". /// </summary> public TimeSpan AgesAgoThreshold { get; set; } /// <summary> /// Initialises a new <see cref="Grammar"/> instance with the /// specified properties. /// </summary> private void Initialise(string justNow, string minutesAgo, string oneHourAgo, string hoursAgo, string yesterday, string daysAgo, string lastMonth, string monthsAgo, string lastYear, string yearsAgo, string agesAgo, TimeSpan agesAgoThreshold) { ... } } The FuzzyDateString class contains: public static class FuzzyDateExtensions { public static string ToFuzzyDateString(this TimeSpan timespan) { return timespan.ToFuzzyDateString(new Grammar()); } public static string ToFuzzyDateString(this TimeSpan timespan, Grammar grammar) { return GetFuzzyDateString(timespan, grammar); } public static string ToFuzzyDateString(this DateTime datetime) { return (DateTime.Now - datetime).ToFuzzyDateString(); } public static string ToFuzzyDateString(this DateTime datetime, Grammar grammar) { return (DateTime.Now - datetime).ToFuzzyDateString(grammar); } private static string GetFuzzyDateString(TimeSpan timespan, Grammar grammar) { timespan = timespan.Duration(); if (timespan >= grammar.AgesAgoThreshold) { return grammar.AgesAgo; } if (timespan < new TimeSpan(0, 2, 0)) // 2 minutes { return grammar.JustNow; } if (timespan < new TimeSpan(1, 0, 0)) // 1 hour { return String.Format(grammar.MinutesAgo, timespan.Minutes); } if (timespan < new TimeSpan(1, 55, 0)) // 1 hour 55 minutes { return grammar.OneHourAgo; } if (timespan < new TimeSpan(12, 0, 0) // 12 hours && (DateTime.Now - timespan).IsToday()) { return String.Format(grammar.HoursAgo, timespan.RoundedHours()); } if ((DateTime.Now.AddDays(1) - timespan).IsToday()) { return grammar.Yesterday; } if (timespan < new TimeSpan(32, 0, 0, 0) // 32 days && (DateTime.Now - timespan).IsThisMonth()) { return String.Format(grammar.DaysAgo, timespan.RoundedDays()); } if ((DateTime.Now.AddMonths(1) - timespan).IsThisMonth()) { return grammar.LastMonth; } if (timespan < new TimeSpan(365, 0, 0, 0, 0) // 365 days && (DateTime.Now - timespan).IsThisYear()) { return String.Format(grammar.MonthsAgo, timespan.RoundedMonths()); } if ((DateTime.Now - timespan).AddYears(1).IsThisYear()) { return grammar.LastYear; } return String.Format(grammar.YearsAgo, timespan.RoundedYears()); } } One of the key things I wanted to achieve, as well as localisation, was that "today" would only mean "this calendar day", so the IsToday, IsThisMonth, IsThisYear methods look like this: public static bool IsToday(this DateTime date) { return date.DayOfYear == DateTime.Now.DayOfYear && date.IsThisYear(); } and the rounding methods are like this (I've included RoundedMonths, as that's a bit different): public static int RoundedDays(this TimeSpan timespan) { return (timespan.Hours > 12) ? 
timespan.Days + 1 : timespan.Days; } public static int RoundedMonths(this TimeSpan timespan) { DateTime then = DateTime.Now - timespan; // Number of partial months elapsed since 1 Jan, AD 1 (DateTime.MinValue) int nowMonthYears = DateTime.Now.Year * 12 + DateTime.Now.Month; int thenMonthYears = then.Year * 12 + then.Month; return nowMonthYears - thenMonthYears; } I hope people find this useful and/or interesting :o) A: // Calculate total days in current year int daysInYear; for (var i = 1; i <= 12; i++) daysInYear += DateTime.DaysInMonth(DateTime.Now.Year, i); // Past date DateTime dateToCompare = DateTime.Now.Subtract(TimeSpan.FromMinutes(582)); // Calculate difference between current date and past date double diff = (DateTime.Now - dateToCompare).TotalMilliseconds; TimeSpan ts = TimeSpan.FromMilliseconds(diff); var years = ts.TotalDays / daysInYear; // Years var months = ts.TotalDays / (daysInYear / (double)12); // Months var weeks = ts.TotalDays / 7; // Weeks var days = ts.TotalDays; // Days var hours = ts.TotalHours; // Hours var minutes = ts.TotalMinutes; // Minutes var seconds = ts.TotalSeconds; // Seconds if (years >= 1) Console.WriteLine(Math.Round(years, 0) + " year(s) ago"); else if (months >= 1) Console.WriteLine(Math.Round(months, 0) + " month(s) ago"); else if (weeks >= 1) Console.WriteLine(Math.Round(weeks, 0) + " week(s) ago"); else if (days >= 1) Console.WriteLine(Math.Round(days, 0) + " days(s) ago"); else if (hours >= 1) Console.WriteLine(Math.Round(hours, 0) + " hour(s) ago"); else if (minutes >= 1) Console.WriteLine(Math.Round(minutes, 0) + " minute(s) ago"); else if (seconds >= 1) Console.WriteLine(Math.Round(seconds, 0) + " second(s) ago"); Console.ReadLine(); A: A "one-liner" using deconstruction and Linq to get "n [biggest unit of time] ago" : TimeSpan timeSpan = DateTime.Now - new DateTime(1234, 5, 6, 7, 8, 9); (string unit, int value) = new Dictionary<string, int> { {"year(s)", (int)(timeSpan.TotalDays / 365.25)}, //https://en.wikipedia.org/wiki/Year#Intercalation {"month(s)", (int)(timeSpan.TotalDays / 29.53)}, //https://en.wikipedia.org/wiki/Month {"day(s)", (int)timeSpan.TotalDays}, {"hour(s)", (int)timeSpan.TotalHours}, {"minute(s)", (int)timeSpan.TotalMinutes}, {"second(s)", (int)timeSpan.TotalSeconds}, {"millisecond(s)", (int)timeSpan.TotalMilliseconds} }.First(kvp => kvp.Value > 0); Console.WriteLine($"{value} {unit} ago"); You get 786 year(s) ago With the current year and month, like TimeSpan timeSpan = DateTime.Now - new DateTime(2020, 12, 6, 7, 8, 9); you get 4 day(s) ago With the actual date, like TimeSpan timeSpan = DateTime.Now - DateTime.Now.Date; you get 9 hour(s) ago A: Is there an easy way to do this in Java? The java.util.Date class seems rather limited. Here is my quick and dirty Java solution: import java.util.Date; import javax.management.timer.Timer; String getRelativeDate(Date date) { long delta = new Date().getTime() - date.getTime(); if (delta < 1L * Timer.ONE_MINUTE) { return toSeconds(delta) == 1 ? 
"one second ago" : toSeconds(delta) + " seconds ago"; } if (delta < 2L * Timer.ONE_MINUTE) { return "a minute ago"; } if (delta < 45L * Timer.ONE_MINUTE) { return toMinutes(delta) + " minutes ago"; } if (delta < 90L * Timer.ONE_MINUTE) { return "an hour ago"; } if (delta < 24L * Timer.ONE_HOUR) { return toHours(delta) + " hours ago"; } if (delta < 48L * Timer.ONE_HOUR) { return "yesterday"; } if (delta < 30L * Timer.ONE_DAY) { return toDays(delta) + " days ago"; } if (delta < 12L * 4L * Timer.ONE_WEEK) { // a month long months = toMonths(delta); return months <= 1 ? "one month ago" : months + " months ago"; } else { long years = toYears(delta); return years <= 1 ? "one year ago" : years + " years ago"; } } private long toSeconds(long date) { return date / 1000L; } private long toMinutes(long date) { return toSeconds(date) / 60L; } private long toHours(long date) { return toMinutes(date) / 60L; } private long toDays(long date) { return toHours(date) / 24L; } private long toMonths(long date) { return toDays(date) / 30L; } private long toYears(long date) { return toMonths(date) / 365L; } A: iPhone Objective-C Version + (NSString *)timeAgoString:(NSDate *)date { int delta = -(int)[date timeIntervalSinceNow]; if (delta < 60) { return delta == 1 ? @"one second ago" : [NSString stringWithFormat:@"%i seconds ago", delta]; } if (delta < 120) { return @"a minute ago"; } if (delta < 2700) { return [NSString stringWithFormat:@"%i minutes ago", delta/60]; } if (delta < 5400) { return @"an hour ago"; } if (delta < 24 * 3600) { return [NSString stringWithFormat:@"%i hours ago", delta/3600]; } if (delta < 48 * 3600) { return @"yesterday"; } if (delta < 30 * 24 * 3600) { return [NSString stringWithFormat:@"%i days ago", delta/(24*3600)]; } if (delta < 12 * 30 * 24 * 3600) { int months = delta/(30*24*3600); return months <= 1 ? @"one month ago" : [NSString stringWithFormat:@"%i months ago", months]; } else { int years = delta/(12*30*24*3600); return years <= 1 ? @"one year ago" : [NSString stringWithFormat:@"%i years ago", years]; } } A: I thought I'd give this a shot using classes and polymorphism. I had a previous iteration which used sub-classing which ended up having way too much overhead. I've switched to a more flexible delegate / public property object model which is significantly better. My code is very slightly more accurate, I wish I could come up with a better way to generate "months ago" that didn't seem too over-engineered. I think I'd still stick with Jeff's if-then cascade because it's less code and it's simpler (it's definitely easier to ensure it'll work as expected). For the below code PrintRelativeTime.GetRelativeTimeMessage(TimeSpan ago) returns the relative time message (e.g. "yesterday"). 
public class RelativeTimeRange : IComparable { public TimeSpan UpperBound { get; set; } public delegate string RelativeTimeTextDelegate(TimeSpan timeDelta); public RelativeTimeTextDelegate MessageCreator { get; set; } public int CompareTo(object obj) { if (!(obj is RelativeTimeRange)) { return 1; } // note that this sorts in reverse order to the way you'd expect, // this saves having to reverse a list later return (obj as RelativeTimeRange).UpperBound.CompareTo(UpperBound); } } public class PrintRelativeTime { private static List<RelativeTimeRange> timeRanges; static PrintRelativeTime() { timeRanges = new List<RelativeTimeRange>{ new RelativeTimeRange { UpperBound = TimeSpan.FromSeconds(1), MessageCreator = (delta) => { return "one second ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromSeconds(60), MessageCreator = (delta) => { return delta.Seconds + " seconds ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromMinutes(2), MessageCreator = (delta) => { return "one minute ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromMinutes(60), MessageCreator = (delta) => { return delta.Minutes + " minutes ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromHours(2), MessageCreator = (delta) => { return "one hour ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromHours(24), MessageCreator = (delta) => { return delta.Hours + " hours ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.FromDays(2), MessageCreator = (delta) => { return "yesterday"; } }, new RelativeTimeRange { UpperBound = DateTime.Now.Subtract(DateTime.Now.AddMonths(-1)), MessageCreator = (delta) => { return delta.Days + " days ago"; } }, new RelativeTimeRange { UpperBound = DateTime.Now.Subtract(DateTime.Now.AddMonths(-2)), MessageCreator = (delta) => { return "one month ago"; } }, new RelativeTimeRange { UpperBound = DateTime.Now.Subtract(DateTime.Now.AddYears(-1)), MessageCreator = (delta) => { return (int)Math.Floor(delta.TotalDays / 30) + " months ago"; } }, new RelativeTimeRange { UpperBound = DateTime.Now.Subtract(DateTime.Now.AddYears(-2)), MessageCreator = (delta) => { return "one year ago"; } }, new RelativeTimeRange { UpperBound = TimeSpan.MaxValue, MessageCreator = (delta) => { return (int)Math.Floor(delta.TotalDays / 365.24D) + " years ago"; } } }; timeRanges.Sort(); } public static string GetRelativeTimeMessage(TimeSpan ago) { RelativeTimeRange postRelativeDateRange = timeRanges[0]; foreach (var timeRange in timeRanges) { if (ago.CompareTo(timeRange.UpperBound) <= 0) { postRelativeDateRange = timeRange; } } return postRelativeDateRange.MessageCreator(ago); } } A: When you know the viewer's time zone, it might be clearer to use calendar days at the day scale. I'm not familiar with the .NET libraries so I don't know how you'd do that in C#, unfortunately. On consumer sites, you could also be hand-wavier under a minute. "Less than a minute ago" or "just now" could be good enough. A: In PHP, I do it this way: <?php function timesince($original) { // array of time period chunks $chunks = array( array(60 * 60 * 24 * 365 , 'year'), array(60 * 60 * 24 * 30 , 'month'), array(60 * 60 * 24 * 7, 'week'), array(60 * 60 * 24 , 'day'), array(60 * 60 , 'hour'), array(60 , 'minute'), ); $today = time(); /* Current unix time */ $since = $today - $original; if($since > 604800) { $print = date("M jS", $original); if($since > 31536000) { $print .= ", " . 
date("Y", $original); } return $print; } // $j saves performing the count function each time around the loop for ($i = 0, $j = count($chunks); $i < $j; $i++) { $seconds = $chunks[$i][0]; $name = $chunks[$i][1]; // finding the biggest chunk (if the chunk fits, break) if (($count = floor($since / $seconds)) != 0) { break; } } $print = ($count == 1) ? '1 '.$name : "$count {$name}s"; return $print . " ago"; } ?> A: using System; using System.Collections.Generic; using System.Linq; public static class RelativeDateHelper { private static Dictionary<double, Func<double, string>> sm_Dict = null; private static Dictionary<double, Func<double, string>> DictionarySetup() { var dict = new Dictionary<double, Func<double, string>>(); dict.Add(0.75, (mins) => "less than a minute"); dict.Add(1.5, (mins) => "about a minute"); dict.Add(45, (mins) => string.Format("{0} minutes", Math.Round(mins))); dict.Add(90, (mins) => "about an hour"); dict.Add(1440, (mins) => string.Format("about {0} hours", Math.Round(Math.Abs(mins / 60)))); // 60 * 24 dict.Add(2880, (mins) => "a day"); // 60 * 48 dict.Add(43200, (mins) => string.Format("{0} days", Math.Floor(Math.Abs(mins / 1440)))); // 60 * 24 * 30 dict.Add(86400, (mins) => "about a month"); // 60 * 24 * 60 dict.Add(525600, (mins) => string.Format("{0} months", Math.Floor(Math.Abs(mins / 43200)))); // 60 * 24 * 365 dict.Add(1051200, (mins) => "about a year"); // 60 * 24 * 365 * 2 dict.Add(double.MaxValue, (mins) => string.Format("{0} years", Math.Floor(Math.Abs(mins / 525600)))); return dict; } public static string ToRelativeDate(this DateTime input) { TimeSpan oSpan = DateTime.Now.Subtract(input); double TotalMinutes = oSpan.TotalMinutes; string Suffix = " ago"; if (TotalMinutes < 0.0) { TotalMinutes = Math.Abs(TotalMinutes); Suffix = " from now"; } if (null == sm_Dict) sm_Dict = DictionarySetup(); return sm_Dict.First(n => TotalMinutes < n.Key).Value.Invoke(TotalMinutes) + Suffix; } } The same as another answer to this question but as an extension method with a static dictionary. A: @Jeff var ts = new TimeSpan(DateTime.UtcNow.Ticks - dt.Ticks); Doing a subtraction on DateTime returns a TimeSpan anyway. So you can just do (DateTime.UtcNow - dt).TotalSeconds I'm also surprised to see the constants multiplied-out by hand and then comments added with the multiplications in. Was that some misguided optimisation? A: you can try this.I think it will work correctly. long delta = new Date().getTime() - date.getTime(); const int SECOND = 1; const int MINUTE = 60 * SECOND; const int HOUR = 60 * MINUTE; const int DAY = 24 * HOUR; const int MONTH = 30 * DAY; if (delta < 0L) { return "not yet"; } if (delta < 1L * MINUTE) { return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago"; } if (delta < 2L * MINUTE) { return "a minute ago"; } if (delta < 45L * MINUTE) { return ts.Minutes + " minutes ago"; } if (delta < 90L * MINUTE) { return "an hour ago"; } if (delta < 24L * HOUR) { return ts.Hours + " hours ago"; } if (delta < 48L * HOUR) { return "yesterday"; } if (delta < 30L * DAY) { return ts.Days + " days ago"; } if (delta < 12L * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months <= 1 ? "one month ago" : months + " months ago"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years <= 1 ? 
"one year ago" : years + " years ago"; } A: You can use TimeAgo extension as below: public static string TimeAgo(this DateTime dateTime) { string result = string.Empty; var timeSpan = DateTime.Now.Subtract(dateTime); if (timeSpan <= TimeSpan.FromSeconds(60)) { result = string.Format("{0} seconds ago", timeSpan.Seconds); } else if (timeSpan <= TimeSpan.FromMinutes(60)) { result = timeSpan.Minutes > 1 ? String.Format("about {0} minutes ago", timeSpan.Minutes) : "about a minute ago"; } else if (timeSpan <= TimeSpan.FromHours(24)) { result = timeSpan.Hours > 1 ? String.Format("about {0} hours ago", timeSpan.Hours) : "about an hour ago"; } else if (timeSpan <= TimeSpan.FromDays(30)) { result = timeSpan.Days > 1 ? String.Format("about {0} days ago", timeSpan.Days) : "yesterday"; } else if (timeSpan <= TimeSpan.FromDays(365)) { result = timeSpan.Days > 30 ? String.Format("about {0} months ago", timeSpan.Days / 30) : "about a month ago"; } else { result = timeSpan.Days > 365 ? String.Format("about {0} years ago", timeSpan.Days / 365) : "about a year ago"; } return result; } Or use jQuery plugin with Razor extension from Timeago. A: You can reduce the server-side load by performing this logic client-side. View source on some Digg pages for reference. They have the server emit an epoch time value that gets processed by Javascript. This way you don't need to manage the end user's time zone. The new server-side code would be something like: public string GetRelativeTime(DateTime timeStamp) { return string.Format("<script>printdate({0});</script>", timeStamp.ToFileTimeUtc()); } You could even add a NOSCRIPT block there and just perform a ToString(). A: Jeff, your code is nice but could be clearer with constants (as suggested in Code Complete). const int SECOND = 1; const int MINUTE = 60 * SECOND; const int HOUR = 60 * MINUTE; const int DAY = 24 * HOUR; const int MONTH = 30 * DAY; var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks); double delta = Math.Abs(ts.TotalSeconds); if (delta < 1 * MINUTE) return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago"; if (delta < 2 * MINUTE) return "a minute ago"; if (delta < 45 * MINUTE) return ts.Minutes + " minutes ago"; if (delta < 90 * MINUTE) return "an hour ago"; if (delta < 24 * HOUR) return ts.Hours + " hours ago"; if (delta < 48 * HOUR) return "yesterday"; if (delta < 30 * DAY) return ts.Days + " days ago"; if (delta < 12 * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months <= 1 ? "one month ago" : months + " months ago"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years <= 1 ? "one year ago" : years + " years ago"; } A: Here's the algorithm stackoverflow uses but rewritten more concisely in perlish pseudocode with a bug fix (no "one hours ago"). The function takes a (positive) number of seconds ago and returns a human-friendly string like "3 hours ago" or "yesterday". 
agoify($delta) local($y, $mo, $d, $h, $m, $s); $s = floor($delta); if($s<=1) return "a second ago"; if($s<60) return "$s seconds ago"; $m = floor($s/60); if($m==1) return "a minute ago"; if($m<45) return "$m minutes ago"; $h = floor($m/60); if($h==1) return "an hour ago"; if($h<24) return "$h hours ago"; $d = floor($h/24); if($d<2) return "yesterday"; if($d<30) return "$d days ago"; $mo = floor($d/30); if($mo<=1) return "a month ago"; $y = floor($mo/12); if($y<1) return "$mo months ago"; if($y==1) return "a year ago"; return "$y years ago"; A: This is my function, works like a charm :) public static string RelativeDate(DateTime theDate) { var span = DateTime.Now - theDate; if (span.Days > 365) { var years = (span.Days / 365); if (span.Days % 365 != 0) years += 1; return $"about {years} {(years == 1 ? "year" : "years")} ago"; } if (span.Days > 30) { var months = (span.Days / 30); if (span.Days % 31 != 0) months += 1; return $"about {months} {(months == 1 ? "month" : "months")} ago"; } if (span.Days > 0) return $"about {span.Days} {(span.Days == 1 ? "day" : "days")} ago"; if (span.Hours > 0) return $"about {span.Hours} {(span.Hours == 1 ? "hour" : "hours")} ago"; if (span.Minutes > 0) return $"about {span.Minutes} {(span.Minutes == 1 ? "minute" : "minutes")} ago"; if (span.Seconds > 5) return $"about {span.Seconds} seconds ago"; return span.Seconds <= 5 ? "about 5 seconds ago" : string.Empty; } A: The accepted answer by Vincent makes quite a few arbitrary decisions. Why is 45 minutes rounded up to an hour while 45 seconds is not rounded up to a minute? It has an increased level of cyclomatic complexity within the years and month calculations that makes it more complex to follow the logic. It makes the assumption that the TimeSpan is relative to the past (2 days ago) when it could very well be in the future (2 days until). It defines unnecessary constants instead of using TimeSpan.TicksPerSecond etc. This implementation resolves the above and updates the syntax to use switch expressions and relational patterns /// <summary> /// Convert a <see cref="TimeSpan"/> to a natural language representation. /// </summary> /// <example> /// <code> /// TimeSpan.FromSeconds(10).ToNaturalLanguage(); /// // 10 seconds /// </code> /// </example> public static string ToNaturalLanguage(this TimeSpan @this) { const int daysInWeek = 7; const int daysInMonth = 30; const int daysInYear = 365; const long threshold = 100 * TimeSpan.TicksPerMillisecond; @this = @this.TotalSeconds < 0 ? TimeSpan.FromSeconds(@this.TotalSeconds * -1) : @this; return (@this.Ticks + threshold) switch { < 2 * TimeSpan.TicksPerSecond => "a second", < 1 * TimeSpan.TicksPerMinute => @this.Seconds + " seconds", < 2 * TimeSpan.TicksPerMinute => "a minute", < 1 * TimeSpan.TicksPerHour => @this.Minutes + " minutes", < 2 * TimeSpan.TicksPerHour => "an hour", < 1 * TimeSpan.TicksPerDay => @this.Hours + " hours", < 2 * TimeSpan.TicksPerDay => "a day", < 1 * daysInWeek * TimeSpan.TicksPerDay => @this.Days + " days", < 2 * daysInWeek * TimeSpan.TicksPerDay => "a week", < 1 * daysInMonth * TimeSpan.TicksPerDay => (@this.Days / daysInWeek).ToString("F0") + " weeks", < 2 * daysInMonth * TimeSpan.TicksPerDay => "a month", < 1 * daysInYear * TimeSpan.TicksPerDay => (@this.Days / daysInMonth).ToString("F0") + " months", < 2 * daysInYear * TimeSpan.TicksPerDay => "a year", _ => (@this.Days / daysInYear).ToString("F0") + " years" }; } /// <summary> /// Convert a <see cref="DateTime"/> to a natural language representation. 
/// </summary> /// <example> /// <code> /// (DateTime.Now - TimeSpan.FromSeconds(10)).ToNaturalLanguage() /// // 10 seconds ago /// </code> /// </example> public static string ToNaturalLanguage(this DateTime @this) { TimeSpan timeSpan = @this - DateTime.Now; return timeSpan.TotalSeconds switch { >= 1 => timeSpan.ToNaturalLanguage() + " until", <= -1 => timeSpan.ToNaturalLanguage() + " ago", _ => "now", }; } You can test it with NUnit as follows: [TestCase("a second", 0)] [TestCase("a second", 1)] [TestCase("2 seconds", 2)] [TestCase("a minute", 0, 1)] [TestCase("5 minutes", 0, 5)] [TestCase("an hour", 0, 0, 1)] [TestCase("2 hours", 0, 0, 2)] [TestCase("a day", 0, 0, 24)] [TestCase("a day", 0, 0, 0, 1)] [TestCase("6 days", 0, 0, 0, 6)] [TestCase("a week", 0, 0, 0, 7)] [TestCase("4 weeks", 0, 0, 0, 29)] [TestCase("a month", 0, 0, 0, 30)] [TestCase("6 months", 0, 0, 0, 6 * 30)] [TestCase("a year", 0, 0, 0, 365)] [TestCase("68 years", int.MaxValue)] public void NaturalLanguageHelpers_TimeSpan( string expected, int seconds, int minutes = 0, int hours = 0, int days = 0 ) { // Arrange TimeSpan timeSpan = new(days, hours, minutes, seconds); // Act string result = timeSpan.ToNaturalLanguage(); // Assert Assert.That(result, Is.EqualTo(expected)); } [TestCase("now", 0)] [TestCase("10 minutes ago", 0, -10)] [TestCase("10 minutes until", 10, 10)] [TestCase("68 years until", int.MaxValue)] [TestCase("68 years ago", int.MinValue)] public void NaturalLanguageHelpers_DateTime( string expected, int seconds, int minutes = 0, int hours = 0, int days = 0 ) { // Arrange TimeSpan timeSpan = new(days, hours, minutes, seconds); DateTime now = DateTime.Now; DateTime dateTime = now + timeSpan; // Act string result = dateTime.ToNaturalLanguage(); // Assert Assert.That(result, Is.EqualTo(expected)); } Or as a gist: https://gist.github.com/StudioLE/2dd394e3f792e79adc927ede274df56e A: My way is much more simpler. You can tweak with the return strings as you want public static string TimeLeft(DateTime utcDate) { TimeSpan timeLeft = DateTime.UtcNow - utcDate; string timeLeftString = ""; if (timeLeft.Days > 0) { timeLeftString += timeLeft.Days == 1 ? timeLeft.Days + " day" : timeLeft.Days + " days"; } else if (timeLeft.Hours > 0) { timeLeftString += timeLeft.Hours == 1 ? timeLeft.Hours + " hour" : timeLeft.Hours + " hours"; } else { timeLeftString += timeLeft.Minutes == 1 ? timeLeft.Minutes+" minute" : timeLeft.Minutes + " minutes"; } return timeLeftString; } A: Simple and 100% working solution. Handling ago and future times as well.. 
just in case public string GetTimeSince(DateTime postDate) { string message = ""; DateTime currentDate = DateTime.Now; TimeSpan timegap = currentDate - postDate; if (timegap.Days > 365) { message = string.Format(L("Ago") + " {0} " + L("Years"), (((timegap.Days) / 30) / 12)); } else if (timegap.Days > 30) { message = string.Format(L("Ago") + " {0} " + L("Months"), timegap.Days/30); } else if (timegap.Days > 0) { message = string.Format(L("Ago") + " {0} " + L("Days"), timegap.Days); } else if (timegap.Hours > 0) { message = string.Format(L("Ago") + " {0} " + L("Hours"), timegap.Hours); } else if (timegap.Minutes > 0) { message = string.Format(L("Ago") + " {0} " + L("Minutes"), timegap.Minutes); } else if (timegap.Seconds > 0) { message = string.Format(L("Ago") + " {0} " + L("Seconds"), timegap.Seconds); } // let's handle future times..just in case else if (timegap.Days < -365) { message = string.Format(L("In") + " {0} " + L("Years"), (((Math.Abs(timegap.Days)) / 30) / 12)); } else if (timegap.Days < -30) { message = string.Format(L("In") + " {0} " + L("Months"), ((Math.Abs(timegap.Days)) / 30)); } else if (timegap.Days < 0) { message = string.Format(L("In") + " {0} " + L("Days"), Math.Abs(timegap.Days)); } else if (timegap.Hours < 0) { message = string.Format(L("In") + " {0} " + L("Hours"), Math.Abs(timegap.Hours)); } else if (timegap.Minutes < 0) { message = string.Format(L("In") + " {0} " + L("Minutes"), Math.Abs(timegap.Minutes)); } else if (timegap.Seconds < 0) { message = string.Format(L("In") + " {0} " + L("Seconds"), Math.Abs(timegap.Seconds)); } else { message = "a bit"; } return message; }
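A note on the L(...) calls above: the answer never defines them, and they look like a localization helper from whatever framework the author was using (an assumption on my part). If you only need plain English strings, a hypothetical identity stand-in keeps the method self-contained:

// Hypothetical stand-in for the undefined L(...) helper used above.
// Assumption: L maps a resource key to display text; here it is the identity.
private static string L(string key) => key;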
{ "language": "en", "url": "https://stackoverflow.com/questions/11", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1644" }
Q: Determine a user's timezone Is there a standard way for a web server to be able to determine a user's timezone within a web page? Perhaps from an HTTP header or part of the user-agent string? A: One possible option is to use the Date header field, which is defined in RFC 7231 and is supposed to include the timezone. Of course, it is not guaranteed that the value is really the client's timezone, but it can be a convenient starting point. A: JavaScript is the easiest way to get the client's local time. I would suggest using an XMLHttpRequest to send back the local time, and if that fails, fall back to the timezone detected based on their IP address. As far as geolocation, I've used MaxMind GeoIP on several projects and it works well, though I'm not sure if they provide timezone data. It's a service you pay for and they provide monthly updates to your database. They provide wrappers in several web languages. A: Here is a robust JavaScript solution to determine the time zone the browser is in. >>> var timezone = jstz.determine(); >>> timezone.name(); "Europe/London" https://github.com/pellepim/jstimezonedetect A: Here is a more complete way. * *Get the timezone offset for the user *Test some days on daylight saving boundaries to determine if they are in a zone that uses daylight saving. An excerpt is below: function TimezoneDetect(){ var dtDate = new Date('1/1/' + (new Date()).getUTCFullYear()); var intOffset = 10000; //set initial offset high so it is adjusted on the first attempt var intMonth; var intHoursUtc; var intHours; var intDaysMultiplyBy; // Go through each month to find the lowest offset to account for DST for (intMonth=0;intMonth < 12;intMonth++){ //go to the next month dtDate.setUTCMonth(dtDate.getUTCMonth() + 1); // To ignore daylight saving time look for the lowest offset. // Since, during DST, the clock moves forward, it'll be a bigger number. if (intOffset > (dtDate.getTimezoneOffset() * (-1))){ intOffset = (dtDate.getTimezoneOffset() * (-1)); } } return intOffset; } Getting TZ and DST from JS (via Way Back Machine) A: There can be a few ways to determine the timezone in the browser. If there is a standard function that is available and supported by your browser, that is what you should use. Below are three ways to get the same information in different formats. Avoid using non-standard solutions that make any guesses based on certain assumptions or hard coded lists of zones though they may be helpful if nothing else can be done. Once you have this info, you can pass this as a non-standard request header to server and use it there. If you also need the timezone offset, you can also pass it to server in headers or in request payload which can be retrieved with dateObj.getTimezoneOffset(). * *Use Intl API to get the Olson format (Standard and recommended way): Note that this is not supported by all browsers. Refer this link for details on browser support for this. This API let's you get the timezone in Olson format i.e., something like Asia/Kolkata, America/New_York etc. Intl.DateTimeFormat().resolvedOptions().timeZone *Use Date object to get the long format such as India Standard Time, Eastern Standard Time etc: This is supported by all browsers. let dateObj = new Date(2021, 11, 25, 09, 30, 00); //then dateObj.toString() //yields Sat Dec 25 2021 09:30:00 GMT+0530 (India Standard Time) //I am located in India (IST) Notice the string contains timezone info in long and short formats. 
You can now use regex to get this info out:
let longZoneRegex = /\((.+)\)/;
dateObj.toString().match(longZoneRegex);
//yields ['(India Standard Time)', 'India Standard Time', index: 34, input: 'Sat Dec 25 2021 09:30:00 GMT+0530 (India Standard Time)', groups: undefined]
//Note that output is an array so use output[1] to get the timezone name.
*Use Date object to get the short format such as GMT+0530, GMT-0500 etc: This is supported by all browsers.
Similarly, you can get the short format out too:
let shortZoneRegex = /GMT[+-]\d{1,4}/;
dateObj.toString().match(shortZoneRegex);
//yields ['GMT+0530', index: 25, input: 'Sat Dec 25 2021 09:30:00 GMT+0530 (India Standard Time)', groups: undefined]
//Note that output is an array so use output[0] to get the timezone name.
A: Using Unkwntech's approach, I wrote a function using jQuery and PHP. This is tested and does work!
On the PHP page where you want to have the timezone as a variable, have this snippet of code somewhere near the top of the page:
<?php
    session_start();
    $timezone = $_SESSION['time'];
?>
This will read the session variable "time", which we are now about to create.
On the same page, in the <head>, you need to first of all include jQuery:
<script type="text/javascript" src="http://code.jquery.com/jquery-latest.min.js"></script>
Also in the <head>, below the jQuery, paste this:
<script type="text/javascript">
$(document).ready(function() {
    if("<?php echo $timezone; ?>".length==0){
        var visitortime = new Date();
        var visitortimezone = "GMT " + -visitortime.getTimezoneOffset()/60;
        $.ajax({
            type: "GET",
            url: "http://example.org/timezone.php",
            data: 'time='+ visitortimezone,
            success: function(){
                location.reload();
            }
        });
    }
});
</script>
You may or may not have noticed, but you need to change the URL to your actual domain.
One last thing. You are probably wondering what the heck timezone.php is. Well, it is simply this: (create a new file called timezone.php and point to it with the above URL)
<?php
    session_start();
    $_SESSION['time'] = $_GET['time'];
?>
If this works correctly, it will first load the page, execute the JavaScript, and reload the page. You will then be able to read the $timezone variable and use it to your pleasure! It returns the current UTC/GMT time zone offset (GMT -7) or whatever timezone you are in.
A: -new Date().getTimezoneOffset()/60;
The method getTimezoneOffset() will subtract your time from GMT and return the number of minutes. So if you live in GMT-8, it will return 480.
To put this into hours, divide by 60. Also, notice that the sign is the opposite of what you need - it's calculating GMT's offset from your time zone, not your time zone's offset from GMT. To fix this, simply multiply by -1.
Also note that W3Schools says:
The returned value is not a constant, because of the practice of using Daylight Saving Time.
A: To submit the timezone offset as an HTTP header on AJAX requests with jQuery:
$.ajaxSetup({
    beforeSend: function(xhr, settings) {
        xhr.setRequestHeader("X-TZ-Offset", -new Date().getTimezoneOffset()/60);
    }
});
You can also do something similar to get the actual time zone name by using moment.tz.guess(); from http://momentjs.com/timezone/docs/#/using-timezones/guessing-user-timezone/
A: With the PHP date function you will get the date and time of the server on which the site is located. The only way to get the user's time is to use JavaScript.
But if your site requires registration, the best way is to ask the user to pick a time zone as a compulsory field during registration.
You can list various time zones in the register page and save that in the database. After this, if the user logs in to the site then you can set the default time zone for that session as per the user's selected time zone.
You can set any specific time zone using the PHP function date_default_timezone_set. This sets the specified time zone for users.
Basically the user's time zone is determined on the client side, so we must use JavaScript for this. Below is a script to get a user's time zone using PHP and JavaScript.
<?php
#http://www.php.net/manual/en/timezones.php List of Time Zones
function showclienttime()
{
    if(!isset($_COOKIE['GMT_bias']))
    {
?>
<script type="text/javascript">
    var Cookies = {};
    Cookies.create = function (name, value, days) {
        if (days) {
            var date = new Date();
            date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
            var expires = "; expires=" + date.toGMTString();
        }
        else {
            var expires = "";
        }
        document.cookie = name + "=" + value + expires + "; path=/";
        this[name] = value;
    }

    var now = new Date();
    Cookies.create("GMT_bias",now.getTimezoneOffset(),1);
    window.location = "<?php echo $_SERVER['PHP_SELF'];?>";
</script>
<?php
    }
    else {
        $fct_clientbias = $_COOKIE['GMT_bias'];
    }

    $fct_servertimedata = gettimeofday();
    $fct_servertime = $fct_servertimedata['sec'];
    $fct_serverbias = $fct_servertimedata['minuteswest'];
    $fct_totalbias = $fct_serverbias - $fct_clientbias;
    $fct_totalbias = $fct_totalbias * 60;
    $fct_clienttimestamp = $fct_servertime + $fct_totalbias;
    $fct_time = time();
    $fct_year = strftime("%Y", $fct_clienttimestamp);
    $fct_month = strftime("%B", $fct_clienttimestamp);
    $fct_day = strftime("%d", $fct_clienttimestamp);
    $fct_hour = strftime("%I", $fct_clienttimestamp);
    $fct_minute = strftime("%M", $fct_clienttimestamp);
    $fct_second = strftime("%S", $fct_clienttimestamp);
    $fct_am_pm = strftime("%p", $fct_clienttimestamp);
    echo $fct_day.", ".$fct_month." ".$fct_year." ( ".$fct_hour.":".$fct_minute.":".$fct_second." ".$fct_am_pm." )";
}
showclienttime();
?>
But from my point of view, if registration is mandatory in your project, it is better simply to ask the user for their time zone.
A: I still have not seen a detailed answer here that gets the time zone. You shouldn't need to geocode by IP address or use PHP (lol) or incorrectly guess from an offset.
Firstly a time zone is not just an offset from GMT. It is an area of land in which the time rules are set by local standards. Some countries have daylight saving, and will switch DST on at differing times. It's usually important to get the actual zone, not just the current offset. If you intend to store this timezone, for instance in user preferences, you want the zone and not just the offset. For realtime conversions it won't matter much.
Now, to get the time zone with JavaScript you can use this:
>> new Date().toTimeString();
"15:46:04 GMT+1200 (New Zealand Standard Time)"
//Use some regular expression to extract the time.
However I found it easier to simply use this robust plugin which returns the Olson-formatted timezone:
https://github.com/scottwater/jquery.detect_timezone
A: Don't use the IP address to definitively determine location (and hence timezone) -- that's because with NAT, proxies (increasingly popular), and VPNs, IP addresses do not necessarily realistically reflect the user's actual location, but the location at which the servers implementing those protocols reside.
Similar to how US area codes are no longer useful for locating a telephone user, given the popularity of number portability.
IP address and other techniques shown above are useful for suggesting a default that the user can adjust/correct. A: JavaScript: function maketimus(timestampz) { var linktime = new Date(timestampz * 1000); var linkday = linktime.getDate(); var freakingmonths = new Array(); freakingmonths[0] = "jan"; freakingmonths[1] = "feb"; freakingmonths[2] = "mar"; freakingmonths[3] = "apr"; freakingmonths[4] = "may"; freakingmonths[5] = "jun"; freakingmonths[6] = "jul"; freakingmonths[7] = "aug"; freakingmonths[8] = "sep"; freakingmonths[9] = "oct"; freakingmonths[10] = "nov"; freakingmonths[11] = "dec"; var linkmonthnum = linktime.getMonth(); var linkmonth = freakingmonths[linkmonthnum]; var linkyear = linktime.getFullYear(); var linkhour = linktime.getHours(); var linkminute = linktime.getMinutes(); if (linkminute < 10) { linkminute = "0" + linkminute; } var fomratedtime = linkday + linkmonth + linkyear + " " + linkhour + ":" + linkminute + "h"; return fomratedtime; } Simply provide your times in Unix timestamp format to this function; JavaScript already knows the timezone of the user. Like this: PHP: echo '<script type="text/javascript"> var eltimio = maketimus('.$unix_timestamp_ofshiz.'); document.write(eltimio); </script><noscript>pls enable javascript</noscript>'; This will always show the times correctly based on the timezone the person has set on his/her computer clock. There is no need to ask anything to anyone and save it into places, thank god! A: Easy, just use the JavaScript getTimezoneOffset function like so: -new Date().getTimezoneOffset()/60; A: All the magic seems to be in visitortime.getTimezoneOffset() That's cool, I didn't know about that. Does it work in Internet Explorer etc? From there you should be able to use JavaScript to Ajax, set cookies whatever. I'd probably go the cookie route myself. You'll need to allow the user to change it though. We tried to use geo-location (via maxmind) to do this a while ago, and it was wrong enough to make it not worth doing. So we just let the user set it in their profile, and show a notice to users who haven't set theirs yet. A: The most popular (==standard?) way of determining the time zone I've seen around is simply asking the users themselves. If your website requires subscription, this could be saved in the users' profile data. For anon users, the dates could be displayed as UTC or GMT or some such. I'm not trying to be a smart aleck. It's just that sometimes some problems have finer solutions outside of any programming context. A: There are no HTTP headers that will report the clients timezone so far although it has been suggested to include it in the HTTP specification. If it was me, I would probably try to fetch the timezone using clientside JavaScript and then submit it to the server using Ajax or something. A: If you happen to be using OpenID for authentication, Simple Registration Extension would solve the problem for authenticated users (You'll need to convert from tz to numeric). Another option would be to infer the time zone from the user agent's country preference. This is a somewhat crude method (won't work for en-US), but makes a good approximation. A: Here is an article (with source code) that explains how to determine and use localized time in an ASP.NET (VB.NET, C#) application: It's About Time In short, the described approach relies on the JavaScript getTimezoneOffset function, which returns the value that is saved in the session cookie and used by code-behind to adjust time values between GMT and local time. 
The nice thing is that the user does not need to specify the time zone (the code does it automatically). There is more involved (this is why I link to the article), but the provided code makes it really easy to use. I suspect that you can convert the logic to PHP and other languages (as long as you understand ASP.NET).
A: It is simple with JavaScript and PHP:
Even though the user can mess with his/her internal clock and/or timezone, the best way I found so far, to get the offset, remains new Date().getTimezoneOffset();. It's non-invasive, doesn't give headaches and eliminates the need to rely on third parties.
Say I have a table, users, that contains a field date_created int(13), for storing Unix timestamps.
Assuming a client creates a new account, data is received by post, and I need to insert/update the date_created column with the client's Unix timestamp, not the server's.
Since the timezoneOffset is needed at the time of insert/update, it is passed as an extra $_POST element when the client submits the form, thus eliminating the need to store it in sessions and/or cookies, and no additional server hits either.
var off = (-new Date().getTimezoneOffset()/60).toString();//note the '-' in front which makes it return positive for negative offsets and negative for positive offsets
var tzo = off == '0' ? 'GMT' : off.indexOf('-') > -1 ? 'GMT'+off : 'GMT+'+off;
Say the server receives tzo as $_POST['tzo'];
$ts = new DateTime('now', new DateTimeZone($_POST['tzo']));
$user_time = $ts->format("F j, Y, g:i a");//will return the user's current time in readable format, regardless of whether date_default_timezone() is set or not.
$user_timestamp = strtotime($user_time);
Insert/update date_created=$user_timestamp.
When retrieving the date_created, you can convert the timestamp like so:
$date_created = // Get from the database
$created = date("F j, Y, g:i a",$date_created); // Return it to the user or whatever
Now, this example may fit one's needs, when it comes to inserting a first timestamp... When it comes to an additional timestamp, or table, you may want to consider inserting the tzo value into the users table for future reference, or setting it as a session variable or as a cookie.
P.S. But what if the user travels and switches timezones? Logs in at GMT+4, travels fast to GMT-1 and logs in again. Last login would be in the future. I think... we think too much.
A: You could do it on the client with moment-timezone and send the value to server; sample usage:
> moment.tz.guess()
"America/Asuncion"
A: Getting a valid TZ Database timezone name in PHP is a two-step process:
* *With JavaScript, get the timezone offset in minutes through getTimezoneOffset. This offset will be positive if the local timezone is behind UTC and negative if it is ahead. So you must add an opposite sign to the offset.
var timezone_offset_minutes = new Date().getTimezoneOffset();
timezone_offset_minutes = timezone_offset_minutes == 0 ? 0 : -timezone_offset_minutes;
Pass this offset to PHP.
*In PHP convert this offset into a valid timezone name with the timezone_name_from_abbr function.
// Just an example.
$timezone_offset_minutes = -360;  // $_GET['timezone_offset_minutes']

// Convert minutes to seconds
$timezone_name = timezone_name_from_abbr("", $timezone_offset_minutes*60, false);

// America/Chicago
echo $timezone_name;
I've written a blog post on it: How to Detect User Timezone in PHP. It also contains a demo.
A: Try this PHP code:
<?php
$ip = $_SERVER['REMOTE_ADDR'];
$json = file_get_contents("http://api.easyjquery.com/ips/?ip=" .
$ip . "&full=true"); $json = json_decode($json,true); $timezone = $json['LocalTimeZone']; ?> A: A simple way to do it is by using: new Date().getTimezoneOffset(); A: First, understand that time zone detection in JavaScript is imperfect. You can get the local time zone offset for a particular date and time using getTimezoneOffset on an instance of the Date object, but that's not quite the same as a full IANA time zone like America/Los_Angeles. There are some options that can work though: * *Most modern browsers support IANA time zones in their implementation of the ECMAScript Internationalization API, so you can do this: const tzid = Intl.DateTimeFormat().resolvedOptions().timeZone; console.log(tzid); The result is a string containing the IANA time zone setting of the computer where the code is running. Supported environments are listed in the Intl compatibility table. Expand the DateTimeFormat section, and look at the feature named resolvedOptions().timeZone defaults to the host environment. * *Some libraries, such as Luxon use this API to determine the time zone through functions like luxon.Settings.defaultZoneName. *If you need to support an wider set of environments, such as older web browsers, you can use a library to make an educated guess at the time zone. They work by first trying the Intl API if it's available, and when it's not available, they interrogate the getTimezoneOffset function of the Date object, for several different points in time, using the results to choose an appropriate time zone from an internal data set. Both jsTimezoneDetect and moment-timezone have this functionality. // using jsTimeZoneDetect var tzid = jstz.determine().name(); // using moment-timezone var tzid = moment.tz.guess(); In both cases, the result can only be thought of as a guess. The guess may be correct in many cases, but not all of them. Additionally, these libraries have to be periodically updated to counteract the fact that many older JavaScript implementations are only aware of the current daylight saving time rule for their local time zone. More details on that here. Ultimately, a better approach is to actually ask your user for their time zone. Provide a setting that they can change. You can use one of the above options to choose a default setting, but don't make it impossible to deviate from that in your app. There's also the entirely different approach of not relying on the time zone setting of the user's computer at all. Instead, if you can gather latitude and longitude coordinates, you can resolve those to a time zone using one of these methods. This works well on mobile devices. A: Here's how I do it. This will set the PHP default timezone to the user's local timezone. 
Just paste the following at the top of all your pages:
<?php
   session_start();
   if(!isset($_SESSION['timezone']))
   {
       if(!isset($_REQUEST['offset']))
       {
       ?>
            <script>
            var d = new Date()
            var offset= -d.getTimezoneOffset()/60;
            location.href = "<?php echo $_SERVER['PHP_SELF']; ?>?offset="+offset;
            </script>
            <?php
       }
       else
       {
            $zonelist = array('Kwajalein' => -12.00, 'Pacific/Midway' => -11.00, 'Pacific/Honolulu' => -10.00, 'America/Anchorage' => -9.00, 'America/Los_Angeles' => -8.00, 'America/Denver' => -7.00, 'America/Tegucigalpa' => -6.00, 'America/New_York' => -5.00, 'America/Caracas' => -4.30, 'America/Halifax' => -4.00, 'America/St_Johns' => -3.30, 'America/Argentina/Buenos_Aires' => -3.00, 'America/Sao_Paulo' => -3.00, 'Atlantic/South_Georgia' => -2.00, 'Atlantic/Azores' => -1.00, 'Europe/Dublin' => 0, 'Europe/Belgrade' => 1.00, 'Europe/Minsk' => 2.00, 'Asia/Kuwait' => 3.00, 'Asia/Tehran' => 3.30, 'Asia/Muscat' => 4.00, 'Asia/Yekaterinburg' => 5.00, 'Asia/Kolkata' => 5.30, 'Asia/Katmandu' => 5.45, 'Asia/Dhaka' => 6.00, 'Asia/Rangoon' => 6.30, 'Asia/Krasnoyarsk' => 7.00, 'Asia/Brunei' => 8.00, 'Asia/Seoul' => 9.00, 'Australia/Darwin' => 9.30, 'Australia/Canberra' => 10.00, 'Asia/Magadan' => 11.00, 'Pacific/Fiji' => 12.00, 'Pacific/Tongatapu' => 13.00);
            $index = array_keys($zonelist, $_REQUEST['offset']);
            $_SESSION['timezone'] = $index[0];
       }
   }
   date_default_timezone_set($_SESSION['timezone']);

   //rest of your code goes here
?>
A: There's no such way to figure out the timezone in the actual HTML code or any user-agent string, but what you can do is make a basic function getting it using JavaScript.
I don't know how to code with JavaScript yet, so my function might take time to make. However, you can try to get the actual timezone also using JavaScript with the getTimezoneOffset() function in the Date section, or simply new Date().getTimezoneOffset();.
A: I think that @Matt Johnson-Pint's answer is by far the best, and a CanIuse search reveals that it is now widely adopted:
https://caniuse.com/?search=Intl.DateTimeFormat().resolvedOptions().timeZone
One of the challenges though is to consider why you want to know the timezone. Because I think one of the things most people have missed is that it can change! If a user travels with his laptop from Europe to America and you had previously stored the timezone in a database, it is now incorrect (even if the user never actually updates their device's timezone). This is also the problem with @Mads Kristiansen's answer, because users travel - you cannot rely on it as a given. For example, my Linux laptop has "automatic timezone" turned off. Whilst the time might update, my timezone doesn't.
So I believe the answer is - what do you need it for? Client side certainly seems to give an easier way to ascertain it, but both client and server side code will depend on either the user updating their timezone or it updating automatically. I might of course be wrong.
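For what it's worth, once a browser-detected IANA zone id (from Intl.DateTimeFormat().resolvedOptions().timeZone, as described above) reaches your server, consuming it is straightforward. Here is a minimal C# sketch; the zone id is hard-coded where a real handler would read it from the request, and note that TimeZoneInfo.FindSystemTimeZoneById only accepts IANA ids directly on ICU-based runtimes such as .NET 6+ (older .NET Framework on Windows expects Windows zone ids):
using System;

class ClientZoneDemo
{
    static void Main()
    {
        // In a real handler this would come from the client, e.g. a form
        // field or header populated with the Intl-detected zone id.
        string clientZoneId = "Europe/London";

        // Throws TimeZoneNotFoundException for unknown ids, so validate
        // untrusted input before calling.
        TimeZoneInfo zone = TimeZoneInfo.FindSystemTimeZoneById(clientZoneId);

        DateTime utcNow = DateTime.UtcNow;
        DateTime clientLocal = TimeZoneInfo.ConvertTimeFromUtc(utcNow, zone);

        Console.WriteLine("UTC now:     {0:u}", utcNow);
        Console.WriteLine("Client time: {0} ({1})", clientLocal, zone.Id);
    }
}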
{ "language": "en", "url": "https://stackoverflow.com/questions/13", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "701" }
Q: Difference between Math.Floor() and Math.Truncate() What is the difference between Math.Floor() and Math.Truncate() in .NET? A: Math.Floor() rounds toward negative infinity Math.Truncate rounds up or down towards zero. For example: Math.Floor(-3.4) = -4 Math.Truncate(-3.4) = -3 while Math.Floor(3.4) = 3 Math.Truncate(3.4) = 3 A: Math.floor sliiiide to the left... Math.ceil sliiiide to the right... Math.truncate criiiiss crooooss (floor/ceil always towards 0) Math.round cha cha, real smooth... (go to closest side) Let's go to work! (⌐□_□) To the left... Math.floor Take it back now y'all... -- Two hops this time... -=2 Everybody clap your hands ✋✋ How low can you go? Can you go down low? All the way to the floor? if (this == "wrong") return "i don't wanna be right"; Math.truncate(x) is also the same as int(x). by removing a positive or negative fraction, you're always heading towards 0. A: Math.Floor rounds down, Math.Ceiling rounds up, and Math.Truncate rounds towards zero. Thus, Math.Truncate is like Math.Floor for positive numbers, and like Math.Ceiling for negative numbers. Here's the reference. For completeness, Math.Round rounds to the nearest integer. If the number is exactly midway between two integers, then it rounds towards the even one. Reference. See also: Pax Diablo's answer. Highly recommended! A: Math.floor() will always round down ie., it returns LESSER integer. While round() will return the NEAREST integer math.floor() Returns the largest integer less than or equal to the specified number. math.truncate() Calculates the integral part of a number. A: Some examples: Round(1.5) = 2 Round(2.5) = 2 Round(1.5, MidpointRounding.AwayFromZero) = 2 Round(2.5, MidpointRounding.AwayFromZero) = 3 Round(1.55, 1) = 1.6 Round(1.65, 1) = 1.6 Round(1.55, 1, MidpointRounding.AwayFromZero) = 1.6 Round(1.65, 1, MidpointRounding.AwayFromZero) = 1.7 Truncate(2.10) = 2 Truncate(2.00) = 2 Truncate(1.90) = 1 Truncate(1.80) = 1 A: Follow these links for the MSDN descriptions of: * *Math.Floor, which rounds down towards negative infinity. *Math.Ceiling, which rounds up towards positive infinity. *Math.Truncate, which rounds up or down towards zero. *Math.Round, which rounds to the nearest integer or specified number of decimal places. You can specify the behavior if it's exactly equidistant between two possibilities, such as rounding so that the final digit is even ("Round(2.5,MidpointRounding.ToEven)" becoming 2) or so that it's further away from zero ("Round(2.5,MidpointRounding.AwayFromZero)" becoming 3). The following diagram and table may help: -3 -2 -1 0 1 2 3 +--|------+---------+----|----+--|------+----|----+-------|-+ a b c d e a=-2.7 b=-0.5 c=0.3 d=1.5 e=2.8 ====== ====== ===== ===== ===== Floor -3 -1 0 1 2 Ceiling -2 0 1 2 3 Truncate -2 0 0 1 2 Round (ToEven) -3 0 0 2 3 Round (AwayFromZero) -3 -1 0 2 3 Note that Round is a lot more powerful than it seems, simply because it can round to a specific number of decimal places. All the others round to zero decimals always. For example: n = 3.145; a = System.Math.Round (n, 2, MidpointRounding.ToEven); // 3.14 b = System.Math.Round (n, 2, MidpointRounding.AwayFromZero); // 3.15 With the other functions, you have to use multiply/divide trickery to achieve the same effect: c = System.Math.Truncate (n * 100) / 100; // 3.14 d = System.Math.Ceiling (n * 100) / 100; // 3.15 A: They are functionally equivalent with positive numbers. The difference is in how they handle negative numbers. 
For example:
Math.Floor(2.5) = 2
Math.Truncate(2.5) = 2

Math.Floor(-2.5) = -3
Math.Truncate(-2.5) = -2
MSDN links:
- Math.Floor Method
- Math.Truncate Method
P.S. Beware of Math.Round; it may not be what you expect.
To get the "standard" rounding result use:
double myFloat = 4.5;
Console.WriteLine( Math.Round(myFloat) ); // writes 4
Console.WriteLine( Math.Round(myFloat, 0, MidpointRounding.AwayFromZero) ); // writes 5
Console.WriteLine( myFloat.ToString("F0") ); // writes 5
A: Math.Floor(): It gives the largest integer less than or equal to the given number.
Math.Floor(3.45) = 3
Math.Floor(-3.45) = -4
Math.Truncate(): It removes the decimal places of the number and replaces them with zero.
Math.Truncate(3.45) = 3
Math.Truncate(-3.45) = -3
Also, from the above examples we can see that floor and truncate are the same for positive numbers.
A: Try this, Examples:
Math.Floor() vs Math.Truncate()
Math.Floor(2.56) = 2
Math.Floor(3.22) = 3
Math.Floor(-2.56) = -3
Math.Floor(-3.26) = -4
Math.Truncate(2.56) = 2
Math.Truncate(2.00) = 2
Math.Truncate(1.20) = 1
Math.Truncate(-3.26) = -3
Math.Truncate(-3.96) = -3
Also Math.Round()
Math.Round(1.6) = 2
Math.Round(-8.56) = -9
Math.Round(8.16) = 8
Math.Round(8.50) = 8
Math.Round(8.51) = 9
Math.Floor() returns the largest integer less than or equal to the specified number. MSDN system.math.floor
Math.Truncate() calculates the integral part of a number. MSDN system.math.truncate
A: Math.Floor() rounds "toward negative infinity" in compliance with IEEE Standard 754 section 4.
Math.Truncate() rounds "to the nearest integer towards zero."
A: Math.Floor(): Returns the largest integer less than or equal to the specified double-precision floating-point number.
Math.Round(): Rounds a value to the nearest integer or to the specified number of fractional digits.
A: Truncate drops everything after the decimal point.
A: Going by the mathematical definition of floor, that is, "the greatest integer less than or equal to a number", this is completely unambiguous, whereas truncate just removes the fractional part, which is equivalent to rounding towards 0.
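To see the four rounding behaviours side by side, here is a quick self-contained C# check; the sample values match the diagram above, and the results follow from the definitions in these answers:
using System;

class RoundingComparison
{
    static void Main()
    {
        double[] values = { -2.7, -0.5, 0.3, 1.5, 2.8 };

        foreach (double v in values)
        {
            Console.WriteLine(
                "{0,5}: Floor={1,3}  Ceiling={2,3}  Truncate={3,3}  Round={4,3}",
                v,
                Math.Floor(v),    // toward negative infinity
                Math.Ceiling(v),  // toward positive infinity
                Math.Truncate(v), // toward zero
                Math.Round(v));   // to nearest, midpoints to even
        }
    }
}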
{ "language": "en", "url": "https://stackoverflow.com/questions/14", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "491" }
Q: Filling a DataSet or a DataTable from a LINQ query result set How do you expose a LINQ query as an ASMX web service?
Usually, from the business tier, I can return a typed DataSet or a DataTable which can be serialized for transport over ASMX.
How can I do the same for a LINQ query? Is there a way to populate a typed DataSet or a DataTable via a LINQ query?
public static MyDataTable CallMySproc()
{
    string conn = "...";
    MyDatabaseDataContext db = new MyDatabaseDataContext(conn);
    MyDataTable dt = new MyDataTable();

    // execute a sproc via LINQ
    var query = from dr in db.MySproc().AsEnumerable()
                select dr;

    // copy LINQ query result set into a DataTable - this does not work!
    dt = query.CopyToDataTable();
    return dt;
}
How could I put the result set of a LINQ query into a DataSet or a DataTable? Alternatively, can the LINQ query be made serializable so that I can expose it as an ASMX web service?
A: As mentioned in the question, IEnumerable<DataRow> has a CopyToDataTable method:
IEnumerable<DataRow> query =
    from order in orders.AsEnumerable()
    where order.Field<DateTime>("OrderDate") > new DateTime(2001, 8, 1)
    select order;

// Create a table from the query.
DataTable boundTable = query.CopyToDataTable<DataRow>();
Why won't that work for you?
A: If you use IEnumerable as the return type, it will return your query variable directly.
MyDataContext db = new MyDataContext();
IEnumerable<DataRow> query =
    (from order in db.Orders.AsEnumerable()
     select new { order.Property, order.Property2 }) as IEnumerable<DataRow>;
return query.CopyToDataTable<DataRow>();
A: To perform this query against a DataContext class, you'll need to do the following:
MyDataContext db = new MyDataContext();
IEnumerable<DataRow> query =
    (from order in db.Orders.AsEnumerable()
     select new { order.Property, order.Property2 }) as IEnumerable<DataRow>;
return query.CopyToDataTable<DataRow>();
Without the as IEnumerable<DataRow> cast, you will see the following compilation error:
Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<AnonymousType#1>' to 'System.Collections.Generic.IEnumerable<System.Data.DataRow>'. An explicit conversion exists (are you missing a cast?)
A: Make a set of Data Transfer Objects, a couple of mappers, and return that via the .asmx (a sketch of this approach appears at the end of this thread).
You should never expose the database objects directly, as a change in the procedure schema will propagate to the web service consumer without you noticing it.
A: For the sake of completeness, these solutions do not work for EF Core (at least not for EF Core 2.2). Casting to IEnumerable<DataRow>, as suggested in the other answers here, fails.
Implementing this class and extension methods worked for me: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/implement-copytodatatable-where-type-not-a-datarow. Why it's not built into EF Core, I have no idea.
A: If you use a return type of IEnumerable, you can return your query variable directly.
A: Create a class object and return a List<T> of the query.
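To make the DTO suggestion above concrete, here is a minimal sketch. Order, OrderDto and the property names are made-up placeholders standing in for your LINQ-to-SQL entities; the point is that the service serializes plain objects instead of a DataTable:
using System;
using System.Collections.Generic;
using System.Linq;

// Stub standing in for the LINQ-to-SQL entity class.
public class Order
{
    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
}

// Plain, serializable shape actually exposed by the web service.
public class OrderDto
{
    public int Id { get; set; }
    public DateTime OrderDate { get; set; }
}

public static class OrderMapper
{
    // Mapping here keeps schema changes from leaking to consumers.
    public static List<OrderDto> ToDtos(IEnumerable<Order> orders)
    {
        return orders
            .Select(o => new OrderDto { Id = o.Id, OrderDate = o.OrderDate })
            .ToList();
    }
}

class DtoDemo
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Id = 1, OrderDate = new DateTime(2001, 8, 2) }
        };

        foreach (var dto in OrderMapper.ToDtos(orders))
            Console.WriteLine("{0}: {1:d}", dto.Id, dto.OrderDate);
    }
}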
{ "language": "en", "url": "https://stackoverflow.com/questions/16", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "157" }
Q: Binary Data in MySQL How do I store binary data in MySQL? A: For a table like this: CREATE TABLE binary_data ( id INT(4) NOT NULL AUTO_INCREMENT PRIMARY KEY, description CHAR(50), bin_data LONGBLOB, filename CHAR(50), filesize CHAR(50), filetype CHAR(50) ); Here is a PHP example: <?php // store.php3 - by Florian Dittmer <dittmer@gmx.net> // Example php script to demonstrate the storing of binary files into // an sql database. More information can be found at http://www.phpbuilder.com/ ?> <html> <head><title>Store binary data into SQL Database</title></head> <body> <?php // Code that will be executed if the form has been submitted: if ($submit) { // Connect to the database (you may have to adjust // the hostname, username or password). mysql_connect("localhost", "root", "password"); mysql_select_db("binary_data"); $data = mysql_real_escape_string(fread(fopen($form_data, "r"), filesize($form_data))); $result = mysql_query("INSERT INTO binary_data (description, bin_data, filename, filesize, filetype) ". "VALUES ('$form_description', '$data', '$form_data_name', '$form_data_size', '$form_data_type')"); $id= mysql_insert_id(); print "<p>This file has the following Database ID: <b>$id</b>"; mysql_close(); } else { // else show the form to submit new data: ?> <form method="post" action="<?php echo $PHP_SELF; ?>" enctype="multipart/form-data"> File Description:<br> <input type="text" name="form_description" size="40"> <input type="hidden" name="MAX_FILE_SIZE" value="1000000"> <br>File to upload/store in database:<br> <input type="file" name="form_data" size="40"> <p><input type="submit" name="submit" value="submit"> </form> <?php } ?> </body> </html> A: I strongly recommend against storing binary data in a relational database. Relational databases are designed to work with fixed-size data; that's where their performance strength is: remember Joel's old article on why databases are so fast? because it takes exactly 1 pointer increment to move from a record to another record. If you add BLOB data of undefined and vastly varying size, you'll screw up performance. Instead, store files in the file system, and store file names in your database. A: While you haven't said what you're storing, and you may have a great reason for doing so, often the answer is 'as a filesystem reference' and the actual data is on the filesystem somewhere. http://www.onlamp.com/pub/a/onlamp/2002/07/11/MySQLtips.html A: It depends on the data you wish to store. The above example uses the LONGBLOB data type, but you should be aware that there are other binary data types: TINYBLOB/BLOB/MEDIUMBLOB/LONGBLOB VARBINARY BINARY Each has its use cases. If it is a known (short) length (e.g. packed data), BINARY or VARBINARY will work most of the time. They have the added benefit of being able to index on them. A: While it shouldn't be necessary, you could try base64 encoding data in and decoding it out. That means the db will just have ascii characters. It will take a bit more space and time, but any issue to do with the binary data will be eliminated. A: The answer by phpguy is correct but I think there is a lot of confusion in the additional details there. The basic answer is in a BLOB data type / attribute domain. BLOB is short for Binary Large Object and that column data type is specific for handling binary data. See the relevant manual page for MySQL. A: If the - not recommended - BLOB field exists, you can save data this way: mysql_query("UPDATE table SET field=X'".bin2hex($bin_data)."' WHERE id=$id"); Idea taken from here. 
A: When I need to store binary data I always use the VARBINARY format, as introduced by d0nut in one of the previous answers.
You can find documentation on the MySQL website under the topic 12.4.2 The BINARY and VARBINARY Types. If you are asking what the advantages are, please read the question why-varbinary-instead-of-varchar.
A: The question also arises how to get the data into the BLOB. You can put the data in an INSERT statement, as the PHP example shows (although you should use mysql_real_escape_string instead of addslashes). If the file exists on the database server, you can also use MySQL's LOAD_FILE() function.
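In modern code, the safest route is a parameterized command rather than string escaping. Here is a sketch in C#, assuming the MySql.Data ADO.NET provider and the binary_data table from the first answer (the connection string and file name are placeholders):
using System;
using System.IO;
using MySql.Data.MySqlClient;

class BlobInsertDemo
{
    static void Main()
    {
        byte[] fileBytes = File.ReadAllBytes("example.bin");

        using (var conn = new MySqlConnection(
            "Server=localhost;Database=test;Uid=root;Pwd=password;"))
        {
            conn.Open();

            // Parameters transmit the bytes as-is: no escaping, no hex
            // encoding, and no SQL injection risk.
            using (var cmd = new MySqlCommand(
                "INSERT INTO binary_data (description, bin_data, filename) " +
                "VALUES (@desc, @data, @name)", conn))
            {
                cmd.Parameters.AddWithValue("@desc", "example upload");
                cmd.Parameters.AddWithValue("@data", fileBytes);
                cmd.Parameters.AddWithValue("@name", "example.bin");
                cmd.ExecuteNonQuery();
            }

            Console.WriteLine("Stored {0} bytes.", fileBytes.Length);
        }
    }
}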
{ "language": "en", "url": "https://stackoverflow.com/questions/17", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "198" }
Q: What is the fastest way to get the value of π? I'm looking for the fastest way to obtain the value of π, as a personal challenge. More specifically, I'm using ways that don't involve using #define constants like M_PI, or hard-coding the number in. The program below tests the various ways I know of. The inline assembly version is, in theory, the fastest option, though clearly not portable. I've included it as a baseline to compare against the other versions. In my tests, with built-ins, the 4 * atan(1) version is fastest on GCC 4.2, because it auto-folds the atan(1) into a constant. With -fno-builtin specified, the atan2(0, -1) version is fastest. Here's the main testing program (pitimes.c): #include <math.h> #include <stdio.h> #include <time.h> #define ITERS 10000000 #define TESTWITH(x) { \ diff = 0.0; \ time1 = clock(); \ for (i = 0; i < ITERS; ++i) \ diff += (x) - M_PI; \ time2 = clock(); \ printf("%s\t=> %e, time => %f\n", #x, diff, diffclock(time2, time1)); \ } static inline double diffclock(clock_t time1, clock_t time0) { return (double) (time1 - time0) / CLOCKS_PER_SEC; } int main() { int i; clock_t time1, time2; double diff; /* Warmup. The atan2 case catches GCC's atan folding (which would * optimise the ``4 * atan(1) - M_PI'' to a no-op), if -fno-builtin * is not used. */ TESTWITH(4 * atan(1)) TESTWITH(4 * atan2(1, 1)) #if defined(__GNUC__) && (defined(__i386__) || defined(__amd64__)) extern double fldpi(); TESTWITH(fldpi()) #endif /* Actual tests start here. */ TESTWITH(atan2(0, -1)) TESTWITH(acos(-1)) TESTWITH(2 * asin(1)) TESTWITH(4 * atan2(1, 1)) TESTWITH(4 * atan(1)) return 0; } And the inline assembly stuff (fldpi.c) that will only work for x86 and x64 systems: double fldpi() { double pi; asm("fldpi" : "=t" (pi)); return pi; } And a build script that builds all the configurations I'm testing (build.sh): #!/bin/sh gcc -O3 -Wall -c -m32 -o fldpi-32.o fldpi.c gcc -O3 -Wall -c -m64 -o fldpi-64.o fldpi.c gcc -O3 -Wall -ffast-math -m32 -o pitimes1-32 pitimes.c fldpi-32.o gcc -O3 -Wall -m32 -o pitimes2-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -fno-builtin -m32 -o pitimes3-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -ffast-math -m64 -o pitimes1-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -m64 -o pitimes2-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -fno-builtin -m64 -o pitimes3-64 pitimes.c fldpi-64.o -lm Apart from testing between various compiler flags (I've compared 32-bit against 64-bit too because the optimizations are different), I've also tried switching the order of the tests around. But still, the atan2(0, -1) version still comes out on top every time. A: Here's a general description of a technique for calculating pi that I learnt in high school. I only share this because I think it is simple enough that anyone can remember it, indefinitely, plus it teaches you the concept of "Monte-Carlo" methods -- which are statistical methods of arriving at answers that don't immediately appear to be deducible through random processes. Draw a square, and inscribe a quadrant (one quarter of a semi-circle) inside that square (a quadrant with radius equal to the side of the square, so it fills as much of the square as possible) Now throw a dart at the square, and record where it lands -- that is, choose a random point anywhere inside the square. Of course, it landed inside the square, but is it inside the semi-circle? Record this fact. 
Repeat this process many times -- and you will find there is a ratio of the number of points inside the semi-circle versus the total number thrown, call this ratio x. Since the area of the square is r times r, you can deduce that the area of the semi circle is x times r times r (that is, x times r squared). Hence x times 4 will give you pi. This is not a quick method to use. But it's a nice example of a Monte Carlo method. And if you look around, you may find that many problems otherwise outside your computational skills can be solved by such methods. A: In the interests of completeness, a C++ template version, which, for an optimised build, will compute an approximation of PI at compile time, and will inline to a single value. #include <iostream> template<int I> struct sign { enum {value = (I % 2) == 0 ? 1 : -1}; }; template<int I, int J> struct pi_calc { inline static double value () { return (pi_calc<I-1, J>::value () + pi_calc<I-1, J+1>::value ()) / 2.0; } }; template<int J> struct pi_calc<0, J> { inline static double value () { return (sign<J>::value * 4.0) / (2.0 * J + 1.0) + pi_calc<0, J-1>::value (); } }; template<> struct pi_calc<0, 0> { inline static double value () { return 4.0; } }; template<int I> struct pi { inline static double value () { return pi_calc<I, I>::value (); } }; int main () { std::cout.precision (12); const double pi_value = pi<10>::value (); std::cout << "pi ~ " << pi_value << std::endl; return 0; } Note for I > 10, optimised builds can be slow, likewise for non-optimised runs. For 12 iterations I believe there are around 80k calls to value() (in the absence of memoisation). A: There's actually a whole book dedicated (amongst other things) to fast methods for the computation of \pi: 'Pi and the AGM', by Jonathan and Peter Borwein (available on Amazon). I studied the AGM and related algorithms quite a bit: it's quite interesting (though sometimes non-trivial). Note that to implement most modern algorithms to compute \pi, you will need a multiprecision arithmetic library (GMP is quite a good choice, though it's been a while since I last used it). The time-complexity of the best algorithms is in O(M(n)log(n)), where M(n) is the time-complexity for the multiplication of two n-bit integers (M(n)=O(n log(n) log(log(n))) using FFT-based algorithms, which are usually needed when computing digits of \pi, and such an algorithm is implemented in GMP). Note that even though the mathematics behind the algorithms might not be trivial, the algorithms themselves are usually a few lines of pseudo-code, and their implementation is usually very straightforward (if you chose not to write your own multiprecision arithmetic :-) ). A: The following answers precisely how to do this in the fastest possible way -- with the least computing effort. Even if you don't like the answer, you have to admit that it is indeed the fastest way to get the value of PI. The FASTEST way to get the value of Pi is: * *chose your favourite programming language *load its Math library *and find that Pi is already defined there -- ready for use! In case you don't have a Math library at hand.. The SECOND FASTEST way (more universal solution) is: look up Pi on the Internet, e.g. here: http://www.eveandersson.com/pi/digits/1000000 (1 million digits .. what's your floating point precision? 
) or here: http://3.141592653589793238462643383279502884197169399375105820974944592.com/ or here: http://en.wikipedia.org/wiki/Pi It's really fast to find the digits you need for whatever precision arithmetic you would like to use, and by defining a constant, you can make sure that you don't waste precious CPU time. Not only is this a partly humorous answer, but in reality, if anybody would go ahead and compute the value of Pi in a real application .. that would be a pretty big waste of CPU time, wouldn't it? At least I don't see a real application for trying to re-compute this. Also consider that NASA only uses 15 digits of Pi for calculating interplanetary travel: * *TL;DR: https://twitter.com/Rainmaker1973/status/1463477499434835968 *JPL Explanation: https://www.jpl.nasa.gov/edu/news/2016/3/16/how-many-decimals-of-pi-do-we-really-need/ Dear Moderator: please note that the OP asked: "Fastest Way to get the value of PI" A: The BBP formula allows you to compute the nth digit - in base 2 (or 16) - without having to even bother with the previous n-1 digits first :) A: Instead of defining pi as a constant, I always use acos(-1). A: This is a "classic" method, very easy to implement. This implementation in python (not the fastest language) does it: from math import pi from time import time precision = 10**6 # higher value -> higher precision # lower value -> higher speed t = time() calc = 0 for k in xrange(0, precision): calc += ((-1)**k) / (2*k+1.) calc *= 4. # this is just a little optimization t = time()-t print "Calculated: %.40f" % calc print "Constant pi: %.40f" % pi print "Difference: %.40f" % abs(calc-pi) print "Time elapsed: %s" % repr(t) You can find more information here. Anyway, the fastest way to get a precise as-much-as-you-want value of pi in python is: from gmpy import pi print pi(3000) # the rule is the same as # the precision on the previous code Here is the piece of source for the gmpy pi method, I don't think the code is as useful as the comment in this case: static char doc_pi[]="\ pi(n): returns pi with n bits of precision in an mpf object\n\ "; /* This function was originally from netlib, package bmp, by * Richard P. Brent. Paulo Cesar Pereira de Andrade converted * it to C and used it in his LISP interpreter. * * Original comments: * * sets mp pi = 3.14159... to the available precision. * uses the gauss-legendre algorithm. * this method requires time o(ln(t)m(t)), so it is slower * than mppi if m(t) = o(t**2), but would be faster for * large t if a faster multiplication algorithm were used * (see comments in mpmul). * for a description of the method, see - multiple-precision * zero-finding and the complexity of elementary function * evaluation (by r. p. brent), in analytic computational * complexity (edited by j. f. traub), academic press, 1976, 151-176. * rounding options not implemented, no guard digits used. 
*/
static PyObject *
Pygmpy_pi(PyObject *self, PyObject *args)
{
    PympfObject *pi;
    int precision;
    mpf_t r_i2, r_i3, r_i4;
    mpf_t ix;

    ONE_ARG("pi", "i", &precision);
    if(!(pi = Pympf_new(precision))) {
        return NULL;
    }

    mpf_set_si(pi->f, 1);

    mpf_init(ix);
    mpf_set_ui(ix, 1);

    mpf_init2(r_i2, precision);

    mpf_init2(r_i3, precision);
    mpf_set_d(r_i3, 0.25);

    mpf_init2(r_i4, precision);
    mpf_set_d(r_i4, 0.5);
    mpf_sqrt(r_i4, r_i4);

    for (;;) {
        mpf_set(r_i2, pi->f);
        mpf_add(pi->f, pi->f, r_i4);
        mpf_div_ui(pi->f, pi->f, 2);
        mpf_mul(r_i4, r_i2, r_i4);
        mpf_sub(r_i2, pi->f, r_i2);
        mpf_mul(r_i2, r_i2, r_i2);
        mpf_mul(r_i2, r_i2, ix);
        mpf_sub(r_i3, r_i3, r_i2);
        mpf_sqrt(r_i4, r_i4);
        mpf_mul_ui(ix, ix, 2);
        /* Check for convergence */
        if (!(mpf_cmp_si(r_i2, 0) &&
              mpf_get_prec(r_i2) >= (unsigned)precision)) {
            mpf_mul(pi->f, pi->f, r_i4);
            mpf_div(pi->f, pi->f, r_i3);
            break;
        }
    }

    mpf_clear(ix);
    mpf_clear(r_i2);
    mpf_clear(r_i3);
    mpf_clear(r_i4);
    return (PyObject*)pi;
}
EDIT: I had some problems with cut and paste and indentation, you can find the source here.
A: The Monte Carlo method, as mentioned, applies some great concepts but it is, clearly, not the fastest, not by a long shot, not by any reasonable measure. Also, it all depends on what kind of accuracy you are looking for. The fastest π I know of is the one with the digits hard coded. Looking at Pi and Pi[PDF], there are a lot of formulae.
Here is a method that converges quickly — about 14 digits per iteration. PiFast, the current fastest application, uses this formula with the FFT. I'll just write the formula, since the code is straightforward.
This formula was almost found by Ramanujan and discovered by Chudnovsky. It is actually how he calculated several billion digits of the number — so it isn't a method to disregard. The formula will overflow quickly and, since we are dividing factorials, it would be advantageous then to delay such calculations to remove terms.
(The Chudnovsky series and the definitions of its terms appeared as images in the original post and are omitted here.)
Below is the Brent–Salamin algorithm. Wikipedia mentions that when a and b are "close enough" then (a + b)² / 4t will be an approximation of π. I'm not sure what "close enough" means, but from my tests, one iteration got 2 digits, two got 7, and three had 15, of course this is with doubles, so it might have an error based on its representation and the true calculation could be more accurate.
let pi_2 iters =
    let rec loop_ a b t p i =
        if i = 0 then a,b,t,p
        else
            let a_n = (a +. b) /. 2.0
            and b_n = sqrt (a*.b)
            and p_n = 2.0 *. p in
            let t_n = t -. (p *. (a -. a_n) *. (a -. a_n)) in
            loop_ a_n b_n t_n p_n (i - 1)
    in
    let a,b,t,p = loop_ (1.0) (1.0 /. (sqrt 2.0)) (1.0/.4.0) (1.0) iters in
    (a +. b) *. (a +. b) /. (4.0 *. t)
Lastly, how about some pi golf (800 digits)? 160 characters!
int a=10000,b,c=2800,d,e,f[2801],g;main(){for(;b-c;)f[b++]=a/5;for(;d=0,g=c*2;c-=14,printf("%.4d",e+d/a),e=d%a)for(b=c;d+=f[b]*a,f[b]=d%--g,d/=g--,--b;d*=b);} A: If by fastest you mean fastest to type in the code, here's the golfscript solution: ;''6666,-2%{2+.2/@*\/10.3??2*+}*`1000<~\; A: Basically the C version of paperclip optimizer's answer, and much more simpilified: #include <stdio.h> #include <math.h> double calc_PI(int K) { static const int A = 545140134; static const int B = 13591409; static const int D = 640320; const double ID3 = 1.0 / ((double) D * (double) D * (double) D); double sum = 0.0; double b = sqrt(ID3); long long int p = 1; long long int a = B; sum += (double) p * (double) a * b; for (int k = 1; k < K; ++k) { a += A; b *= ID3; p *= (6 * k) * (6 * k - 1) * (6 * k - 2) * (6 * k - 3) * (6 * k - 4) * (6 * k - 5); p /= (3 * k) * (3 * k - 1) * (3 * k - 2) * k * k * k; p = -p; sum += (double) p * (double) a * b; } return 1.0 / (12 * sum); } int main() { for (int k = 1; k <= 5; ++k) { printf("k = %i, PI = %.16f\n", k, calc_PI(k)); } } But for more simplification, this algorithm takes Chudnovsky's formula, which I can fully simplify if you don't really understand the code. Summary: We will get a number from 1 to 5 and add it in to a function we will use to get PI. Then 3 numbers are given to you: 545140134 (A), 13591409 (B), 640320 (D). Then we will use D as a double multiplying itself 3 times into another double (ID3). We will then take the square root of ID3 into another double (b) and assign 2 numbers: 1 (p), the value of B (a). Take note that C is case-insensitive. Then a double (sum) will be created by multiplying the value's of p, a and b, all in doubles. Then a loop up until the number given for the function will start and add up A's value to a, b's value gets multiplied by ID3, p's value will be multiplied by multiple values that I hope you can understand and also gets divided by multiple values as well. The sum will add up by p, a and b once again and the loop will repeat until the value of the loop's number is greater or equal to 5. Later, the sum is multiplied by 12 and returned by the function giving us the result of PI. Okay, that was long, but I guess you will understand it... A: Pi is exactly 3! [Prof. Frink (Simpsons)] Joke, but here's one in C# (.NET-Framework required). 
using System; using System.Text; class Program { static void Main(string[] args) { int Digits = 100; BigNumber x = new BigNumber(Digits); BigNumber y = new BigNumber(Digits); x.ArcTan(16, 5); y.ArcTan(4, 239); x.Subtract(y); string pi = x.ToString(); Console.WriteLine(pi); } } public class BigNumber { private UInt32[] number; private int size; private int maxDigits; public BigNumber(int maxDigits) { this.maxDigits = maxDigits; this.size = (int)Math.Ceiling((float)maxDigits * 0.104) + 2; number = new UInt32[size]; } public BigNumber(int maxDigits, UInt32 intPart) : this(maxDigits) { number[0] = intPart; for (int i = 1; i < size; i++) { number[i] = 0; } } private void VerifySameSize(BigNumber value) { if (Object.ReferenceEquals(this, value)) throw new Exception("BigNumbers cannot operate on themselves"); if (value.size != this.size) throw new Exception("BigNumbers must have the same size"); } public void Add(BigNumber value) { VerifySameSize(value); int index = size - 1; while (index >= 0 && value.number[index] == 0) index--; UInt32 carry = 0; while (index >= 0) { UInt64 result = (UInt64)number[index] + value.number[index] + carry; number[index] = (UInt32)result; if (result >= 0x100000000U) carry = 1; else carry = 0; index--; } } public void Subtract(BigNumber value) { VerifySameSize(value); int index = size - 1; while (index >= 0 && value.number[index] == 0) index--; UInt32 borrow = 0; while (index >= 0) { UInt64 result = 0x100000000U + (UInt64)number[index] - value.number[index] - borrow; number[index] = (UInt32)result; if (result >= 0x100000000U) borrow = 0; else borrow = 1; index--; } } public void Multiply(UInt32 value) { int index = size - 1; while (index >= 0 && number[index] == 0) index--; UInt32 carry = 0; while (index >= 0) { UInt64 result = (UInt64)number[index] * value + carry; number[index] = (UInt32)result; carry = (UInt32)(result >> 32); index--; } } public void Divide(UInt32 value) { int index = 0; while (index < size && number[index] == 0) index++; UInt32 carry = 0; while (index < size) { UInt64 result = number[index] + ((UInt64)carry << 32); number[index] = (UInt32)(result / (UInt64)value); carry = (UInt32)(result % (UInt64)value); index++; } } public void Assign(BigNumber value) { VerifySameSize(value); for (int i = 0; i < size; i++) { number[i] = value.number[i]; } } public override string ToString() { BigNumber temp = new BigNumber(maxDigits); temp.Assign(this); StringBuilder sb = new StringBuilder(); sb.Append(temp.number[0]); sb.Append(System.Globalization.CultureInfo.CurrentCulture.NumberFormat.CurrencyDecimalSeparator); int digitCount = 0; while (digitCount < maxDigits) { temp.number[0] = 0; temp.Multiply(100000); sb.AppendFormat("{0:D5}", temp.number[0]); digitCount += 5; } return sb.ToString(); } public bool IsZero() { foreach (UInt32 item in number) { if (item != 0) return false; } return true; } public void ArcTan(UInt32 multiplicand, UInt32 reciprocal) { BigNumber X = new BigNumber(maxDigits, multiplicand); X.Divide(reciprocal); reciprocal *= reciprocal; this.Assign(X); BigNumber term = new BigNumber(maxDigits); UInt32 divisor = 1; bool subtractTerm = true; while (true) { X.Divide(reciprocal); term.Assign(X); divisor += 2; term.Divide(divisor); if (term.IsZero()) break; if (subtractTerm) this.Subtract(term); else this.Add(term); subtractTerm = !subtractTerm; } } } A: If you are willing to use an approximation, 355 / 113 is good for 6 decimal digits, and has the added advantage of being usable with integer expressions. 
That's not as important these days, as "floating point math co-processor" ceased to have any meaning, but it was quite important once. A: Use the Machin-like formula 176 * arctan (1/57) + 28 * arctan (1/239) - 48 * arctan (1/682) + 96 * arctan(1/12943) [; \left( 176 \arctan \frac{1}{57} + 28 \arctan \frac{1}{239} - 48 \arctan \frac{1}{682} + 96 \arctan \frac{1}{12943}\right) ;], for you TeX the World people. Implemented in Scheme, for instance: (+ (- (+ (* 176 (atan (/ 1 57))) (* 28 (atan (/ 1 239)))) (* 48 (atan (/ 1 682)))) (* 96 (atan (/ 1 12943)))) A: With doubles: 4.0 * (4.0 * Math.Atan(0.2) - Math.Atan(1.0 / 239.0)) This will be accurate up to 14 decimal places, enough to fill a double (the inaccuracy is probably because the rest of the decimals in the arc tangents are truncated). Also Seth, it's 3.141592653589793238463, not 64. A: Calculate PI at compile-time with D. ( Copied from DSource.org ) /** Calculate pi at compile time * * Compile with dmd -c pi.d */ module calcpi; import meta.math; import meta.conv; /** real evaluateSeries!(real x, real metafunction!(real y, int n) term) * * Evaluate a power series at compile time. * * Given a metafunction of the form * real term!(real y, int n), * which gives the nth term of a convergent series at the point y * (where the first term is n==1), and a real number x, * this metafunction calculates the infinite sum at the point x * by adding terms until the sum doesn't change any more. */ template evaluateSeries(real x, alias term, int n=1, real sumsofar=0.0) { static if (n>1 && sumsofar == sumsofar + term!(x, n+1)) { const real evaluateSeries = sumsofar; } else { const real evaluateSeries = evaluateSeries!(x, term, n+1, sumsofar + term!(x, n)); } } /*** Calculate atan(x) at compile time. * * Uses the Maclaurin formula * atan(z) = z - z^3/3 + Z^5/5 - Z^7/7 + ... */ template atan(real z) { const real atan = evaluateSeries!(z, atanTerm); } template atanTerm(real x, int n) { const real atanTerm = (n & 1 ? 1 : -1) * pow!(x, 2*n-1)/(2*n-1); } /// Machin's formula for pi /// pi/4 = 4 atan(1/5) - atan(1/239). pragma(msg, "PI = " ~ fcvt!(4.0 * (4*atan!(1/5.0) - atan!(1/239.0))) ); A: This version (in Delphi) is nothing special, but it is at least faster than the version Nick Hodge posted on his blog :). On my machine, it takes about 16 seconds to do a billion iterations, giving a value of 3.1415926525879 (the accurate part is in bold). program calcpi; {$APPTYPE CONSOLE} uses SysUtils; var start, finish: TDateTime; function CalculatePi(iterations: integer): double; var numerator, denominator, i: integer; sum: double; begin { PI may be approximated with this formula: 4 * (1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 .......) //} numerator := 1; denominator := 1; sum := 0; for i := 1 to iterations do begin sum := sum + (numerator/denominator); denominator := denominator + 2; numerator := -numerator; end; Result := 4 * sum; end; begin try start := Now; WriteLn(FloatToStr(CalculatePi(StrToInt(ParamStr(1))))); finish := Now; WriteLn('Seconds:' + FormatDateTime('hh:mm:ss.zz',finish-start)); except on E:Exception do Writeln(E.Classname, ': ', E.Message); end; end. A: Back in the old days, with small word sizes and slow or non-existent floating-point operations, we used to do stuff like this: /* Return approximation of n * PI; n is integer */ #define pi_times(n) (((n) * 22) / 7) For applications that don't require a lot of precision (video games, for example), this is very fast and is accurate enough. 
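To put numbers on those fractions, here is a small C# sketch comparing the two classic integer approximations; 355/113 is the one mentioned a little earlier in this thread, and the scaled helpers keep everything in integer arithmetic by multiplying before dividing:
using System;

class PiFractions
{
    // Multiply first, divide last, so the whole computation stays integral.
    static long PiTimes22Over7(long n)    => n * 22 / 7;
    static long PiTimes355Over113(long n) => n * 355 / 113;

    static void Main()
    {
        Console.WriteLine("22/7    = {0}", 22.0 / 7.0);    // ~3.142857, 2 decimals
        Console.WriteLine("355/113 = {0}", 355.0 / 113.0); // ~3.1415929, 6 decimals
        Console.WriteLine("Math.PI = {0}", Math.PI);

        // Integer-only use, e.g. pi scaled by 10^9:
        const long scale = 1000000000;
        Console.WriteLine("scaled 22/7    : {0}", PiTimes22Over7(scale));
        Console.WriteLine("scaled 355/113 : {0}", PiTimes355Over113(scale));
    }
}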
A: If you want to compute an approximation of the value of π (for some reason), you should try a binary extraction algorithm. Bellard's improvement of BBP gives does PI in O(N^2). If you want to obtain an approximation of the value of π to do calculations, then: PI = 3.141592654 Granted, that's only an approximation, and not entirely accurate. It's off by a little more than 0.00000000004102. (four ten-trillionths, about 4/10,000,000,000). If you want to do math with π, then get yourself a pencil and paper or a computer algebra package, and use π's exact value, π. If you really want a formula, this one is fun: π = -i ln(-1) A: I really like this program, because it approximates π by looking at its own area. IOCCC 1988 : westley.c #define _ -F<00||--F-OO--; int F=00,OO=00;main(){F_OO();printf("%1.3f\n",4.*-F/OO/OO);}F_OO() { _-_-_-_ _-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_-_-_-_-_ _-_-_-_-_-_-_-_ _-_-_-_ } A: Calculating π from circle area :-) <input id="range" type="range" min="10" max="960" value="10" step="50" oninput="calcPi()"> <br> <div id="cont"></div> <script> function generateCircle(width) { var c = width/2; var delta = 1.0; var str = ""; var xCount = 0; for (var x=0; x <= width; x++) { for (var y = 0; y <= width; y++) { var d = Math.sqrt((x-c)*(x-c) + (y-c)*(y-c)); if (d > (width-1)/2) { str += '.'; } else { xCount++; str += 'o'; } str += "&nbsp;" } str += "\n"; } var pi = (xCount * 4) / (width * width); return [str, pi]; } function calcPi() { var e = document.getElementById("cont"); var width = document.getElementById("range").value; e.innerHTML = "<h4>Generating circle...</h4>"; setTimeout(function() { var circ = generateCircle(width); e.innerHTML = "<pre>" + "π = " + circ[1].toFixed(2) + "\n" + circ[0] +"</pre>"; }, 200); } calcPi(); </script> A: The Chudnovsky algorithm is pretty fast if you don't mind performing a square root and a couple inverses. It converges to double precision in just 2 iterations. /* Chudnovsky algorithm for computing PI */ #include <iostream> #include <cmath> using namespace std; double calc_PI(int K=2) { static const int A = 545140134; static const int B = 13591409; static const int D = 640320; const double ID3 = 1./ (double(D)*double(D)*double(D)); double sum = 0.; double b = sqrt(ID3); long long int p = 1; long long int a = B; sum += double(p) * double(a)* b; // 2 iterations enough for double convergence for (int k=1; k<K; ++k) { // A*k + B a += A; // update denominator b *= ID3; // p = (-1)^k 6k! / 3k! k!^3 p *= (6*k)*(6*k-1)*(6*k-2)*(6*k-3)*(6*k-4)*(6*k-5); p /= (3*k)*(3*k-1)*(3*k-2) * k*k*k; p = -p; sum += double(p) * double(a)* b; } return 1./(12*sum); } int main() { cout.precision(16); cout.setf(ios::fixed); for (int k=1; k<=5; ++k) cout << "k = " << k << " PI = " << calc_PI(k) << endl; return 0; } Results: k = 1 PI = 3.1415926535897341 k = 2 PI = 3.1415926535897931 k = 3 PI = 3.1415926535897931 k = 4 PI = 3.1415926535897931 k = 5 PI = 3.1415926535897931 A: I think the value of pi is the ratio between the circumference and radius of the circle. 
It can be simply achieved by a regular math calculation. A: Better Approach To get the value of standard constants like pi, we should first go with the built-in methods available in the language that you are using. They will return a value in the fastest and best way. I am using Python to compare the fastest ways to get the value of pi. * The pi variable of the math library. The math library stores the variable pi as a constant. math_pi.py import math print(math.pi) Run the script with the Linux time utility /usr/bin/time -v python math_pi.py Output: Command being timed: "python math_pi.py" User time (seconds): 0.01 System time (seconds): 0.01 Percent of CPU this job got: 91% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03 * Use the arccos method of math acos_pi.py import math print(math.acos(-1)) Run the script with the Linux time utility /usr/bin/time -v python acos_pi.py Output: Command being timed: "python acos_pi.py" User time (seconds): 0.02 System time (seconds): 0.01 Percent of CPU this job got: 94% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03 * Use the BBP formula bbp_pi.py from decimal import Decimal, getcontext getcontext().prec=100 print(sum(1/Decimal(16)**k * (Decimal(4)/(8*k+1) - Decimal(2)/(8*k+4) - Decimal(1)/(8*k+5) - Decimal(1)/(8*k+6)) for k in range(100))) Run the script with the Linux time utility /usr/bin/time -v python bbp_pi.py Output: Command being timed: "python c.py" User time (seconds): 0.05 System time (seconds): 0.01 Percent of CPU this job got: 98% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.06 So the best way is to use the built-in methods provided by the language, because they are the fastest way to get the value. In Python, use math.pi A: Faster than GMPY2 and MPmath Built-ins: A Billion in 45 minutes: I tried several ways: Machin, AGM and the Chudnovsky brothers. Chudnovsky with Binary Split was the fastest: My GitHub: https://github.com/Overboard-code/Pi-Pourri My Binary Split Chudnovsky is about twice the speed of the builtin gmpy2.const_pi(). MPmath.mp.pi() took 50 minutes for a billion, so it was almost as fast as the Chudnovsky. I would greatly appreciate performance tips as well. I am not sure my code is perfect. It is 100% accurate (all formulas agree to 100 million) but maybe it could be faster? I tried gmpy2.const_pi() to 100 million digits and it took 300 seconds vs. 150 seconds for the Chudnovsky on the same machine. pi.txt and pi2.txt were the same. I got to a billion digits on my old i7 16GB laptop in less than an hour.
Here is a snippet of the fastest of the 12 methods I tried: class PiChudnovsky: """Version of Chudnovsky Bros using Binary Splitting So far this is the winner for fastest time to a million digits on my older intel i7 """ A = mpz(13591409) B = mpz(545140134) C = mpz(640320) D = mpz(426880) E = mpz(10005) C3_24 = pow(C, mpz(3)) // mpz(24) #DIGITS_PER_TERM = math.log(53360 ** 3) / math.log(10) #=> 14.181647462725476 DIGITS_PER_TERM = 14.181647462725476 MMILL = mpz(1000000) def __init__(self,ndigits): """ Initialization :param int ndigits: digits of PI computation """ self.ndigits = ndigits self.n = mpz(self.ndigits // self.DIGITS_PER_TERM + 1) self.prec = mpz((self.ndigits + 1) * LOG2_10) self.one_sq = pow(mpz(10),mpz(2 * ndigits)) self.sqrt_c = isqrt(self.E * self.one_sq) self.iters = mpz(0) self.start_time = 0 def compute(self): """ Computation """ try: self.start_time = time.time() logging.debug("Starting {} formula to {:,} decimal places" .format(name,ndigits) ) __, q, t = self.__bs(mpz(0), self.n) # p is just for recursion pi = (q * self.D * self.sqrt_c) // t logging.debug('{} calulation Done! {:,} iterations and {:.2f} seconds.' .format( name, int(self.iters),time.time() - self.start_time)) get_context().precision= int((self.ndigits+10) * LOG2_10) pi_s = pi.digits() # digits() gmpy2 creates a string pi_o = pi_s[:1] + "." + pi_s[1:] return pi_o,int(self.iters),time.time() - self.start_time except Exception as e: print (e.message, e.args) raise def __bs(self, a, b): """ PQT computation by BSA(= Binary Splitting Algorithm) :param int a: positive integer :param int b: positive integer :return list [int p_ab, int q_ab, int t_ab] """ try: self.iters += mpz(1) if self.iters % self.MMILL == mpz(0): logging.debug('Chudnovsky ... {:,} iterations and {:.2f} seconds.' .format( int(self.iters),time.time() - self.start_time)) if a + mpz(1) == b: if a == mpz(0): p_ab = q_ab = mpz(1) else: p_ab = mpz((mpz(6) * a - mpz(5)) * (mpz(2) * a - mpz(1)) * (mpz(6) * a - mpz(1))) q_ab = pow(a,mpz(3)) * self.C3_24 t_ab = p_ab * (self.A + self.B * a) if a & 1: t_ab *= mpz(-1) else: m = (a + b) // mpz(2) p_am, q_am, t_am = self.__bs(a, m) p_mb, q_mb, t_mb = self.__bs(m, b) p_ab = p_am * p_mb q_ab = q_am * q_mb t_ab = q_mb * t_am + p_am * t_mb return [p_ab, q_ab, t_ab] except Exception as e: print (e.message, e.args) raise Here is the output of 1,000,000,000 digits less than 45 minutes: python pi-pourri.py -v -d 1,000,000,000 -a 10 [INFO] 2022-10-03 09:22:51,860 <module>: MainProcess Computing π to 1,000,000,000 digits. [DEBUG] 2022-10-03 09:25:00,543 compute: MainProcess Starting Chudnovsky brothers 1988 π = (Q(0, N) / 12T(0, N) + 12AQ(0, N))**(C**(3/2)) formula to 1,000,000,000 decimal places [DEBUG] 2022-10-03 09:25:04,995 __bs: MainProcess Chudnovsky ... 1,000,000 iterations and 4.45 seconds. [DEBUG] 2022-10-03 09:25:10,836 __bs: MainProcess Chudnovsky ... 2,000,000 iterations and 10.29 seconds. [DEBUG] 2022-10-03 09:25:18,227 __bs: MainProcess Chudnovsky ... 3,000,000 iterations and 17.68 seconds. [DEBUG] 2022-10-03 09:25:24,512 __bs: MainProcess Chudnovsky ... 4,000,000 iterations and 23.97 seconds. [DEBUG] 2022-10-03 09:25:35,670 __bs: MainProcess Chudnovsky ... 5,000,000 iterations and 35.13 seconds. [DEBUG] 2022-10-03 09:25:41,376 __bs: MainProcess Chudnovsky ... 6,000,000 iterations and 40.83 seconds. [DEBUG] 2022-10-03 09:25:49,238 __bs: MainProcess Chudnovsky ... 7,000,000 iterations and 48.69 seconds. [DEBUG] 2022-10-03 09:25:55,646 __bs: MainProcess Chudnovsky ... 
8,000,000 iterations and 55.10 seconds. [DEBUG] 2022-10-03 09:26:15,043 __bs: MainProcess Chudnovsky ... 9,000,000 iterations and 74.50 seconds. [DEBUG] 2022-10-03 09:26:21,437 __bs: MainProcess Chudnovsky ... 10,000,000 iterations and 80.89 seconds. [DEBUG] 2022-10-03 09:26:26,587 __bs: MainProcess Chudnovsky ... 11,000,000 iterations and 86.04 seconds. [DEBUG] 2022-10-03 09:26:34,777 __bs: MainProcess Chudnovsky ... 12,000,000 iterations and 94.23 seconds. [DEBUG] 2022-10-03 09:26:41,231 __bs: MainProcess Chudnovsky ... 13,000,000 iterations and 100.69 seconds. [DEBUG] 2022-10-03 09:26:52,972 __bs: MainProcess Chudnovsky ... 14,000,000 iterations and 112.43 seconds. [DEBUG] 2022-10-03 09:26:59,517 __bs: MainProcess Chudnovsky ... 15,000,000 iterations and 118.97 seconds. [DEBUG] 2022-10-03 09:27:07,932 __bs: MainProcess Chudnovsky ... 16,000,000 iterations and 127.39 seconds. [DEBUG] 2022-10-03 09:27:14,036 __bs: MainProcess Chudnovsky ... 17,000,000 iterations and 133.49 seconds. [DEBUG] 2022-10-03 09:27:51,629 __bs: MainProcess Chudnovsky ... 18,000,000 iterations and 171.09 seconds. [DEBUG] 2022-10-03 09:27:58,176 __bs: MainProcess Chudnovsky ... 19,000,000 iterations and 177.63 seconds. [DEBUG] 2022-10-03 09:28:06,704 __bs: MainProcess Chudnovsky ... 20,000,000 iterations and 186.16 seconds. [DEBUG] 2022-10-03 09:28:13,376 __bs: MainProcess Chudnovsky ... 21,000,000 iterations and 192.83 seconds. [DEBUG] 2022-10-03 09:28:18,737 __bs: MainProcess Chudnovsky ... 22,000,000 iterations and 198.19 seconds. [DEBUG] 2022-10-03 09:28:31,095 __bs: MainProcess Chudnovsky ... 23,000,000 iterations and 210.55 seconds. [DEBUG] 2022-10-03 09:28:37,789 __bs: MainProcess Chudnovsky ... 24,000,000 iterations and 217.25 seconds. [DEBUG] 2022-10-03 09:28:46,171 __bs: MainProcess Chudnovsky ... 25,000,000 iterations and 225.63 seconds. [DEBUG] 2022-10-03 09:28:52,933 __bs: MainProcess Chudnovsky ... 26,000,000 iterations and 232.39 seconds. [DEBUG] 2022-10-03 09:29:13,524 __bs: MainProcess Chudnovsky ... 27,000,000 iterations and 252.98 seconds. [DEBUG] 2022-10-03 09:29:19,676 __bs: MainProcess Chudnovsky ... 28,000,000 iterations and 259.13 seconds. [DEBUG] 2022-10-03 09:29:28,196 __bs: MainProcess Chudnovsky ... 29,000,000 iterations and 267.65 seconds. [DEBUG] 2022-10-03 09:29:34,720 __bs: MainProcess Chudnovsky ... 30,000,000 iterations and 274.18 seconds. [DEBUG] 2022-10-03 09:29:47,075 __bs: MainProcess Chudnovsky ... 31,000,000 iterations and 286.53 seconds. [DEBUG] 2022-10-03 09:29:53,746 __bs: MainProcess Chudnovsky ... 32,000,000 iterations and 293.20 seconds. [DEBUG] 2022-10-03 09:29:59,099 __bs: MainProcess Chudnovsky ... 33,000,000 iterations and 298.56 seconds. [DEBUG] 2022-10-03 09:30:07,511 __bs: MainProcess Chudnovsky ... 34,000,000 iterations and 306.97 seconds. [DEBUG] 2022-10-03 09:30:14,279 __bs: MainProcess Chudnovsky ... 35,000,000 iterations and 313.74 seconds. [DEBUG] 2022-10-03 09:31:31,710 __bs: MainProcess Chudnovsky ... 36,000,000 iterations and 391.17 seconds. [DEBUG] 2022-10-03 09:31:38,454 __bs: MainProcess Chudnovsky ... 37,000,000 iterations and 397.91 seconds. [DEBUG] 2022-10-03 09:31:46,437 __bs: MainProcess Chudnovsky ... 38,000,000 iterations and 405.89 seconds. [DEBUG] 2022-10-03 09:31:53,285 __bs: MainProcess Chudnovsky ... 39,000,000 iterations and 412.74 seconds. [DEBUG] 2022-10-03 09:32:05,602 __bs: MainProcess Chudnovsky ... 40,000,000 iterations and 425.06 seconds. [DEBUG] 2022-10-03 09:32:12,220 __bs: MainProcess Chudnovsky ... 
41,000,000 iterations and 431.68 seconds. [DEBUG] 2022-10-03 09:32:20,708 __bs: MainProcess Chudnovsky ... 42,000,000 iterations and 440.17 seconds. [DEBUG] 2022-10-03 09:32:27,552 __bs: MainProcess Chudnovsky ... 43,000,000 iterations and 447.01 seconds. [DEBUG] 2022-10-03 09:32:32,986 __bs: MainProcess Chudnovsky ... 44,000,000 iterations and 452.44 seconds. [DEBUG] 2022-10-03 09:32:53,904 __bs: MainProcess Chudnovsky ... 45,000,000 iterations and 473.36 seconds. [DEBUG] 2022-10-03 09:33:00,832 __bs: MainProcess Chudnovsky ... 46,000,000 iterations and 480.29 seconds. [DEBUG] 2022-10-03 09:33:09,198 __bs: MainProcess Chudnovsky ... 47,000,000 iterations and 488.66 seconds. [DEBUG] 2022-10-03 09:33:16,000 __bs: MainProcess Chudnovsky ... 48,000,000 iterations and 495.46 seconds. [DEBUG] 2022-10-03 09:33:27,921 __bs: MainProcess Chudnovsky ... 49,000,000 iterations and 507.38 seconds. [DEBUG] 2022-10-03 09:33:34,778 __bs: MainProcess Chudnovsky ... 50,000,000 iterations and 514.24 seconds. [DEBUG] 2022-10-03 09:33:43,298 __bs: MainProcess Chudnovsky ... 51,000,000 iterations and 522.76 seconds. [DEBUG] 2022-10-03 09:33:49,959 __bs: MainProcess Chudnovsky ... 52,000,000 iterations and 529.42 seconds. [DEBUG] 2022-10-03 09:34:29,294 __bs: MainProcess Chudnovsky ... 53,000,000 iterations and 568.75 seconds. [DEBUG] 2022-10-03 09:34:36,176 __bs: MainProcess Chudnovsky ... 54,000,000 iterations and 575.63 seconds. [DEBUG] 2022-10-03 09:34:41,576 __bs: MainProcess Chudnovsky ... 55,000,000 iterations and 581.03 seconds. [DEBUG] 2022-10-03 09:34:50,161 __bs: MainProcess Chudnovsky ... 56,000,000 iterations and 589.62 seconds. [DEBUG] 2022-10-03 09:34:56,811 __bs: MainProcess Chudnovsky ... 57,000,000 iterations and 596.27 seconds. [DEBUG] 2022-10-03 09:35:09,382 __bs: MainProcess Chudnovsky ... 58,000,000 iterations and 608.84 seconds. [DEBUG] 2022-10-03 09:35:16,206 __bs: MainProcess Chudnovsky ... 59,000,000 iterations and 615.66 seconds. [DEBUG] 2022-10-03 09:35:24,295 __bs: MainProcess Chudnovsky ... 60,000,000 iterations and 623.75 seconds. [DEBUG] 2022-10-03 09:35:31,095 __bs: MainProcess Chudnovsky ... 61,000,000 iterations and 630.55 seconds. [DEBUG] 2022-10-03 09:35:52,139 __bs: MainProcess Chudnovsky ... 62,000,000 iterations and 651.60 seconds. [DEBUG] 2022-10-03 09:35:58,781 __bs: MainProcess Chudnovsky ... 63,000,000 iterations and 658.24 seconds. [DEBUG] 2022-10-03 09:36:07,399 __bs: MainProcess Chudnovsky ... 64,000,000 iterations and 666.86 seconds. [DEBUG] 2022-10-03 09:36:12,847 __bs: MainProcess Chudnovsky ... 65,000,000 iterations and 672.30 seconds. [DEBUG] 2022-10-03 09:36:19,763 __bs: MainProcess Chudnovsky ... 66,000,000 iterations and 679.22 seconds. [DEBUG] 2022-10-03 09:36:32,351 __bs: MainProcess Chudnovsky ... 67,000,000 iterations and 691.81 seconds. [DEBUG] 2022-10-03 09:36:39,078 __bs: MainProcess Chudnovsky ... 68,000,000 iterations and 698.53 seconds. [DEBUG] 2022-10-03 09:36:47,830 __bs: MainProcess Chudnovsky ... 69,000,000 iterations and 707.29 seconds. [DEBUG] 2022-10-03 09:36:54,701 __bs: MainProcess Chudnovsky ... 70,000,000 iterations and 714.16 seconds. [DEBUG] 2022-10-03 09:39:39,357 __bs: MainProcess Chudnovsky ... 71,000,000 iterations and 878.81 seconds. [DEBUG] 2022-10-03 09:39:46,199 __bs: MainProcess Chudnovsky ... 72,000,000 iterations and 885.66 seconds. [DEBUG] 2022-10-03 09:39:54,956 __bs: MainProcess Chudnovsky ... 73,000,000 iterations and 894.41 seconds. [DEBUG] 2022-10-03 09:40:01,639 __bs: MainProcess Chudnovsky ... 
74,000,000 iterations and 901.10 seconds. [DEBUG] 2022-10-03 09:40:14,219 __bs: MainProcess Chudnovsky ... 75,000,000 iterations and 913.68 seconds. [DEBUG] 2022-10-03 09:40:19,680 __bs: MainProcess Chudnovsky ... 76,000,000 iterations and 919.14 seconds. [DEBUG] 2022-10-03 09:40:26,625 __bs: MainProcess Chudnovsky ... 77,000,000 iterations and 926.08 seconds. [DEBUG] 2022-10-03 09:40:35,212 __bs: MainProcess Chudnovsky ... 78,000,000 iterations and 934.67 seconds. [DEBUG] 2022-10-03 09:40:41,914 __bs: MainProcess Chudnovsky ... 79,000,000 iterations and 941.37 seconds. [DEBUG] 2022-10-03 09:41:03,218 __bs: MainProcess Chudnovsky ... 80,000,000 iterations and 962.68 seconds. [DEBUG] 2022-10-03 09:41:10,213 __bs: MainProcess Chudnovsky ... 81,000,000 iterations and 969.67 seconds. [DEBUG] 2022-10-03 09:41:18,344 __bs: MainProcess Chudnovsky ... 82,000,000 iterations and 977.80 seconds. [DEBUG] 2022-10-03 09:41:25,261 __bs: MainProcess Chudnovsky ... 83,000,000 iterations and 984.72 seconds. [DEBUG] 2022-10-03 09:41:37,663 __bs: MainProcess Chudnovsky ... 84,000,000 iterations and 997.12 seconds. [DEBUG] 2022-10-03 09:41:44,680 __bs: MainProcess Chudnovsky ... 85,000,000 iterations and 1004.14 seconds. [DEBUG] 2022-10-03 09:41:53,411 __bs: MainProcess Chudnovsky ... 86,000,000 iterations and 1012.87 seconds. [DEBUG] 2022-10-03 09:41:58,926 __bs: MainProcess Chudnovsky ... 87,000,000 iterations and 1018.38 seconds. [DEBUG] 2022-10-03 09:42:05,858 __bs: MainProcess Chudnovsky ... 88,000,000 iterations and 1025.32 seconds. [DEBUG] 2022-10-03 09:42:46,163 __bs: MainProcess Chudnovsky ... 89,000,000 iterations and 1065.62 seconds. [DEBUG] 2022-10-03 09:42:53,054 __bs: MainProcess Chudnovsky ... 90,000,000 iterations and 1072.51 seconds. [DEBUG] 2022-10-03 09:43:02,030 __bs: MainProcess Chudnovsky ... 91,000,000 iterations and 1081.49 seconds. [DEBUG] 2022-10-03 09:43:09,192 __bs: MainProcess Chudnovsky ... 92,000,000 iterations and 1088.65 seconds. [DEBUG] 2022-10-03 09:43:21,533 __bs: MainProcess Chudnovsky ... 93,000,000 iterations and 1100.99 seconds. [DEBUG] 2022-10-03 09:43:28,643 __bs: MainProcess Chudnovsky ... 94,000,000 iterations and 1108.10 seconds. [DEBUG] 2022-10-03 09:43:37,372 __bs: MainProcess Chudnovsky ... 95,000,000 iterations and 1116.83 seconds. [DEBUG] 2022-10-03 09:43:44,558 __bs: MainProcess Chudnovsky ... 96,000,000 iterations and 1124.02 seconds. [DEBUG] 2022-10-03 09:44:06,555 __bs: MainProcess Chudnovsky ... 97,000,000 iterations and 1146.01 seconds. [DEBUG] 2022-10-03 09:44:12,220 __bs: MainProcess Chudnovsky ... 98,000,000 iterations and 1151.68 seconds. [DEBUG] 2022-10-03 09:44:19,278 __bs: MainProcess Chudnovsky ... 99,000,000 iterations and 1158.74 seconds. [DEBUG] 2022-10-03 09:44:28,323 __bs: MainProcess Chudnovsky ... 100,000,000 iterations and 1167.78 seconds. [DEBUG] 2022-10-03 09:44:35,211 __bs: MainProcess Chudnovsky ... 101,000,000 iterations and 1174.67 seconds. [DEBUG] 2022-10-03 09:44:48,331 __bs: MainProcess Chudnovsky ... 102,000,000 iterations and 1187.79 seconds. [DEBUG] 2022-10-03 09:44:54,835 __bs: MainProcess Chudnovsky ... 103,000,000 iterations and 1194.29 seconds. [DEBUG] 2022-10-03 09:45:03,869 __bs: MainProcess Chudnovsky ... 104,000,000 iterations and 1203.33 seconds. [DEBUG] 2022-10-03 09:45:10,967 __bs: MainProcess Chudnovsky ... 105,000,000 iterations and 1210.42 seconds. [DEBUG] 2022-10-03 09:46:32,760 __bs: MainProcess Chudnovsky ... 106,000,000 iterations and 1292.22 seconds. 
[DEBUG] 2022-10-03 09:46:39,872 __bs: MainProcess Chudnovsky ... 107,000,000 iterations and 1299.33 seconds. [DEBUG] 2022-10-03 09:46:48,948 __bs: MainProcess Chudnovsky ... 108,000,000 iterations and 1308.41 seconds. [DEBUG] 2022-10-03 09:46:54,611 __bs: MainProcess Chudnovsky ... 109,000,000 iterations and 1314.07 seconds. [DEBUG] 2022-10-03 09:47:01,727 __bs: MainProcess Chudnovsky ... 110,000,000 iterations and 1321.18 seconds. [DEBUG] 2022-10-03 09:47:14,525 __bs: MainProcess Chudnovsky ... 111,000,000 iterations and 1333.98 seconds. [DEBUG] 2022-10-03 09:47:21,682 __bs: MainProcess Chudnovsky ... 112,000,000 iterations and 1341.14 seconds. [DEBUG] 2022-10-03 09:47:30,610 __bs: MainProcess Chudnovsky ... 113,000,000 iterations and 1350.07 seconds. [DEBUG] 2022-10-03 09:47:37,176 __bs: MainProcess Chudnovsky ... 114,000,000 iterations and 1356.63 seconds. [DEBUG] 2022-10-03 09:47:59,642 __bs: MainProcess Chudnovsky ... 115,000,000 iterations and 1379.10 seconds. [DEBUG] 2022-10-03 09:48:06,702 __bs: MainProcess Chudnovsky ... 116,000,000 iterations and 1386.16 seconds. [DEBUG] 2022-10-03 09:48:15,483 __bs: MainProcess Chudnovsky ... 117,000,000 iterations and 1394.94 seconds. [DEBUG] 2022-10-03 09:48:22,537 __bs: MainProcess Chudnovsky ... 118,000,000 iterations and 1401.99 seconds. [DEBUG] 2022-10-03 09:48:35,714 __bs: MainProcess Chudnovsky ... 119,000,000 iterations and 1415.17 seconds. [DEBUG] 2022-10-03 09:48:41,321 __bs: MainProcess Chudnovsky ... 120,000,000 iterations and 1420.78 seconds. [DEBUG] 2022-10-03 09:48:48,408 __bs: MainProcess Chudnovsky ... 121,000,000 iterations and 1427.87 seconds. [DEBUG] 2022-10-03 09:48:57,138 __bs: MainProcess Chudnovsky ... 122,000,000 iterations and 1436.60 seconds. [DEBUG] 2022-10-03 09:49:04,328 __bs: MainProcess Chudnovsky ... 123,000,000 iterations and 1443.79 seconds. [DEBUG] 2022-10-03 09:49:46,274 __bs: MainProcess Chudnovsky ... 124,000,000 iterations and 1485.73 seconds. [DEBUG] 2022-10-03 09:49:52,833 __bs: MainProcess Chudnovsky ... 125,000,000 iterations and 1492.29 seconds. [DEBUG] 2022-10-03 09:50:01,786 __bs: MainProcess Chudnovsky ... 126,000,000 iterations and 1501.24 seconds. [DEBUG] 2022-10-03 09:50:08,975 __bs: MainProcess Chudnovsky ... 127,000,000 iterations and 1508.43 seconds. [DEBUG] 2022-10-03 09:50:21,850 __bs: MainProcess Chudnovsky ... 128,000,000 iterations and 1521.31 seconds. [DEBUG] 2022-10-03 09:50:28,962 __bs: MainProcess Chudnovsky ... 129,000,000 iterations and 1528.42 seconds. [DEBUG] 2022-10-03 09:50:34,594 __bs: MainProcess Chudnovsky ... 130,000,000 iterations and 1534.05 seconds. [DEBUG] 2022-10-03 09:50:43,647 __bs: MainProcess Chudnovsky ... 131,000,000 iterations and 1543.10 seconds. [DEBUG] 2022-10-03 09:50:50,724 __bs: MainProcess Chudnovsky ... 132,000,000 iterations and 1550.18 seconds. [DEBUG] 2022-10-03 09:51:12,742 __bs: MainProcess Chudnovsky ... 133,000,000 iterations and 1572.20 seconds. [DEBUG] 2022-10-03 09:51:19,799 __bs: MainProcess Chudnovsky ... 134,000,000 iterations and 1579.26 seconds. [DEBUG] 2022-10-03 09:51:28,824 __bs: MainProcess Chudnovsky ... 135,000,000 iterations and 1588.28 seconds. [DEBUG] 2022-10-03 09:51:35,324 __bs: MainProcess Chudnovsky ... 136,000,000 iterations and 1594.78 seconds. [DEBUG] 2022-10-03 09:51:48,419 __bs: MainProcess Chudnovsky ... 137,000,000 iterations and 1607.88 seconds. [DEBUG] 2022-10-03 09:51:55,634 __bs: MainProcess Chudnovsky ... 138,000,000 iterations and 1615.09 seconds. [DEBUG] 2022-10-03 09:52:04,435 __bs: MainProcess Chudnovsky ... 
139,000,000 iterations and 1623.89 seconds. [DEBUG] 2022-10-03 09:52:11,583 __bs: MainProcess Chudnovsky ... 140,000,000 iterations and 1631.04 seconds. [DEBUG] 2022-10-03 09:52:17,222 __bs: MainProcess Chudnovsky ... 141,000,000 iterations and 1636.68 seconds. [DEBUG] 2022-10-03 10:02:43,939 compute: MainProcess Chudnovsky brothers 1988 π = (Q(0, N) / 12T(0, N) + 12AQ(0, N))**(C**(3/2)) calulation Done! 141,027,339 iterations and 2263.39 seconds. [INFO] 2022-10-03 10:09:07,119 <module>: MainProcess Last 5 digits of π were 45519 as expected at offset 999,999,995 [INFO] 2022-10-03 10:09:07,119 <module>: MainProcess Calculated π to 1,000,000,000 digits using a formula of: 10 Chudnovsky brothers 1988 π = (Q(0, N) / 12T(0, N) + 12AQ(0, N))**(C**(3/2)) [INFO] 2022-10-03 10:09:07,120 <module>: MainProcess Calculation took 141,027,339 iterations and 0:44:06.398345. math_pi.pi(b = 1000000) is faster to a million. About 40 times faster. But it cannot go to a Billion. 1 Million is the most digits. The GMPY Builtin looks like: python pi-pourri.py -v -d 1,000,000,000 -a 11 [INFO] 2022-10-03 14:33:34,729 <module>: MainProcess Computing π to 1,000,000,000 digits. [DEBUG] 2022-10-03 14:33:34,729 compute: MainProcess Starting const_pi() function from the gmpy2 library formula to 1,000,000,000 decimal places [DEBUG] 2022-10-03 15:46:46,575 compute: MainProcess const_pi() function from the gmpy2 library calulation Done! 1 iterations and 4391.85 seconds. [INFO] 2022-10-03 15:46:46,575 <module>: MainProcess Last 5 digits of π were 45519 as expected at offset 999,999,995 [INFO] 2022-10-03 15:46:46,575 <module>: MainProcess Calculated π to 1,000,000,000 digits using a formula of: 11 const_pi() function from the gmpy2 library [INFO] 2022-10-03 15:46:46,575 <module>: MainProcess Calculation took 1 iterations and 1:13:11.845652. The MPmath Builtin is almost as fast. About 12% (6 minutes) slower: python pi-pourri.py -v -a 12 -d 1,000,000,000 [INFO] 2022-10-04 09:10:37,085 <module>: MainProcess Computing π to 1,000,000,000 digits. [DEBUG] 2022-10-04 09:10:37,085 compute: MainProcess Starting mp.pi() function from the mpmath library formula to 1,000,000,000 decimal places [DEBUG] 2022-10-04 10:01:25,321 compute: MainProcess mp.pi() function from the mpmath library calulation Done! 1 iterations and 3048.22 seconds. [INFO] 2022-10-04 10:01:25,338 <module>: MainProcess Last 5 digits of π were 45519 as expected at offset 999,999,995 [INFO] 2022-10-04 10:01:25,340 <module>: MainProcess Calculated π to 1,000,000,000 digits using a formula of: 12 mp.pi() function from the mpmath library [INFO] 2022-10-04 10:01:25,343 <module>: MainProcess Calculation took 1 iterations and 0:50:48.250337.
{ "language": "en", "url": "https://stackoverflow.com/questions/19", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "351" }
Q: Throw an error preventing a table update in a MySQL trigger If I have a trigger before the update on a table, how can I throw an error that prevents the update on that table? A: CREATE TRIGGER sample_trigger_msg BEFORE INSERT ON mytable FOR EACH ROW BEGIN DECLARE dummy INT; IF (NEW.important_value < (1*2)) THEN SELECT `Enter your Message Here!!!` INTO dummy FROM mytable WHERE mytable.id = NEW.id; END IF; END; A: Here is one hack that may work. It isn't clean, but it looks like it might work: Essentially, you just try to update a column that doesn't exist. A: Another (hack) method (if you are not on 5.5+ for some reason) that you can use: If you have a required field, then within a trigger set the required field to an invalid value such as NULL. This will work for both INSERT and UPDATE. Do note that if NULL is a valid value for the required field (for some crazy reason) then this approach will not work. BEGIN -- Force one of the following to be assigned otherwise set required field to null which will throw an error IF (NEW.`nullable_field_1` IS NULL AND NEW.`nullable_field_2` IS NULL) THEN SET NEW.`required_id_field`=NULL; END IF; END If you are on 5.5+ then you can use the signal state as described in other answers: BEGIN -- Force one of the following to be assigned otherwise use signal sqlstate to throw a unique error IF (NEW.`nullable_field_1` IS NULL AND NEW.`nullable_field_2` IS NULL) THEN SIGNAL SQLSTATE '45000' set message_text='A unique identifier for nullable_field_1 OR nullable_field_2 is required!'; END IF; END A: Unfortunately, the answer provided by @RuiDC does not work in MySQL versions prior to 5.5 because there is no implementation of SIGNAL for stored procedures. The solution I've found is to simulate a signal by throwing a table_name doesn't exist error, pushing a customized error message into the table_name. The hack could be implemented using triggers or using a stored procedure. I describe both options below following the example used by @RuiDC. Using triggers DELIMITER $$ -- before inserting new id DROP TRIGGER IF EXISTS before_insert_id$$ CREATE TRIGGER before_insert_id BEFORE INSERT ON test FOR EACH ROW BEGIN -- condition to check IF NEW.id < 0 THEN -- hack to solve absence of SIGNAL/prepared statements in triggers UPDATE `Error: invalid_id_test` SET x=1; END IF; END$$ DELIMITER ; Using a stored procedure Stored procedures allow you to use dynamic SQL, which makes it possible to encapsulate the error generation functionality in one procedure. The counterpoint is that we should control the application's insert/update methods, so they use only our stored procedure (not granting direct privileges to INSERT/UPDATE). DELIMITER $$ -- my_signal procedure CREATE PROCEDURE `my_signal`(in_errortext VARCHAR(255)) BEGIN SET @sql=CONCAT('UPDATE `', in_errortext, '` SET x=1'); PREPARE my_signal_stmt FROM @sql; EXECUTE my_signal_stmt; DEALLOCATE PREPARE my_signal_stmt; END$$ CREATE PROCEDURE insert_test(p_id INT) BEGIN IF p_id < 0 THEN CALL my_signal('Error: invalid_id_test; Id must be a positive integer'); ELSE INSERT INTO test (id) VALUES (p_id); END IF; END$$ DELIMITER ; A: As of MySQL 5.5, you can use the SIGNAL syntax to throw an exception: signal sqlstate '45000' set message_text = 'My Error Message'; State 45000 is a generic state representing "unhandled user-defined exception".
Here is a more complete example of the approach: delimiter // use test// create table trigger_test ( id int not null )// drop trigger if exists trg_trigger_test_ins // create trigger trg_trigger_test_ins before insert on trigger_test for each row begin declare msg varchar(128); if new.id < 0 then set msg = concat('MyTriggerError: Trying to insert a negative value in trigger_test: ', cast(new.id as char)); signal sqlstate '45000' set message_text = msg; end if; end // delimiter ; -- run the following as separate statements: insert into trigger_test values (1), (-1), (2); -- everything fails as one row is bad select * from trigger_test; insert into trigger_test values (1); -- succeeds as expected insert into trigger_test values (-1); -- fails as expected select * from trigger_test; A: The following procedure is (on mysql5) a way to throw custom errors, and log them at the same time: create table mysql_error_generator(error_field varchar(64) unique) engine INNODB; DELIMITER $$ CREATE PROCEDURE throwCustomError(IN errorText VARCHAR(44)) BEGIN DECLARE errorWithDate varchar(64); select concat("[",DATE_FORMAT(now(),"%Y%m%d %T"),"] ", errorText) into errorWithDate; INSERT IGNORE INTO mysql_error_generator(error_field) VALUES (errorWithDate); -- the first INSERT IGNORE logs the message; the duplicate INSERT below then throws an error containing it INSERT INTO mysql_error_generator(error_field) VALUES (errorWithDate); END; $$ DELIMITER ; call throwCustomError("Custom error message with log support."); A: DELIMITER @@ DROP TRIGGER IF EXISTS trigger_name @@ CREATE TRIGGER trigger_name BEFORE UPDATE ON table_name FOR EACH ROW BEGIN -- the condition of error is: -- if the NEW update value of the attribute state = 1 and the OLD value was 0 -- the keywords OLD and NEW let you distinguish between the old and new values of an attribute IF (NEW.state = 1 AND OLD.state = 0) THEN signal sqlstate '45000' set message_text = 'hey it''s an error!'; END IF; END @@ DELIMITER ;
{ "language": "en", "url": "https://stackoverflow.com/questions/24", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "193" }
Q: How to use the C socket API in C++ on z/OS I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including sys/socket.h, I still get compile time errors telling me that AF_INET is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is an #ifdef that I'm hitting. Apparently z/OS isn't happy unless I define which "type" of sockets I'm using with: #define _OE_SOCKETS Now, I personally have no idea what this _OE_SOCKETS is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works? Test App #include <sys/socket.h> int main() { return AF_INET; } Compile/Link Output: cxx -Wc,xplink -Wl,xplink -o inet_test inet.C "./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration. CCN0797(I) Compilation failed for file ./inet.C. Object file not created. A check of sys/socket.h shows that it does include the definition I need, and as far as I can tell, it is not being blocked by any #ifdef statements. I have however noticed it contains the following: #ifdef __cplusplus extern "C" { #endif which encapsulates basically the whole file? Not sure if it matters. A: Keep a copy of the IBM manuals handy: * z/OS V1R11.0 XL C/C++ Programming Guide * z/OS V1R11.0 XL C/C++ Run-Time Library Reference The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature that you want to use is guarded by a "feature test macro". You should ask your friendly system programmer to install the XL C/C++ Run-Time Library Reference: Man Pages on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see: FORMAT X/Open #define _XOPEN_SOURCE_EXTENDED 1 #include <sys/socket.h> int connect(int socket, const struct sockaddr *address, socklen_t address_len); Berkeley Sockets #define _OE_SOCKETS #include <sys/types.h> #include <sys/socket.h> int connect(int socket, struct sockaddr *address, int address_len); A: I've had no trouble using the BSD sockets API in C++, in GNU/Linux. Here's the sample program I used: #include <sys/socket.h> int main() { return AF_INET; } So my take on this is that z/OS is probably the complicating factor here, however, because I've never used z/OS before, much less programmed in it, I can't say this definitively. :-P A: See the Using z/OS UNIX System Services sockets section in the z/OS XL C/C++ Programming Guide. Make sure you're including the necessary header files and using the appropriate #defines. The link to the doc has changed over the years, but you should be able to get to it easily enough by finding the current location of the Support & Downloads section on ibm.com and searching the documentation by title. A: So try #define _OE_SOCKETS before you include sys/socket.h A: The _OE_SOCKETS appears to be simply to enable/disable the definition of socket-related symbols. It is not uncommon in some libraries to have a bunch of macros to do that, to ensure that you're not compiling/linking parts that aren't needed. The macro is not standard in other sockets implementations, it appears to be something specific to z/OS. Take a look at this page: Compiling and Linking a z/VM C Sockets Program A: @Jax: The extern "C" thing matters, very very much.
If a header file doesn't have one, then (unless it's a C++-only header file), you would have to enclose your #include with it: extern "C" { #include <sys/socket.h> // include other similarly non-compliant header files } Basically, any time a C++ program wants to link to C-based facilities, the extern "C" is vital. In practical terms, it means that the names used in external references will not be mangled, like normal C++ names would. Reference. A: You may want to take a look at cpp-sockets, a C++ wrapper for the sockets system calls. It works with many operating systems (Win32, POSIX, Linux, *BSD). I don't think it will work with z/OS but you can take a look at the include files it uses and you'll have many examples of tested code that works well on other OSs. A: DISCLAIMER: I am not a C++ programmer, however I know C really well. I adapted these calls from some C code I have. Also markdown put these strange _ as my underscores. You should just be able to write an abstraction class around the C sockets with something like this: class my_sock { private: int sock; int socket_type; socklen_t sock_len; struct sockaddr_in server_addr; public: char *server_ip; unsigned short server_port; }; Then have methods for opening, closing, and sending packets down the socket. For example, the open call might look something like this: int my_socket_connect() { int return_code = 0; if ( this->socket_type != CLIENT_SOCK ) { cout << "This is not a client socket!\n"; return -1; } return_code = connect( this->sock, (struct sockaddr *) &this->server_addr, sizeof(this->server_addr)); if( return_code < 0 ) { cout << "Connect() failure! " << strerror(errno) << "\n"; return return_code; } return return_code; } A: Use the following c89 flag: -D_OE_SOCKETS Example: bash-2.03$ c89 -D_OE_SOCKETS [filename].c For more information look for c89 Options in the z/OS XLC/C++ User's Guide.
{ "language": "en", "url": "https://stackoverflow.com/questions/25", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: How to unload a ByteArray using Actionscript 3? How do I forcefully unload a ByteArray from memory using ActionScript 3? I have tried the following: // First non-working solution byteArray.length = 0; byteArray = new ByteArray(); // Second non-working solution for ( var i:int=0; i < byteArray.length; i++ ) { byteArray[i] = null; } A: I don't think you have anything to worry about. If System.totalMemory goes down you can relax. It may very well be the OS that doesn't reclaim the newly freed memory (in anticipation of the next time Flash Player will ask for more memory). Try doing something else that is very memory intensive and I'm sure that you'll notice that the memory allocated to Flash Player will decrease and be used for the other process instead. As I've understood it, memory management in modern OS's isn't intuitive from the perspective of looking at the amounts allocated to each process, or even the total amount allocated. When I've used my Mac for 5 minutes, 95% of my 3 GB RAM is used, and it will stay that way, it never goes down. That's just the way the OS handles memory. As long as it's not needed elsewhere, even processes that have quit still have memory assigned to them (this can make them launch quicker the next time, for example). A: (I'm not positive about this, but...) AS3 uses non-deterministic garbage collection, which means that no-longer-referenced memory will be freed up whenever the runtime feels like it (typically not unless there's a reason to run, since it's an expensive operation to execute). This is the same approach used by most modern garbage collecting languages (like C# and Java as well). Assuming there are no other references to the memory pointed to by byteArray or the items within the array itself, the memory will be freed at some point after you exit the scope where byteArray is declared. You can force a garbage collection, though you really shouldn't. If you do, do it only for testing. If you do it in production, you'll hurt performance much more than help it. To force a GC, try (yes, twice): flash.system.System.gc(); flash.system.System.gc(); You can read more here. A: Have a look at this article http://www.gskinner.com/blog/archives/2006/06/as3_resource_ma.html I am not an ActionScript programmer, but the feeling I'm getting is that the garbage collector might not run when you want it to. Hence http://www.craftymind.com/2008/04/09/kick-starting-the-garbage-collector-in-actionscript-3-with-air/ So I'd recommend trying out their collection code and seeing if it helps private var gcCount:int; private function startGCCycle():void{ gcCount = 0; addEventListener(Event.ENTER_FRAME, doGC); } private function doGC(evt:Event):void{ flash.system.System.gc(); if(++gcCount > 1){ removeEventListener(Event.ENTER_FRAME, doGC); setTimeout(lastGC, 40); } } private function lastGC():void{ flash.system.System.gc(); } A: Unfortunately when it comes to memory management in Flash/ActionScript there isn't a whole lot you can do. ActionScript was designed to be easy to use (so they didn't want people to have to worry about memory management). The following is a workaround: instead of creating a ByteArray variable, try this. var byteObject:Object = new Object(); byteObject.byteArray = new ByteArray(); ... //Then when you are finished delete the variable from byteObject delete byteObject.byteArray; Where byteArray is a dynamic property of byteObject, you can free the memory that was allocated for it. A: I believe you have answered your own question.
System.totalMemory gives you the total amount of memory being "used", not allocated. It is accurate that your application may only be using 20 MB, but it has 5 MB that is free for future allocations. I'm not sure whether the Adobe docs would shed light on the way that it manages memory. A: So, if I load say 20MB from MySQL, in the Task Manager the RAM for the application goes up by about 25MB. Then when I close the connection and try to dispose the ByteArray, the RAM never frees up. However, if I use System.totalMemory, flash player shows that the memory is being released, which is not the case. Is the flash player doing something like Java and reserving heap space and not releasing it until the app quits? Well yes and no, as you might have read from countless blog posts, the GC in AVM2 is optimistic and works in its own mysterious ways. So it does work a bit like Java and tries to reserve heap space. However, if you leave it long enough and start doing other operations that are consuming some significant memory, it will free that previous space. You can see this using the profiler overnight with some tests running on top of your app. A: So, if I load say 20MB from MySQL, in the Task Manager the RAM for the application goes up by about 25MB. Then when I close the connection and try to dispose the ByteArray, the RAM never frees up. However, if I use System.totalMemory, flash player shows that the memory is being released, which is not the case. The player is "releasing" the memory. If you minimize the window and restore it you should see that the memory is now much closer to what System.totalMemory shows. You might also be interested in using FlexBuilder's profiling tools which can show you if you really have memory leaks. A: Use bytearray.clear() As per the Language Reference, this clears the contents of the byte array and resets the length and position properties to 0. Calling this method explicitly frees up the memory used by the ByteArray instance.
{ "language": "en", "url": "https://stackoverflow.com/questions/34", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: Check for changes to an SQL Server table? How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is .NET and C#. I'd like to be able to support any SQL Server 2000 SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation. By "changes to a table" I mean changes to table data, not changes to table structure. Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval. The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the BINARY_CHECKSUM function in T-SQL. The way I plan to implement is this: Every X seconds run the following query: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); And compare that against the stored value. If the value has changed, go through the table row by row using the query: SELECT row_id, BINARY_CHECKSUM(*) FROM sample_table WITH (NOLOCK); And compare the returned checksums against stored values. A: Check the last commit date. Every database has a history of when each commit is made. I believe it's a standard of ACID compliance. A: Unfortunately CHECKSUM does not always work properly to detect changes. It is only a primitive checksum, not a cyclic redundancy check (CRC) calculation. Therefore you can't use it to detect all changes, e.g. symmetrical changes result in the same CHECKSUM! E.g. the solution with CHECKSUM_AGG(BINARY_CHECKSUM(*)) will always deliver 0 for all 3 tables with different content: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM ( SELECT 1 as numA, 1 as numB UNION ALL SELECT 1 as numA, 1 as numB ) q -- delivers 0! SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM ( SELECT 1 as numA, 2 as numB UNION ALL SELECT 1 as numA, 2 as numB ) q -- delivers 0! SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM ( SELECT 0 as numA, 0 as numB UNION ALL SELECT 0 as numA, 0 as numB ) q -- delivers 0! A: Why don't you want to use triggers? They are a good thing if you use them correctly. If you use them as a way to enforce referential integrity that is when they go from good to bad. But if you use them for monitoring, they are not really considered taboo. A: How often do you need to check for changes and how large (in terms of row size) are the tables in the database? If you use the CHECKSUM_AGG(BINARY_CHECKSUM(*)) method suggested by John, it will scan every row of the specified table. The NOLOCK hint helps, but on a large database, you are still hitting every row. You will also need to store the checksum for every row so that you can tell which one has changed. Have you considered going at this from a different angle? If you do not want to modify the schema to add triggers (which makes sense; it's not your database), have you considered working with the application vendor that does make the database? They could implement an API that provides a mechanism for notifying accessory apps that data has changed. It could be as simple as writing to a notification table that lists what table and which row were modified. That could be implemented through triggers or application code. From your side, it wouldn't matter; your only concern would be scanning the notification table on a periodic basis.
The performance hit on the database would be far less than scanning every row for changes. The hard part would be convincing the application vendor to implement this feature. Since this can be handled entirely through SQL via triggers, you could do the bulk of the work for them by writing and testing the triggers and then bringing the code to the application vendor. By having the vendor support the triggers, it prevents the situation where a trigger you add inadvertently replaces a trigger supplied by the vendor. A: Unfortunately, I do not think that there is a clean way to do this in SQL2000. If you narrow your requirements to SQL Server 2005 (and later), then you are in business. You can use the SQLDependency class in System.Data.SqlClient. See Query Notifications in SQL Server (ADO.NET). A: Have a DTS job (or a job that is started by a Windows service) that runs at a given interval. Each time it is run, it gets information about the given table by using the system INFORMATION_SCHEMA tables, and records this data in the data repository. Compare the data returned regarding the structure of the table with the data returned the previous time. If it is different, then you know that the structure has changed. Example query to return information regarding all of the columns in table ABC (ideally listing out just the columns from the INFORMATION_SCHEMA table that you want, instead of using select * like I do here): select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'ABC' You would monitor different columns and INFORMATION_SCHEMA views depending on how exactly you define "changes to a table". A: Wild guess here: If you don't want to modify the third party's tables, can you create a view and then put a trigger on that view? A: Take a look at the CHECKSUM command: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); That will return the same number each time it's run as long as the table contents haven't changed. See my post on this for more information: CHECKSUM Here's how I used it to rebuild cache dependencies when tables changed: ASP.NET 1.1 database cache dependency (without triggers)
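For completeness, here is a rough C# sketch of the polling plan described in the question (a sketch only: the connection string, table name, and 10-second interval are placeholder assumptions, and real code would add error handling):

using System;
using System.Data.SqlClient;
using System.Threading;

class TableChangeWatcher
{
    // Placeholder connection string and query; adjust for your environment.
    const string ConnString = "Server=.;Database=SampleDb;Integrated Security=true";
    const string Query = "SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK)";

    static void Main()
    {
        long? lastChecksum = null;
        while (true)
        {
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand(Query, conn))
            {
                conn.Open();
                object result = cmd.ExecuteScalar();
                // CHECKSUM_AGG over an empty table returns NULL.
                long current = result == null || result == DBNull.Value ? 0 : Convert.ToInt64(result);
                if (lastChecksum.HasValue && current != lastChecksum.Value)
                    Console.WriteLine("Table data changed; re-check rows with BINARY_CHECKSUM(*) here.");
                lastChecksum = current;
            }
            Thread.Sleep(TimeSpan.FromSeconds(10)); // "every X seconds"
        }
    }
}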
{ "language": "en", "url": "https://stackoverflow.com/questions/36", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "153" }
Q: Reliable timer in a console application I am aware that in .NET there are three timer types (see Comparing the Timer Classes in the .NET Framework Class Library). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable. The way this timer works is that control of the timer is put on another thread, so it can always tick along while work is being completed on the parent thread when it is not busy. The issue with this timer in a console application is that while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes. I tried adding a while true loop, but then the main thread is too busy when the timer does go off. A: You can use something like Console.ReadLine() to block the main thread, so other background threads (like timer threads) will still work. You may also use an AutoResetEvent to block the execution, then (when you need to) you can call the Set() method on that AutoResetEvent object to release the main thread. Also ensure that your reference to the Timer object doesn't go out of scope and get garbage collected. A: Consider using a ManualResetEvent to block the main thread at the end of its processing, and call Set() on it once the timer's processing has finished. If this is something that needs to run continuously, consider moving this into a service process instead of a console app. A: According to MSDN and the other answers, a minimal working example of a Console application using a System.Threading.Timer without exiting immediately: private static void Main() { using AutoResetEvent autoResetEvent = new AutoResetEvent(false); using Timer timer = new Timer(state => Console.WriteLine("One second has passed"), autoResetEvent, TimeSpan.Zero, new TimeSpan(0, 0, 1)); autoResetEvent.WaitOne(); }
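To make the ManualResetEvent suggestion above concrete, here is a minimal sketch of my own (not from the original answers): the main thread blocks on WaitOne(), and the timer callback calls Set() when the work is finished (a production version would use Interlocked for the counter):

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        using var done = new ManualResetEvent(false);
        int ticks = 0;

        // Keeping the timer in a variable also prevents it from being
        // garbage collected while it is still ticking.
        using var timer = new Timer(_ =>
        {
            Console.WriteLine($"Tick {++ticks}");
            if (ticks >= 5)
                done.Set(); // release the blocked main thread
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));

        done.WaitOne(); // block the main thread until the timer signals
    }
}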
{ "language": "en", "url": "https://stackoverflow.com/questions/39", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Best way to allow plugins for a PHP application I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface. How does one go about writing 'hooks' into their code so that plugins can attach to specific events? A: I am surprised that most of the answers here seem to be geared toward plugins that are local to the web application, i.e., plugins that run on the local web server. What if you wanted the plugins to run on a different - remote - server? The best way to do this would be to provide a form that allows you to define different URLs that would be called when particular events occur in your application. Different events would send different information based on the event that just occurred. This way, you would just perform a cURL call to the URL that has been provided to your application (e.g., over HTTPS) where remote servers can perform tasks based on information that has been sent by your application. This provides two benefits: * You don't have to host any code on your local server (security) * The code can be on remote servers (extensibility) in different languages other than PHP (portability) A: So let's say you don't want the Observer pattern because it requires that you change your class methods to handle the task of listening, and want something generic. And let's say you don't want to use extends inheritance because you may already be inheriting in your class from some other class. Wouldn't it be great to have a generic way to make any class pluggable without much effort? Here's how: <?php //////////////////// // PART 1 //////////////////// class Plugin { private $_RefObject; private $_Class = ''; public function __construct(&$RefObject) { $this->_Class = get_class($RefObject); $this->_RefObject = $RefObject; } public function __set($sProperty,$mixed) { $sPlugin = $this->_Class . '_' . $sProperty . '_setEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, array($mixed)); } $this->_RefObject->$sProperty = $mixed; } public function __get($sProperty) { $asItems = (array) $this->_RefObject; $mixed = $asItems[$sProperty]; $sPlugin = $this->_Class . '_' . $sProperty . '_getEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, array($mixed)); } return $mixed; } public function __call($sMethod,$mixed) { $sPlugin = $this->_Class . '_' . $sMethod . '_beforeEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, $mixed); } if ($mixed != 'BLOCK_EVENT') { call_user_func_array(array($this->_RefObject, $sMethod), (array) $mixed); $sPlugin = $this->_Class . '_' . $sMethod . '_afterEvent'; if (is_callable($sPlugin)) { call_user_func_array($sPlugin, (array) $mixed); } } } } //end class Plugin class Pluggable extends Plugin { } //end class Pluggable //////////////////// // PART 2 //////////////////// class Dog { public $Name = ''; public function bark($sHow) { echo "$sHow<br />\n"; } public function sayName() { echo "<br />\nMy Name is: " . $this->Name .
"<br />\n"; } } //end class Dog $Dog = new Dog(); //////////////////// // PART 3 //////////////////// $PDog = new Pluggable($Dog); function Dog_bark_beforeEvent(&$mixed) { $mixed = 'Woof'; // Override saying 'meow' with 'Woof' //$mixed = 'BLOCK_EVENT'; // if you want to block the event return $mixed; } function Dog_bark_afterEvent(&$mixed) { echo $mixed; // show the override } function Dog_Name_setEvent(&$mixed) { $mixed = 'Coco'; // override 'Fido' with 'Coco' return $mixed; } function Dog_Name_getEvent(&$mixed) { $mixed = 'Different'; // override 'Coco' with 'Different' return $mixed; } //////////////////// // PART 4 //////////////////// $PDog->Name = 'Fido'; $PDog->Bark('meow'); $PDog->SayName(); echo 'My New Name is: ' . $PDog->Name; In Part 1, that's what you might include with a require_once() call at the top of your PHP script. It loads the classes to make something pluggable. In Part 2, that's where we load a class. Note I didn't have to do anything special to the class, which is significantly different than the Observer pattern. In Part 3, that's where we switch our class around into being "pluggable" (that is, supports plugins that let us override class methods and properties). So, for instance, if you have a web app, you might have a plugin registry, and you could activate plugins here. Notice also the Dog_bark_beforeEvent() function. If I set $mixed = 'BLOCK_EVENT' before the return statement, it will block the dog from barking and would also block the Dog_bark_afterEvent because there wouldn't be any event. In Part 4, that's the normal operation code, but notice that what you might think would run does not run like that at all. For instance, the dog does not announce it's name as 'Fido', but 'Coco'. The dog does not say 'meow', but 'Woof'. And when you want to look at the dog's name afterwards, you find it is 'Different' instead of 'Coco'. All those overrides were provided in Part 3. So how does this work? Well, let's rule out eval() (which everyone says is "evil") and rule out that it's not an Observer pattern. So, the way it works is the sneaky empty class called Pluggable, which does not contain the methods and properties used by the Dog class. Thus, since that occurs, the magic methods will engage for us. That's why in parts 3 and 4 we mess with the object derived from the Pluggable class, not the Dog class itself. Instead, we let the Plugin class do the "touching" on the Dog object for us. (If that's some kind of design pattern I don't know about -- please let me know.) A: The hook and listener method is the most commonly used, but there are other things you can do. Depending on the size of your app, and who your going to allow see the code (is this going to be a FOSS script, or something in house) will influence greatly how you want to allow plugins. kdeloach has a nice example, but his implementation and hook function is a little unsafe. I would ask for you to give more information of the nature of php app your writing, And how you see plugins fitting in. +1 to kdeloach from me. A: Here is an approach I've used, it's an attempt to copy from Qt signals/slots mechanism, a kind of Observer pattern. Objects can emit signals. Every signal has an ID in the system - it's composed by sender's id + object name Every signal can be binded to the receivers, which simply is a "callable" You use a bus class to pass the signals to anybody interested in receiving them When something happens, you "send" a signal. 
Below is an example implementation <?php class SignalsHandler { /** * hash of senders/signals to slots * * @var array */ private static $connections = array(); /** * current sender * * @var class|object */ private static $sender; /** * connects an object/signal with a slot * * @param class|object $sender * @param string $signal * @param callable $slot */ public static function connect($sender, $signal, $slot) { if (is_object($sender)) { self::$connections[spl_object_hash($sender)][$signal][] = $slot; } else { self::$connections[md5($sender)][$signal][] = $slot; } } /** * sends a signal, so all connected slots are called * * @param class|object $sender * @param string $signal * @param array $params */ public static function signal($sender, $signal, $params = array()) { self::$sender = $sender; if (is_object($sender)) { if ( ! isset(self::$connections[spl_object_hash($sender)][$signal])) { return; } foreach (self::$connections[spl_object_hash($sender)][$signal] as $slot) { call_user_func_array($slot, (array)$params); } } else { if ( ! isset(self::$connections[md5($sender)][$signal])) { return; } foreach (self::$connections[md5($sender)][$signal] as $slot) { call_user_func_array($slot, (array)$params); } } self::$sender = null; } /** * returns a current signal sender * * @return class|object */ public static function sender() { return self::$sender; } } class User { public function login() { /* try to login and set $logged */ if ( ! $logged ) { SignalsHandler::signal($this, 'loginFailed', 'login failed - username not valid' ); } } } class App { public static function onFailedLogin($message) { print $message; } } $user = new User(); /* $Log is assumed to be some logger object exposing a writeLog() method */ SignalsHandler::connect($user, 'loginFailed', array($Log, 'writeLog')); SignalsHandler::connect($user, 'loginFailed', array('App', 'onFailedLogin')); $user->login(); ?> A: I believe the easiest way would be to follow Jeff's own advice and have a look around the existing code. Try looking at WordPress, Drupal, Joomla, and other well-known PHP-based CMS to see how their API hooks look and feel. This way you can even get ideas you may have not thought of previously to make things a little more robust. A more direct answer would be to write general files that they would "include_once" into their files to provide the functionality they need. This would be broken up into categories and NOT provided in one MASSIVE "hooks.php" file. Be careful though, because what ends up happening is that the files they include end up having more and more dependencies as functionality improves. Try to keep API dependencies low, i.e., fewer files for them to include. A: You could use an Observer pattern.
A simple functional way to accomplish this: <?php /** Plugin system **/ $listeners = array(); /* Create an entry point for plugins */ function hook() { global $listeners; $num_args = func_num_args(); $args = func_get_args(); if($num_args < 2) trigger_error("Insufficient arguments", E_USER_ERROR); // Hook name should always be first argument $hook_name = array_shift($args); if(!isset($listeners[$hook_name])) return $args; // No plugins have registered this hook foreach($listeners[$hook_name] as $func) { $args = $func($args); } return $args; } /* Attach a function to a hook */ function add_listener($hook, $function_name) { global $listeners; $listeners[$hook][] = $function_name; } ///////////////////////// /** Sample Plugin **/ add_listener('a_b', 'my_plugin_func1'); add_listener('str', 'my_plugin_func2'); function my_plugin_func1($args) { return array(4, 5); } function my_plugin_func2($args) { return str_replace('sample', 'CRAZY', $args[0]); } ///////////////////////// /** Sample Application **/ $a = 1; $b = 2; list($a, $b) = hook('a_b', $a, $b); $str = "This is my sample application\n"; $str .= "$a + $b = ".($a+$b)."\n"; $str .= "$a * $b = ".($a*$b)."\n"; $str = hook('str', $str); echo $str; ?> Output: This is my CRAZY application 4 + 5 = 9 4 * 5 = 20 Notes: For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook. This is just one method of accomplishing a plugin system in PHP. There are better alternatives, I suggest you check out the WordPress Documentation for more information. A: There's a neat project called Stickleback by Matt Zandstra at Yahoo that handles much of the work for handling plugins in PHP. It enforces the interface of a plugin class, supports a command line interface and isn't too hard to get up and running - especially if you read the cover story about it in the PHP architect magazine. A: Good advice is to look at how other projects have done it. Many call for having plugins installed and their "name" registered for services (like WordPress does) so you have "points" in your code where you call a function that identifies registered listeners and executes them. A standard OO design pattern is the Observer Pattern, which would be a good option to implement in a truly object oriented PHP system. The Zend Framework makes use of many hooking methods, and is very nicely architected. That would be a good system to look at.
{ "language": "en", "url": "https://stackoverflow.com/questions/42", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "294" }
Q: Multiple submit buttons in an HTML form Let's say you create a wizard in an HTML form. One button goes back, and one goes forward. Since the back button appears first in the markup when you press Enter, it will use that button to submit the form. Example: <form> <!-- Put your cursor in this field and press Enter --> <input type="text" name="field1" /> <!-- This is the button that will submit --> <input type="submit" name="prev" value="Previous Page" /> <!-- But this is the button that I WANT to submit --> <input type="submit" name="next" value="Next Page" /> </form> I would like to get to decide which button is used to submit the form when a user presses Enter. That way, when you press Enter the wizard will move to the next page, not the previous. Do you have to use tabindex to do this? A: From https://html.spec.whatwg.org/multipage/forms.html#implicit-submission A form element's default button is the first submit button in tree order whose form owner is that form element. If the user agent supports letting the user submit a form implicitly (for example, on some platforms hitting the "enter" key while a text field is focused implicitly submits the form)... Having the next input be type="submit" and changing the previous input to type="button" should give the desired default behavior. <form> <input type="text" name="field1" /> <!-- Put your cursor in this field and press Enter --> <input type="button" name="prev" value="Previous Page" /> <!-- This is no longer a submit button --> <input type="submit" name="next" value="Next Page" /> <!-- This is now the button that will submit on Enter --> </form> A: This is what I have tried out: * *You need to make sure you give your buttons different names *Write an if statement that will do the required action if either button is clicked.   <form> <input type="text" name="field1" /> <!-- Put your cursor in this field and press Enter --> <input type="submit" name="prev" value="Previous Page" /> <!-- This is the button that will submit --> <input type="submit" name="next" value="Next Page" /> <!-- But this is the button that I WANT to submit --> </form> In PHP, if(isset($_POST['prev'])) { header("Location: previous.html"); die(); } if(isset($_POST['next'])) { header("Location: next.html"); die(); } A: <input type="submit" name="prev" value="Previous Page"> <input type="submit" name="prev" value="Next Page"> Keep the name of all submit buttons the same: "prev". The only difference is the value attribute with unique values. When we create the script, these unique values will help us to figure out which of the submit buttons was pressed. And write the following code: btnID = "" If Request.Form("prev") = "Previous Page" Then btnID = "1" ElseIf Request.Form("prev") = "Next Page" Then btnID = "2" End If A: I came across this question when trying to find an answer to basically the same thing, only with ASP.NET controls, when I figured out that the ASP button has a property called UseSubmitBehavior that allows you to set which one does the submitting. <asp:Button runat="server" ID="SubmitButton" UseSubmitBehavior="False" Text="Submit" /> Just in case someone is looking for the ASP.NET button way to do it.
A: Change the previous button type into a button like this: <input type="button" name="prev" value="Previous Page" /> Now the Next button would be the default, plus you could also add the default attribute to it so that your browser will highlight it like so: <input type="submit" name="next" value="Next Page" default /> A: Give your submit buttons the same name like this: <input type="submit" name="submitButton" value="Previous Page" /> <input type="submit" name="submitButton" value="Next Page" /> When the user presses Enter and the request goes to the server, you can check the value for submitButton on your server-side code which contains a collection of form name/value pairs. For example, in ASP Classic: If Request.Form("submitButton") = "Previous Page" Then ' Code for the previous page ElseIf Request.Form("submitButton") = "Next Page" Then ' Code for the next page End If Reference: Using multiple submit buttons on a single form A: With JavaScript (here jQuery), you can demote the prev button to a plain button before the form is submitted: $('form').on('keypress', function(event) { if (event.which == 13) { $('input[name="prev"]').prop('type', 'button'); } }); A: If the fact that the first button is used by default is consistent across browsers, put them the right way around in the source code, and then use CSS to switch their apparent positions. Float them left and right to switch them around visually, for example. A: Sometimes the provided solution by palotasb is not sufficient. There are use cases where for example a "Filter" submit button is placed above buttons like "Next and Previous". I found a workaround for this: copy the submit button which needs to act as the default submit button into a hidden div and place it inside the form above any other submit button. Technically it will be submitted by a different button when pressing Enter than when clicking on the visible Next button. But since the name and value are the same, there's no difference in the result. <html> <head> <style> div.defaultsubmitbutton { display: none; } </style> </head> <body> <form action="action" method="get"> <div class="defaultsubmitbutton"> <input type="submit" name="next" value="Next"> </div> <p><input type="text" name="filter"><input type="submit" value="Filter"></p> <p>Filtered results</p> <input type="radio" name="choice" value="1">Filtered result 1 <input type="radio" name="choice" value="2">Filtered result 2 <input type="radio" name="choice" value="3">Filtered result 3 <div> <input type="submit" name="prev" value="Prev"> <input type="submit" name="next" value="Next"> </div> </form> </body> </html> A: I solved a very similar problem in this way: * *If JavaScript is enabled (in most cases nowadays) then all the submit buttons are "degraded" to buttons at page load via JavaScript (jQuery). Click events on the "degraded" button typed buttons are also handled via JavaScript. *If JavaScript is not enabled then the form is served to the browser with multiple submit buttons. In this case hitting Enter on a textfield within the form will submit the form with the first button instead of the intended default, but at least the form is still usable: you can submit with both the prev and next buttons. Working example: <html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script> </head> <body> <form action="http://httpbin.org/post" method="post"> If JavaScript is disabled, then you CAN submit the form with button1, button2 or button3.
If you press Enter on a text field, then the form is submitted with the first submit button. If JavaScript is enabled, then the submit typed buttons without the 'defaultSubmitButton' style are converted to button typed buttons. If you press Enter on a text field, then the form is submitted with the only submit button (the one with class defaultSubmitButton). If you click on any other button in the form, then the form is submitted with that button's value. <br /> <input type="text" name="text1" > <button type="submit" name="action" value="button1" >button 1</button> <br /> <input type="text" name="text2" > <button type="submit" name="action" value="button2" >button 2</button> <br /> <input type="text" name="text3" > <button class="defaultSubmitButton" type="submit" name="action" value="button3" >default button</button> </form> <script> $(document).ready(function(){ /* Change submit typed buttons without the 'defaultSubmitButton' style to button typed buttons */ $('form button[type=submit]').not('.defaultSubmitButton').each(function(){ $(this).prop('type', 'button'); }); /* Clicking on button typed buttons results in: 1. Setting the form's submit button's value to the clicked button's value, 2. Clicking on the form's submit button */ $('form button[type=button]').click(function( event ){ var form = event.target.closest('form'); var submit = $("button[type='submit']",form).first(); submit.val(event.target.value); submit.click(); }); }); </script> </body> </html> A: This cannot be done with pure HTML. You must rely on JavaScript for this trick. However, if you place two forms on the HTML page you can do this. Form1 would have the previous button. Form2 would have any user inputs + the next button. When the user presses Enter in Form2, the Next submit button would fire. A: I would use JavaScript to submit the form. The function would be triggered by the OnKeyPress event of the form element and would detect whether the Enter key was selected. If this is the case, it will submit the form. Here are two pages that give techniques on how to do this: 1, 2. Based on these, here is an example of usage: <SCRIPT TYPE="text/javascript">//<!-- function submitenter(myfield,e) { var keycode; if (window.event) { keycode = window.event.keyCode; } else if (e) { keycode = e.which; } else { return true; } if (keycode == 13) { myfield.form.submit(); return false; } else { return true; } } //--></SCRIPT> <INPUT NAME="MyText" TYPE="Text" onKeyPress="return submitenter(this,event)" /> A: If you really just want it to work like an install dialog, just give focus to the "Next" button OnLoad. That way if the user hits Return, the form submits and goes forward. If they want to go back they can hit Tab or click on the button. A: You can do it with CSS. Put the buttons in the markup with the Next button first, then the Prev button afterwards. Then use CSS to position them to appear the way you want. A: I'm just doing the trick of floating the buttons to the right. This way the Prev button is left of the Next button, but the Next comes first in the HTML structure: .f { float: right; } .clr { clear: both; } <form action="action" method="get"> <input type="text" name="abc"> <div id="buttons"> <input type="submit" class="f" name="next" value="Next"> <input type="submit" class="f" name="prev" value="Prev"> <div class="clr"></div><!-- This div prevents later elements from floating with the buttons.
Keeps them 'inside' div#buttons --> </div> </form> Benefits over other suggestions: no JavaScript code, accessible, and both buttons remain type="submit". A: This works without JavaScript or CSS in most browsers: <form> <p><input type="text" name="field1" /></p> <p><a href="previous.html"> <button type="button">Previous Page</button></a> <button type="submit">Next Page</button></p> </form> Firefox, Opera, Safari, and Google Chrome all work. As always, Internet Explorer is the problem. This version works when JavaScript is turned on: <form> <p><input type="text" name="field1" /></p> <p><a href="previous.html"> <button type="button" onclick="window.location='previous.html'">Previous Page</button></a> <button type="submit">Next Page</button></p> </form> So the flaw in this solution is: Previous Page does not work if you use Internet Explorer with JavaScript off. Mind you, the back button still works! A: If you have multiple active buttons on one page then you can do something like this: Mark the first button you want to trigger on the Enter keypress as the default button on the form. For the second button, associate it to the Backspace button on the keyboard. The Backspace keycode is 8. $(document).on("keydown", function(event) { if (event.which.toString() == "8") { var findActiveElementsClosestForm = $(document.activeElement).closest("form"); if (findActiveElementsClosestForm && findActiveElementsClosestForm.length) { $("form#" + findActiveElementsClosestForm[0].id + " .secondary_button").trigger("click"); } } }); <script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-3.2.1.min.js"></script> <form action="action" method="get" defaultbutton="TriggerOnEnter"> <input type="submit" id="PreviousButton" name="prev" value="Prev" class="secondary_button" /> <input type="submit" id='TriggerOnEnter' name="next" value="Next" class="primary_button" /> </form> A: Changing the tab order should be all it takes to accomplish this. Keep it simple. Another simple option would be to put the back button after the submit button in the HTML code but float it to the left so it appears on the page before the submit button. A: The first time I came up against this, I came up with an onclick()/JavaScript hack when choices are not prev/next that I still like for its simplicity. It goes like this: @model myApp.Models.myModel <script type="text/javascript"> function doOperation(op) { document.getElementById("OperationId").value = op; // you could also use jQuery to reference the element. } </script> <form> <input type="text" id="TextFieldId" name="TextField" value="" /> <input type="hidden" id="OperationId" name="Operation" value="" /> <input type="submit" name="write" value="Write" onclick='doOperation("Write")'/> <input type="submit" name="read" value="Read" onclick='doOperation("Read")'/> </form> When either submit button is clicked, it stores the desired operation in a hidden field (which is a string field included in the model the form is associated with) and submits the form to the Controller, which does all the deciding. In the Controller, you simply write: // Do operation according to which submit button was clicked // based on the contents of the hidden Operation field.
if (myModel.Operation == "Read") { // Do read logic } else if (myModel.Operation == "Write") { // Do write logic } else { // Do error logic } You can also tighten this up slightly using numeric operation codes to avoid the string parsing, but unless you play with enumerations, the code is less readable, modifiable, and self-documenting and the parsing is trivial, anyway. A: You can use Tabindex to solve this issue. Also changing the order of the buttons would be a more efficient way to achieve this. Change the order of the buttons and add float values to assign them the desired position you want to show in your HTML view. A: A maybe somewhat more modern approach over the CSS float method could be a solution using flexbox with the order property on the flex items. It could be something along those lines: <div style="display: flex"> <input type="submit" name="next" value="Next Page" style="order: 1" /> <input type="submit" name="prev" value="Previous Page" style="order: 0" /> </div> Of course it depends on your document structure whether this is a feasible approach or not, but I find flex items much easier to control than floating elements. A: Instead of struggling with multiple submits, JavaScript or anything like that to do some previous/next stuff, an alternative would be to use a carousel to simulate the different pages. Doing this: * *You don't need multiple buttons, inputs or submits to do the previous/next thing, you have only one input type="submit" in only one form. *The values in the whole form are there until the form is submitted. *The user can go to any previous page and any next page flawlessly to modify the values. Example using Bootstrap 5.0.0 (note that Bootstrap 5 namespaces the data attributes as data-bs-*): <div id="carousel" class="carousel slide" data-bs-ride="carousel"> <form action="index.php" method="post" class="carousel-inner"> <div class="carousel-item active"> <input type="text" name="lastname" placeholder="Lastname"/> </div> <div class="carousel-item"> <input type="text" name="firstname" placeholder="Firstname"/> </div> <div class="carousel-item"> <input type="submit" name="submit" value="Submit"/> </div> </form> <a class="btn-secondary" href="#carousel" role="button" data-bs-slide="prev">Previous page</a> <a class="btn-primary" href="#carousel" role="button" data-bs-slide="next">Next page</a> </div> A: I think this is an easy solution for this. Change the Previous button type to button, and add a new onclick attribute to the button with value jQuery(this).prop('type','submit');. So, when the user clicks on the Previous button then its type will be changed to submit and the form will be submitted with the Previous button. <form> <!-- Put your cursor in this field and press Enter --> <input type="text" name="field1" /> <!-- This is the button that will submit --> <input type="button" onclick="jQuery(this).prop('type','submit');" name="prev" value="Previous Page" /> <!-- But this is the button that I WANT to submit --> <input type="submit" name="next" value="Next Page" /> </form> A: Problem A form may have several submit buttons. When pressing return in any input, the first submit button is used by the browser. However, sometimes we want to use a different/later button as default.
Options * *Add a hidden submit button with the same action first (☹️ duplication) *Put the desired submit button first in the form and then move it to the correct place via CSS (☹️ may not be feasible, may result in cumbersome styling) *Change the handling of the return key in all form inputs via JavaScript (☹️ needs JavaScript) None of the options is ideal, so we choose option 3, because most browsers have JavaScript enabled. Chosen solution // example implementation document.addEventListener('DOMContentLoaded', (ev) => { for (const defaultSubmitInput of document.querySelectorAll('[data-default-submit]')) { for (const formInput of defaultSubmitInput.form.querySelectorAll('input')) { if (formInput.dataset.ignoreDefaultSubmit != undefined) { continue; } formInput.addEventListener('keypress', (ev) => { if (ev.keyCode == 13) { ev.preventDefault(); defaultSubmitInput.click(); } }) } } }); <!-- example markup --> <form action="https://postman-echo.com/get" method="get"> <input type="text" name="field1"> <input type="submit" name="submit" value="other action"> <input type="submit" name="submit" value="default action" data-default-submit> <!-- this button will be used on return --> </form> It may be useful to be able to remove the enhancement from some inputs. This can be achieved by: <input type="text" name="field2" data-ignore-default-submit> <!-- uses browser standard behaviour --> Here is a complete CodePen. A: Using the example you gave: <form> <input type="text" name="field1" /><!-- Put your cursor in this field and press Enter --> <input type="submit" name="prev" value="Previous Page" /> <!-- This is the button that will submit --> <input type="submit" name="next" value="Next Page" /> <!-- But this is the button that I WANT to submit --> </form> If you click on "Previous Page", only the value of "prev" will be submitted. If you click on "Next Page" only the value of "next" will be submitted. If however, you press Enter somewhere on the form, neither "prev" nor "next" will be submitted. So using pseudocode you could do the following: If "prev" submitted then Previous Page was clicked Else If "next" submitted then Next Page was clicked Else No button was clicked A: When a button is clicked with a mouse (and hopefully by touch), it records the X,Y coordinates. This is not the case when it is invoked by a form, and these values are normally zero. So you can do something like this: function(e) { const isArtificial = e.screenX === 0 && e.screenY === 0 && e.x === 0 && e.y === 0 && e.clientX === 0 && e.clientY === 0; if (isArtificial) { return; // DO NOTHING } else { // OPTIONAL: Don't submit the form when clicked // e.preventDefault(); // e.stopPropagation(); } // ...Natural code goes here }
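For completeness, a hedged sketch of wiring that handler up (the variable name and the button selector below are illustrative, not part of the answer above):

// Attach the coordinate check to the non-default button, so a synthetic
// click coming from Enter-key (implicit) submission is ignored.
var handlePrevClick = function(e) { /* the handler body from the answer above */ };
document.querySelector('input[name="prev"]').addEventListener('click', handlePrevClick);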
{ "language": "en", "url": "https://stackoverflow.com/questions/48", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "286" }
Q: How do I get a distinct, ordered list of names from a DataTable using LINQ? I have a DataTable with a Name column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the order by clause. var names = (from DataRow dr in dataTable.Rows orderby (string)dr["Name"] select (string)dr["Name"]).Distinct(); Why does the orderby not get enforced? A: Try out the following: dataTable.Rows.Cast<DataRow>().Select(dr => dr["Name"].ToString()).Distinct().OrderBy(name => name); A: The problem is that the Distinct operator does not guarantee that it will maintain the original order of values. So your query will need to work like this: var names = (from DataRow dr in dataTable.Rows select (string)dr["Name"]).Distinct().OrderBy( name => name ); A: To make it more readable and maintainable, you can also split it up into multiple LINQ statements. * *First, select your data into a new list, let's call it x1, do a projection if desired *Next, create a distinct list, from x1 into x2, using whatever distinction you require *Finally, create an ordered list, from x2 into x3, sorting by whatever you desire A: Try the following: var names = (from DataRow dr in dataTable.Rows select (string)dr["Name"]).Distinct().OrderBy(name => name); This should work for what you need. A: To abstract: all of the answers have something in common. OrderBy needs to be the final operation. A: You can use something like this: dataTable.Rows.Cast<DataRow>().GroupBy(g => g["Name"]).Select(s => s.First()).OrderBy(o => o["Name"]); A: var sortedTable = (from results in resultTable.AsEnumerable() select (string)results[attributeList]).Distinct().OrderBy(name => name);
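Pulling the corrected pieces together, a minimal self-contained sketch (the table contents are made up; AsEnumerable() needs a reference to System.Data.DataSetExtensions):

using System;
using System.Data;
using System.Linq;

class Program
{
    static void Main()
    {
        // Build a small DataTable to query against.
        var dataTable = new DataTable();
        dataTable.Columns.Add("Name", typeof(string));
        foreach (var n in new[] { "Carol", "Alice", "Carol", "Bob" })
            dataTable.Rows.Add(n);

        // Distinct first, then OrderBy last so the ordering survives.
        var names = dataTable.AsEnumerable()
                             .Select(dr => dr.Field<string>("Name"))
                             .Distinct()
                             .OrderBy(name => name);

        Console.WriteLine(string.Join(", ", names)); // Alice, Bob, Carol
    }
}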
{ "language": "en", "url": "https://stackoverflow.com/questions/59", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "117" }
Q: Microsoft Office 2007 file type, Mime types and identifying characters Where can I find a list of all of the MIME types and the identifying characters for Microsoft Office 2007 files? I have an upload form that is restricting uploads based on the extensions and identifying characters, but I cannot seem to find the Office 2007 MIME types. Can anyone help? A: Office 2007 MIME Types for IIS * *.docm, application/vnd.ms-word.document.macroEnabled.12 *.docx, application/vnd.openxmlformats-officedocument.wordprocessingml.document *.dotm, application/vnd.ms-word.template.macroEnabled.12 *.dotx, application/vnd.openxmlformats-officedocument.wordprocessingml.template *.potm, application/vnd.ms-powerpoint.template.macroEnabled.12 *.potx, application/vnd.openxmlformats-officedocument.presentationml.template *.ppam, application/vnd.ms-powerpoint.addin.macroEnabled.12 *.ppsm, application/vnd.ms-powerpoint.slideshow.macroEnabled.12 *.ppsx, application/vnd.openxmlformats-officedocument.presentationml.slideshow *.pptm, application/vnd.ms-powerpoint.presentation.macroEnabled.12 *.pptx, application/vnd.openxmlformats-officedocument.presentationml.presentation *.xlam, application/vnd.ms-excel.addin.macroEnabled.12 *.xlsb, application/vnd.ms-excel.sheet.binary.macroEnabled.12 *.xlsm, application/vnd.ms-excel.sheet.macroEnabled.12 *.xlsx, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet *.xltm, application/vnd.ms-excel.template.macroEnabled.12 *.xltx, application/vnd.openxmlformats-officedocument.spreadsheetml.template
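If you are serving these files from IIS 7 or later, a sketch of the corresponding web.config entries might look like this (only two extensions shown; the pattern is the same for the rest of the list):

<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".docx" mimeType="application/vnd.openxmlformats-officedocument.wordprocessingml.document" />
      <mimeMap fileExtension=".xlsx" mimeType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" />
    </staticContent>
  </system.webServer>
</configuration>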
{ "language": "en", "url": "https://stackoverflow.com/questions/61", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Paging a collection with LINQ How do you page through a collection in LINQ given that you have a startIndex and a count? A: It is very simple with the Skip and Take extension methods. var query = from i in ideas select i; var pagedCollection = query.Skip(startIndex).Take(count); A: A few months back I wrote a blog post about Fluent Interfaces and LINQ which used an Extension Method on IQueryable<T> and another class to provide the following natural way of paginating a LINQ collection. var query = from i in ideas select i; var pagedCollection = query.InPagesOf(10); var pageOfIdeas = pagedCollection.Page(2); You can get the code from the MSDN Code Gallery Page: Pipelines, Filters, Fluent API and LINQ to SQL. A: I solved this a bit differently than what the others have as I had to make my own paginator, with a repeater. So I first made a collection of page numbers for the collection of items that I have: // assumes that the item collection is "myItems" int pageCount = (myItems.Count + PageSize - 1) / PageSize; IEnumerable<int> pageRange = Enumerable.Range(1, pageCount); // pageRange contains [1, 2, ... , pageCount] Using this I could easily partition the item collection into a collection of "pages". A page in this case is just a collection of items (IEnumerable<Item>). This is how you can do it using Skip and Take together with selecting the index from the pageRange created above: IEnumerable<IEnumerable<Item>> pages = pageRange.Select((page, index) => myItems.Skip(index * PageSize).Take(PageSize)); Of course you have to handle each page as an additional collection but e.g. if you're nesting repeaters then this is actually easy to handle. The one-liner TLDR version would be this: var pages = Enumerable.Range(0, pageCount).Select(index => myItems.Skip(index * PageSize).Take(PageSize)); Which can be used like this: foreach (IEnumerable<Item> page in pages) { // handle page foreach (Item item in page) { // handle item in page } } A: This question is somewhat old, but I wanted to post my paging algorithm that shows the whole procedure (including user interaction). const int pageSize = 10; const int count = 100; const int startIndex = 20; int took = 0; bool getNextPage = true; var page = ideas.Skip(startIndex); do { Console.WriteLine("Page {0}:", (took / pageSize) + 1); foreach (var idea in page.Take(pageSize)) { Console.WriteLine(idea); } took += pageSize; if (took < count) { Console.WriteLine("Next page (y/n)?"); char answer = Console.ReadLine().FirstOrDefault(); getNextPage = default(char) != answer && 'y' == char.ToLowerInvariant(answer); if (getNextPage) { page = page.Skip(pageSize); } } } while (getNextPage && took < count); However, if you are after performance, and in production code, we're all after performance, you shouldn't use LINQ's paging as shown above, but rather the underlying IEnumerator to implement paging yourself.
As a matter of fact, it is as simple as the LINQ algorithm shown above, but more performant: const int pageSize = 10; const int count = 100; const int startIndex = 20; int took = 0; bool getNextPage = true; using (var page = ideas.Skip(startIndex).GetEnumerator()) { do { Console.WriteLine("Page {0}:", (took / pageSize) + 1); int currentPageItemNo = 0; while (currentPageItemNo++ < pageSize && page.MoveNext()) { var idea = page.Current; Console.WriteLine(idea); } took += pageSize; if (took < count) { Console.WriteLine("Next page (y/n)?"); char answer = Console.ReadLine().FirstOrDefault(); getNextPage = default(char) != answer && 'y' == char.ToLowerInvariant(answer); } } while (getNextPage && took < count); } Explanation: The downside of using Skip() multiple times in a "cascading manner" is that it does not really store a "pointer" to where the iteration was last skipped. Instead, the original sequence gets front-loaded with skip calls, which leads to "consuming" the already "consumed" pages over and over again. You can prove this yourself by creating the sequence ideas so that it yields side effects: even if you have skipped 10-20 and 20-30 and want to process 40+, you'll see all the side effects of 10-30 being executed again before you start iterating 40+. The variant using IEnumerator's interface directly will instead remember the position of the end of the last logical page, so no explicit skipping is needed and side effects won't be repeated.
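If you page in many places, wrapping Skip/Take in a small extension method keeps call sites tidy; a minimal sketch (the method name is made up):

using System.Collections.Generic;
using System.Linq;

public static class PagingExtensions
{
    // Returns the page at pageIndex (0-based), with pageSize items per page.
    public static IEnumerable<T> Page<T>(this IEnumerable<T> source, int pageIndex, int pageSize)
    {
        return source.Skip(pageIndex * pageSize).Take(pageSize);
    }
}

// Usage: var thirdPage = ideas.Page(2, 10);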
{ "language": "en", "url": "https://stackoverflow.com/questions/66", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: How do I add existing comments to RDoc in Ruby? I want to format my existing comments as 'RDoc comments' so they can be viewed using ri. What are some recommended resources for starting out using RDoc? A: A few things that have bitten me: * *:main: -- RDoc uses only the last one evaluated; best to make sure there's only one in your project and you don't also use the --main command-line argument. *same as previous, but for :title: *:section: doesn't work very well A: RDoc uses SimpleMarkup so it's fairly simple to create lists, etc. using *, - or a number. It also treats lines that are indented at the same column number as part of the same paragraph until there is an empty line which signifies a new paragraph. Do you have a few examples of comments you want RDoc'ed so we could show you how to do them and then you could extrapolate that for the rest of your comments?
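For reference, a short sketch of what RDoc-formatted comments look like in practice (the class is made up; note the heading, the list, and the indented lines, which RDoc renders as a code block):

# == Greeter
#
# A made-up example class showing common RDoc markup:
#
# * Lists are written with *, - or numbers
# * Lines indented to the same column belong to one paragraph
#
# Example usage (indented lines become a code block):
#
#   greeter = Greeter.new("world")
#   greeter.hello   # => "Hello, world!"
class Greeter
  def initialize(name)
    @name = name
  end

  # Returns a friendly greeting as a String.
  def hello
    "Hello, #{@name}!"
  end
end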
{ "language": "en", "url": "https://stackoverflow.com/questions/72", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Getting Subclipse in Aptana to work with the newest release of Subversion The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against Eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse that Aptana is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's OK since it's highly relevant to the programming experience. A: I've had problems with JavaHL in Eclipse Ganymede, when it worked fine in Eclipse Europa. I'm not sure how Aptana is different, but try either upgrading JavaHL or switching to the pure-Java SVNKit implementation within the Subclipse config. A: If you're not going to be using Mylyn, just uncheck that dependency. I'm not really familiar with Aptana, but in Eclipse you can expand what's being installed and uncheck anything you don't need. A: I used the update URL and installed the JavaHL adapter, the Subclipse project itself and the SVNKit adapter BETA. After this it worked fine for me. This is for the Linux platform; hope it works for you. A: Subclipse does not require Mylyn, but the update site includes a plugin that integrates Mylyn and Subclipse. This is intended for people that use Mylyn. In your case, you would want to just de-select Mylyn in the update dialog. Subclipse also requires Subversion 1.5 and the corresponding version of the JavaHL native libraries. I have written the start of an FAQ to help people understand JavaHL and how to get it. See: http://desktop-eclipse.open.collab.net/wiki/JavaHL
{ "language": "en", "url": "https://stackoverflow.com/questions/79", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: SQLStatement.execute() - multiple queries in one statement I've written a database generation script in SQL and want to execute it in my Adobe AIR application: Create Table tRole ( roleID integer Primary Key ,roleName varchar(40) ); Create Table tFile ( fileID integer Primary Key ,fileName varchar(50) ,fileDescription varchar(500) ,thumbnailID integer ,fileFormatID integer ,categoryID integer ,isFavorite boolean ,dateAdded date ,globalAccessCount integer ,lastAccessTime date ,downloadComplete boolean ,isNew boolean ,isSpotlight boolean ,duration varchar(30) ); Create Table tCategory ( categoryID integer Primary Key ,categoryName varchar(50) ,parent_categoryID integer ); ... I execute this in Adobe AIR using the following methods: public static function RunSqlFromFile(fileName:String):void { var file:File = File.applicationDirectory.resolvePath(fileName); var stream:FileStream = new FileStream(); stream.open(file, FileMode.READ); var strSql:String = stream.readUTFBytes(stream.bytesAvailable); NonQuery(strSql); } public static function NonQuery(strSQL:String):void { var sqlConnection:SQLConnection = new SQLConnection(); sqlConnection.open(File.applicationStorageDirectory.resolvePath(DBPATH)); var sqlStatement:SQLStatement = new SQLStatement(); sqlStatement.text = strSQL; sqlStatement.sqlConnection = sqlConnection; try { sqlStatement.execute(); } catch (error:SQLError) { Alert.show(error.toString()); } } No errors are generated, however only tRole exists. It seems that it only looks at the first query (up to the semicolon; if I remove it, the query fails). Is there a way to call multiple queries in one statement? A: What about making your delimiter something a little more complex like ";\n" which would not show up all that often. You just have to ensure when creating the file you have a line return or two in there. I end up putting two "\n\n" into the creation of my files, which works well. A: I wound up using this. It is kind of a hack, but it actually works pretty well. The only thing is you have to be very careful with your semicolons. :D var strSql:String = stream.readUTFBytes(stream.bytesAvailable); var i:Number = 0; var strSqlSplit:Array = strSql.split(";"); for (i = 0; i < strSqlSplit.length; i++){ NonQuery(strSqlSplit[i].toString()); } A: The SQLite API has a function called something like sqlite_prepare which takes one statement and prepares it for execution, essentially parsing the SQL and storing it in memory. This means that the SQL only has to be sent once to the database engine even though the statement is executed many times. Anyway, a statement is a single SQL query, that's just the rule. The AIR SQL API doesn't allow sending raw SQL to SQLite, only single statements, and the reason is, likely, that AIR uses the sqlite_prepare function when it talks to SQLite.
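If you go with the splitting approach above, a slightly hardened sketch that trims each fragment and skips empty ones, so a trailing semicolon or blank line doesn't fire an empty query (this assumes the Flex mx.utils.StringUtil helper is available):

import mx.utils.StringUtil;

var strSql:String = stream.readUTFBytes(stream.bytesAvailable);
for each (var statement:String in strSql.split(";"))
{
    // Skip blank fragments produced by trailing semicolons or blank lines.
    var trimmed:String = StringUtil.trim(statement);
    if (trimmed.length > 0)
    {
        NonQuery(trimmed);
    }
}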
{ "language": "en", "url": "https://stackoverflow.com/questions/80", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Flat file databases What are the best practices around creating flat file database structures in PHP? There are a lot of more mature PHP flat file frameworks out there that attempt to implement SQL-like query syntax, which is over the top for my purposes in most cases. (I would just use a database at that point.) Are there any elegant tricks out there to get good performance and features with a small code overhead? A: Here's the code we use for Lilina: <?php /** * Handler for persistent data files * * @author Ryan McCue <cubegames@gmail.com> * @package Lilina * @version 1.0 * @license http://opensource.org/licenses/gpl-license.php GNU Public License */ /** * Handler for persistent data files * * @package Lilina */ class DataHandler { /** * Directory to store data. * * @since 1.0 * * @var string */ protected $directory; /** * Constructor, duh. * * @since 1.0 * @uses $directory Holds the data directory, which the constructor sets. * * @param string $directory */ public function __construct($directory = null) { if ($directory === null) $directory = get_data_dir(); if (substr($directory, -1) != '/') $directory .= '/'; $this->directory = (string) $directory; } /** * Prepares filename and content for saving * * @since 1.0 * @uses $directory * @uses put() * * @param string $filename Filename to save to * @param string $content Content to save to cache */ public function save($filename, $content) { $file = $this->directory . $filename; if(!$this->put($file, $content)) { trigger_error(get_class($this) . " error: Couldn't write to $file", E_USER_WARNING); return false; } return true; } /** * Saves data to file * * @since 1.0 * @uses $directory * * @param string $file Filename to save to * @param string $data Data to save into $file */ protected function put($file, $data, $mode = false) { if(file_exists($file) && file_get_contents($file) === $data) { touch($file); return true; } if(!$fp = @fopen($file, 'wb')) { return false; } fwrite($fp, $data); fclose($fp); $this->chmod($file, $mode); return true; } /** * Change the file permissions * * @since 1.0 * * @param string $file Absolute path to file * @param integer $mode Octal mode */ protected function chmod($file, $mode = false){ if(!$mode) $mode = 0644; return @chmod($file, $mode); } /** * Returns the content of the cached file if it is still valid * * @since 1.0 * @uses $directory * @uses check() Check if cache file is still valid * * @param string $id Unique ID for content type, used to distinguish between different caches * @return null|string Content of the cached file if valid, otherwise null */ public function load($filename) { return $this->get($this->directory . $filename); } /** * Returns the content of the file * * @since 1.0 * @uses $directory * @uses check() Check if file is valid * * @param string $id Filename to load data from * @return bool|string Content of the file if valid, otherwise null */ protected function get($filename) { if(!$this->check($filename)) return null; return file_get_contents($filename); } /** * Check a file for validity * * Basically just a fancy alias for file_exists(), made primarily to be * overridden. * * @since 1.0 * @uses $directory * * @param string $id Unique ID for content type, used to distinguish between different caches * @return bool False if the cache doesn't exist or is invalid, otherwise true */ protected function check($filename){ return file_exists($filename); } /** * Delete a file * * @param string $filename Unique ID */ public function delete($filename) { return unlink($this->directory . $filename); } } ?> It stores each entry as a separate file, which we found is efficient enough for use (no unneeded data is loaded and it's faster to save). A: IMHO, you have two... er, three options if you want to avoid homebrewing something: * *SQLite If you're familiar with PDO, you can install a PDO driver that supports SQLite. Never used it, but I have used PDO a ton with MySQL. I'm going to give this a shot on a current project. *XML Done this many times for relatively small amounts of data. XMLReader is a lightweight, read-forward, cursor-style class. SimpleXML makes it simple to read an XML document into an object that you can access just like any other class instance. *JSON (update) Good option for smallish amounts of data, just read/write file and json_decode/json_encode. Not sure if PHP offers a structure to navigate a JSON tree without loading it all in memory though. A: Well, what is the nature of the flat file databases? Are they large or small? Are they simple arrays with arrays in them? If it's something simple, say user profiles, built as such: $user = array("name" => "bob", "age" => 20, "websites" => array("example.com","bob.example.com","bob2.example.com"), "and_one" => "more"); then to save or update the db record for that user: $dir = "../userdata/"; //make sure to put it outside of what the web server can reach file_put_contents($dir.$user['name'],serialize($user)); and to load the record for the user: function &get_user($name){ return unserialize(file_get_contents("../userdata/".$name)); } but again, this implementation will vary with the application and the nature of the database you need. A: If you're going to use a flat file to persist data, use XML to structure the data. PHP has a built-in XML parser. A: If you want a human-readable result, you can also use this type of file: ofaurax|27|male|something| another|24|unknown|| ... This way, you have only one file, you can debug it (and manually fix it) easily, you can add fields later (at the end of each line) and the PHP code is simple (for each line, split according to |). However, the drawback is that you must parse the entire file to search for something (if you have millions of entries, that's not fine) and you must handle the separator in data (for example, if the nick is WaR|ordz). A: I have written two simple functions designed to store data in a file. You can judge for yourself if it's useful in this case. The point is to save a PHP variable (whether it's an array, a string or an object) to a file.
<?php function varname(&$var) { $oldvalue=$var; $var='AAAAB3NzaC1yc2EAAAABIwAAAQEAqytmUAQKMOj24lAjqKJC2Gyqhbhb+DmB9eDDb8+QcFI+QOySUpYDn884rgKB6EAtoFyOZVMA6HlNj0VxMKAGE+sLTJ40rLTcieGRCeHJ/TI37e66OrjxgB+7tngKdvoG5EF9hnoGc4eTMpVUDdpAK3ykqR1FIclgk0whV7cEn/6K4697zgwwb5R2yva/zuTX+xKRqcZvyaF3Ur0Q8T+gvrAX8ktmpE18MjnA5JuGuZFZGFzQbvzCVdN52nu8i003GEFmzp0Ny57pWClKkAy3Q5P5AR2BCUwk8V0iEX3iu7J+b9pv4LRZBQkDujaAtSiAaeG2cjfzL9xIgWPf+J05IQ=='; foreach($GLOBALS as $var_name => $value) { if ($value === 'AAAAB3NzaC1yc2EAAAABIwAAAQEAqytmUAQKMOj24lAjqKJC2Gyqhbhb+DmB9eDDb8+QcFI+QOySUpYDn884rgKB6EAtoFyOZVMA6HlNj0VxMKAGE+sLTJ40rLTcieGRCeHJ/TI37e66OrjxgB+7tngKdvoG5EF9hnoGc4eTMpVUDdpAK3ykqR1FIclgk0whV7cEn/6K4697zgwwb5R2yva/zuTX+xKRqcZvyaF3Ur0Q8T+gvrAX8ktmpE18MjnA5JuGuZFZGFzQbvzCVdN52nu8i003GEFmzp0Ny57pWClKkAy3Q5P5AR2BCUwk8V0iEX3iu7J+b9pv4LRZBQkDujaAtSiAaeG2cjfzL9xIgWPf+J05IQ==') { $var=$oldvalue; return $var_name; } } $var=$oldvalue; return false; } function putphp(&$var, $file=false) { $varname=varname($var); if(!$file) { $file=$varname.'.php'; } $pathinfo=pathinfo($file); if(file_exists($file)) { if(is_dir($file)) { $file=$pathinfo['dirname'].'/'.$pathinfo['basename'].'/'.$varname.'.php'; } } file_put_contents($file,'<?php'."\n\$".$varname.'='.var_export($var, true).";\n"); return true; } A: This one is inspiring as a practical solution: https://github.com/mhgolkar/FlatFire It uses multiple strategies to handle data... [Copied from Readme File] Free or Structured or Mixed - STRUCTURED Regular (table, row, column) format. [DATABASE] / \ TX TableY \_____________________________ |ROW_0 Colum_0 Colum_1 Colum_2| |ROW_1 Colum_0 Colum_1 Colum_2| |_____________________________| - FREE More creative data storing. You can store data in any structure you want for each (free) element, it's similar to storing an array with a unique "Id". [DATABASE] / \ EX ElementY (ID) \________________ |Field_0 Value_0 | |Field_1 Value_1 | |Field_2 Value_2 | |________________| recall [ID]: get_free("ElementY") --> array([Field_0]=>Value_0,[Field_1]=>Value_1... - MIXD (Mixed) Mixed databases can store both free elements and tables. If you add a table to a free db or a free element to a structured db, FlatFire will automatically convert FREE or SRCT to MIXD database. [DATABASE] / \ EX TY A: You might consider SQLite. It's almost as simple as flat files, but you do get a SQL engine for querying. It works well with PHP too. A: Just pointing out a potential problem with a flat file database with this type of system: data|some text|more data row 2 data|bla hbalh|more data ...etc The problem is that if the cell data contains a "|" or a "\n", the data will be lost. Sometimes it would be easier to split by combinations of letters that most people wouldn't use. For example: Column splitter: #$% (Shift+345) Row splitter: ^&* (Shift+678) Text file: test data#$%blah blah#$%^&*new row#$%new row data 2 Then use: explode("#$%", $data); use foreach, then explode again to separate columns Or anything along these lines. Also, I might add that flat file databases are good for systems with small amounts of data (i.e. less than 20 rows), but become huge memory hogs for larger databases. A: In my opinion, using a "Flat File Database" in the sense you're meaning (and the answer you've accepted) isn't necessarily the best way to go about things. First of all, using serialize() and unserialize() can cause MAJOR headaches if someone gets in and edits the file (they can, in fact, put arbitrary code in your "database" to be run each time.)
Personally, I'd say - why not look to the future? There have been so many times that I've had issues because I've been creating my own "proprietary" files, and the project has exploded to a point where it needs a database, and I'm thinking "you know, I wish I'd written this for a database to start with" - because the refactoring of the code takes way too much time and effort. From this I've learnt that future-proofing my application, so that I don't have to spend days refactoring when it gets bigger, is the way to go forward. How do I do this? SQLite. It works as a database, uses SQL, and is pretty easy to change over to MySQL (especially if you're using abstracted classes for database manipulation like I do!) In fact, especially with the "accepted answer"'s method, it can drastically cut the memory usage of your app (you don't have to load all the "RECORDS" into PHP). A: One framework I'm considering would be for a blogging platform. Since just about any possible view of data you would want would be sorted by date, I was thinking about this structure: One directory per content node: ./content/YYYYMMDDHHMMSS/ Subdirectories of each node including /tags /authors /comments As well as simple text files in the node directory for pre- and post-rendered content and the like. This would allow a simple PHP glob() call (and probably a reversal of the result array) to query on just about anything within the content structure: glob("content/*/tags/funny"); would return paths including all articles tagged "funny".
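A sketch of how reading those hits back might look (the rendered.html file name inside each node directory is hypothetical):

<?php
// For each node tagged "funny", walk up to the node directory
// and read its pre-rendered content.
foreach (glob("content/*/tags/funny") as $tagPath) {
    $nodeDir = dirname(dirname($tagPath));               // e.g. content/20080801120000
    echo file_get_contents($nodeDir . "/rendered.html"); // hypothetical file name
}
?>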
{ "language": "en", "url": "https://stackoverflow.com/questions/85", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: Is gettimeofday() guaranteed to be of microsecond resolution? I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux). I have implemented QueryPerformanceCounter by giving the uSeconds since the process start up: BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount) { gettimeofday(&currentTimeVal, NULL); performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec); performanceCount->QuadPart *= (1000 * 1000); performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec); return true; } This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, giving me a 64-bit variable that contains uSeconds since the program's start-up. So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however. A: The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate. I obtained this answer from High Resolution Time Measurement and Timers, Part I A: So it says microseconds explicitly, but says the resolution of the system clock is unspecified. I suppose resolution in this context means the smallest amount it will ever be incremented by? The data structure is defined as having microseconds as a unit of measurement, but that doesn't mean that the clock or operating system is actually capable of measuring that finely. Like other people have suggested, gettimeofday() is bad because setting the time can cause clock skew and throw off your calculation. clock_gettime(CLOCK_MONOTONIC) is what you want, and clock_getres() will tell you the precision of your clock. A: Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (i.e., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10us. It can jump forward and backward in time, consequently, based on the processes running on your system. This effectively makes the answer to your question no. You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from fewer issues due to things like multi-core systems and external clock settings. Also, look into the clock_getres() function. A: This answer mentions problems with the clock being adjusted. Both your problem of guaranteeing tick units and the problem of the time being adjusted are solved in C++11 with the <chrono> library. The clock std::chrono::steady_clock is guaranteed not to be adjusted, and furthermore it will advance at a constant rate relative to real time, so technologies like SpeedStep must not affect it. You can get typesafe units by converting to one of the std::chrono::duration specializations, such as std::chrono::microseconds. With this type there's no ambiguity about the units used by the tick value. However, keep in mind that the clock doesn't necessarily have this resolution. You can convert a duration to attoseconds without actually having a clock that accurate.
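To make the <chrono> suggestion concrete, a small sketch of interval timing with steady_clock (C++11):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();

    // ... work being timed ...

    auto end = std::chrono::steady_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "elapsed: " << elapsed.count() << " us\n";
    return 0;
}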
A: High Resolution, Low Overhead Timing for Intel Processors If you're on Intel hardware, here's how to read the CPU real-time instruction counter. It will tell you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement. Note that this is the number of CPU cycles. On Linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds. Converting this to a double is quite handy. When I run this on my box, I get 11867927879484732 11867927879692217 it took this long to call printf: 207485 Here's the Intel developer's guide that gives tons of detail. #include <stdio.h> #include <stdint.h> inline uint64_t rdtsc() { uint32_t lo, hi; __asm__ __volatile__ ( "xorl %%eax, %%eax\n" "cpuid\n" "rdtsc\n" : "=a" (lo), "=d" (hi) : : "%ebx", "%ecx"); return (uint64_t)hi << 32 | lo; } int main() { unsigned long long x; unsigned long long y; x = rdtsc(); printf("%llu\n",x); y = rdtsc(); printf("%llu\n",y); printf("it took this long to call printf: %llu\n",y-x); } A: From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc. A: Reading the RDTSC is not reliable in SMP systems, since each CPU maintains its own counter and each counter is not guaranteed to be synchronized with respect to another CPU. I might suggest trying clock_gettime(CLOCK_REALTIME). The POSIX manual indicates that this should be implemented on all compliant systems. It can provide a nanosecond count, but you probably will want to check clock_getres(CLOCK_REALTIME) on your system to see what the actual resolution is. A: @Bernard: I have to admit, most of your example went straight over my head. It does compile, and seems to work, though. Is this safe for SMP systems or SpeedStep? That's a good question... I think the code's ok. From a practical standpoint, we use it in my company every day, and we run on a pretty wide array of boxes, everything from 2-8 cores. Of course, YMMV, etc, but it seems to be a reliable and low-overhead (because it doesn't make a context switch into system-space) method of timing. Generally how it works is: * *declare the block of code to be assembler (and volatile, so the optimizer will leave it alone). *execute the CPUID instruction. In addition to getting some CPU information (which we don't do anything with) it synchronizes the CPU's execution buffer so that the timings aren't affected by out-of-order execution. *execute the rdtsc (read timestamp) instruction. This fetches the number of machine cycles executed since the processor was reset. This is a 64-bit value, so with current CPU speeds it will wrap around every 194 years or so. Interestingly, in the original Pentium reference, they note it wraps around every 5800 years or so. *the last couple of lines store the values from the registers into the variables hi and lo, and put that into the 64-bit return value. Specific notes: * *out-of-order execution can cause incorrect results, so we execute the "cpuid" instruction which in addition to giving you some information about the CPU also synchronizes any out-of-order instruction execution. *Most OS's synchronize the counters on the CPUs when they start, so the answer is good to within a couple of nanoseconds. *The hibernating comment is probably true, but in practice you probably don't care about timings across hibernation boundaries.
*regarding speedstep: Newer Intel CPUs compensate for the speed changes and return an adjusted count. I did a quick scan over some of the boxes on our network and found only one box that didn't have it: a Pentium 3 running some old database server. (These are Linux boxes, so I checked with: grep constant_tsc /proc/cpuinfo) *I'm not sure about the AMD CPUs, we're primarily an Intel shop, although I know some of our low-level systems gurus did an AMD evaluation. Hope this satisfies your curiosity; it's an interesting and (IMHO) under-studied area of programming. You know when Jeff and Joel were talking about whether or not a programmer should know C? I was shouting at them, "hey forget that high-level C stuff... assembler is what you should learn if you want to know what the computer is doing!" A: You may be interested in Linux FAQ for clock_gettime(CLOCK_REALTIME) A: Wine is actually using gettimeofday() to implement QueryPerformanceCounter() and it is known to make many Windows games work on Linux and Mac. Starts http://source.winehq.org/source/dlls/kernel32/cpu.c#L312 leads to http://source.winehq.org/source/dlls/ntdll/time.c#L448
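To make the clock_gettime(CLOCK_MONOTONIC) route mentioned in several answers concrete, a minimal C sketch (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t0, t1;

    /* Report the clock's advertised resolution. */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
    return 0;
}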
{ "language": "en", "url": "https://stackoverflow.com/questions/88", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: How do you branch and merge with TortoiseSVN? How do you branch and merge with Apache Subversion using the TortoiseSVN client? A: You can also try Version Control for the Standalone Programmer - Part 1 or perhaps Merging with TortoiseSVN. A: My easy click-by-click instructions (specific to TortoiseSVN) are in Stack Overflow question What is the simplest way to do branching and merging using TortoiseSVN?. A: Version Control with Subversion A very good resource for source control in general. Not really TortoiseSVN specific, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/90", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: Anatomy of a "Memory Leak" From a .NET perspective: * *What is a memory leak? *How can you determine whether your application leaks? What are the effects? *How can you prevent a memory leak? *If your application has a memory leak, does it go away when the process exits or is killed? Or do memory leaks in your application affect other processes on the system even after process completion? *And what about unmanaged code accessed via COM Interop and/or P/Invoke? A: "I guess in a managed environment, a leak would be you keeping an unnecessary reference to a large chunk of memory around." Absolutely. Also, not using the .Dispose() method on disposable objects when appropriate can cause memory leaks. The easiest way to do it is with a using block because it automatically executes .Dispose() at the end: StreamReader sr; using(sr = new StreamReader("somefile.txt")) { //do some stuff } And if you create a class that is using unmanaged objects, if you're not implementing IDisposable correctly, you could be causing memory leaks for your class's users. A: All memory leaks are resolved by program termination. Leak enough memory and the Operating System may decide to resolve the problem on your behalf. A: I will concur with Bernard as to what a memory leak in .NET would be. You could profile your application to see its memory use, and determine that if it's managing a lot of memory when it should not be, you could say it has a leak. In managed terms I will put my neck on the line to say it does go away once the process is killed/removed. Unmanaged code is its own beast and if a leak exists within it, it will follow a standard memory leak definition. A: Also keep in mind that .NET has two heaps, one being the large object heap. I believe objects of roughly 85k or larger are put on this heap. This heap has different lifetime rules than the regular heap. If you are creating large memory structures (Dictionaries or Lists) it would be prudent to go look up what the exact rules are. As far as reclaiming the memory on process termination, unless you're running Win98 or its equivalents, everything is released back to the OS on termination. The only exceptions are things that are opened cross-process and another process still has the resource open. COM Objects can be tricky though. If you always use the IDisposable pattern, you'll be safe. But I've run across a few interop assemblies that implement IDisposable. The key here is to call Marshal.ReleaseComObject when you're done with it. The COM Objects still use standard COM reference counting. A: I found .NET Memory Profiler a very good help when finding memory leaks in .NET. It's not free like the Microsoft CLR Profiler, but is faster and more to the point in my opinion. A: Strictly speaking, a memory leak is consuming memory that is "no longer used" by the program. "No longer used" has more than one meaning; it could mean "no more reference to it", that is, totally unrecoverable, or it could mean referenced, recoverable, unused, but the program keeps the references anyway. Only the latter applies to .NET for perfectly managed objects. However, not all classes are perfect and at some point an underlying unmanaged implementation could leak resources permanently for that process. In all cases, the application consumes more memory than strictly needed. The side effects, depending on the amount leaked, could go from none, to slowdown caused by excessive collection, to a series of memory exceptions and finally a fatal error followed by forced process termination.
You know an application has a memory problem when monitoring shows that more and more memory is allocated to your process after each garbage collection cycle. In such a case, you are either keeping too much in memory, or some underlying unmanaged implementation is leaking. For most leaks, resources are recovered when the process is terminated; however, some resources are not always recovered in some precise cases. GDI cursor handles are notorious for that. Of course, if you have an interprocess communication mechanism, memory allocated in the other process would not be freed until that process frees it or terminates. A: I think the "what is a memory leak" and "what are the effects" questions have been answered well already, but I wanted to add a few more things on the other questions... How to tell whether your application leaks One interesting way is to open perfmon and add traces for # bytes in all heaps and # Gen 2 collections, in each case looking just at your process. If exercising a particular feature causes the total bytes to increase, and that memory remains allocated after the next Gen 2 collection, you might say that the feature leaks memory. How to prevent Other good opinions have been given. I would just add that perhaps the most commonly overlooked cause of .NET memory leaks is to add event handlers to objects without removing them. An event handler attached to an object is a form of reference to that object, so it will prevent collection even after all other references have gone. Always remember to detach event handlers (using the -= syntax in C#). Does the leak go away when the process exits, and what about COM interop? When your process exits, all memory mapped into its address space is reclaimed by the OS, including any COM objects served from DLLs. Comparatively rarely, COM objects can be served from separate processes. In this case, when your process exits, you may still be responsible for memory allocated in any COM server processes that you used. A: I would define a memory leak as an object not freeing up all the memory allocated to it after it has completed. I have found this can happen in your application if you are using the Windows API and COM (i.e. unmanaged code that has a bug in it or is not being managed correctly), in the framework, and in third-party components. I have also found that not tidying up after using certain objects like pens can cause the issue. I personally have suffered OutOfMemoryExceptions, which can be caused by, but are not exclusive to, memory leaks in .NET applications. (OOM can also come from pinning; see the pinning article.) If you are not getting OOM errors, or need to confirm whether a memory leak is causing them, then the only way is to profile your application. I would also try to ensure the following: a) Everything that implements IDisposable is disposed, either using a finally block or the using statement; this includes brushes, pens, etc. (some people argue for setting everything to nothing in addition). b) Anything that has a Close method is closed, again using finally or the using statement (although I have found that using does not always close, depending on whether you declared the object outside the using statement). c) If you are using unmanaged code/Windows APIs, these are dealt with correctly afterwards (some have clean-up methods to release resources). Hope this helps.
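To make the event-handler pitfall from the answer above concrete, here is a minimal C# sketch (Publisher and Subscriber are hypothetical names, not from any answer):
using System;

public class Publisher
{
    public event EventHandler Changed;
}

public class Subscriber
{
    private readonly Publisher _pub;

    public Subscriber(Publisher pub)
    {
        _pub = pub;
        _pub.Changed += OnChanged; // the publisher now holds a reference to this subscriber
    }

    private void OnChanged(object sender, EventArgs e) { /* react to the event */ }

    public void Unhook()
    {
        _pub.Changed -= OnChanged; // without this, a long-lived publisher keeps this subscriber alive
    }
}
As long as Unhook() is never called, a long-lived Publisher roots every Subscriber attached to it, which is exactly the leak pattern described above.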
A: If you need to diagnose a memory leak in .NET, check these links: http://msdn.microsoft.com/en-us/magazine/cc163833.aspx http://msdn.microsoft.com/en-us/magazine/cc164138.aspx Those articles describe how to create a memory dump of your process and how to analyze it so that you can first determine if your leak is unmanaged or managed, and if it is managed, how to figure out where it is coming from. Microsoft also has a newer tool to assist with generating crash dumps, to replace ADPlus, called DebugDiag. http://www.microsoft.com/downloads/details.aspx?FamilyID=28bd5941-c458-46f1-b24d-f60151d875a3&displaylang=en A: Using CLR Profiler from Microsoft http://www.microsoft.com/downloads/details.aspx?familyid=86ce6052-d7f4-4aeb-9b7a-94635beebdda&displaylang=en is a great way to determine which objects are holding memory, what execution flow leads to the creation of these objects, and also monitoring which objects live where on the heap (fragmentation, LOH, etc.). A: The best explanation of how the garbage collector works is in Jeff Richter's CLR via C# book (Ch. 20). Reading this gives a great grounding for understanding how objects persist. One of the most common causes of rooting objects accidentally is by hooking up events outside a class. If you hook up an external event e.g. SomeExternalClass.Changed += new EventHandler(HandleIt); and forget to unhook it when you dispose, then SomeExternalClass has a ref to your class. As mentioned above, the SciTech memory profiler is excellent at showing you the roots of objects you suspect are leaking. But there is also a very quick way to check a particular type: just use WinDbg (you can even use this in the VS.NET immediate window while attached): .loadby sos mscorwks !dumpheap -stat -type <TypeName> Now do something that you think will dispose the objects of that type (e.g. close a window). It's handy here to have a debug button somewhere that will run System.GC.Collect() a couple of times. Then run !dumpheap -stat -type <TypeName> again. If the number didn't go down, or didn't go down as much as you expect, then you have a basis for further investigation. (I got this tip from a seminar given by Ingo Rammer). A: I guess in a managed environment, a leak would be you keeping an unnecessary reference to a large chunk of memory around. A: The best explanation I've seen is in Chapter 7 of the free Foundations of Programming e-book. Basically, in .NET a memory leak occurs when referenced objects are rooted and thus cannot be garbage collected. This occurs accidentally when you hold on to references beyond the intended scope. You'll know that you have leaks when you start getting OutOfMemoryExceptions or your memory usage goes up beyond what you'd expect (PerfMon has nice memory counters). Understanding .NET's memory model is your best way of avoiding it. Specifically, understanding how the garbage collector works and how references work — again, I refer you to chapter 7 of the e-book. Also, be mindful of common pitfalls, probably the most common being events. If object A is registered to an event on object B, then object A will stick around until object B disappears because B holds a reference to A. The solution is to unregister your events when you're done.
Of course, a good memory profiler will let you see your object graphs and explore the nesting/referencing of your objects to see where references are coming from and what root object is responsible (Red Gate ANTS Profiler, JetBrains dotMemory, and MemProfiler are really good choices, or you can use the text-only WinDbg and SOS, but I'd strongly recommend a commercial/visual product unless you're a real guru). I believe unmanaged code is subject to its typical memory leaks, except that shared references are managed by the garbage collector. I could be wrong about this last point. A: Why do people think that a memory leak in .NET is not the same as any other leak? A memory leak is when you attach to a resource and do not let it go. You can do this both in managed and in unmanaged coding. Regarding .NET, and other programming tools, there have been ideas about garbage collecting, and other ways of minimizing situations that will make your application leak. But the best method of preventing memory leaks is to understand your underlying memory model, and how things work, on the platform you are using. Believing that the GC and other magic will clean up your mess is the short way to memory leaks, which will be difficult to find later. When coding unmanaged, you normally make sure to clean up; you know that the resources you take hold of will be your responsibility to clean up, not the janitor's. In .NET, on the other hand, a lot of people think that the GC will clean up everything. Well, it does some of it for you, but you need to make sure that it is so. .NET does wrap lots of things, so you do not always know if you are dealing with a managed or an unmanaged resource, and you need to make sure you know what you're dealing with. Fonts, GDI resources, Active Directory, databases, etc. are typically things you need to look out for. In managed terms I will put my neck on the line to say it does go away once the process is killed/removed. I see lots of people believe this, and I really hope this will end. You cannot ask the user to terminate your app to clean up your mess! Take a look at a browser, be it IE, FF, etc.; then open, say, Google Reader, let it stay for some days, and look at what happens. If you then open another tab in the browser, surf to some site, then close the tab that hosted the other page that made the browser leak, do you think the browser will release the memory? Not so with IE. On my computer IE will easily eat 1 GiB of memory in a short amount of time (about 3-4 days) if I use Google Reader. Some news pages are even worse. A: One definition is: Unable to release unreachable memory, which can no longer be allocated to a new process during execution of the allocating process. It can mostly be cured by using GC techniques or detected by automated tools. For more information, please visit http://all-about-java-and-weblogic-server.blogspot.in/2014/01/what-is-memory-leak-in-java.html.
{ "language": "en", "url": "https://stackoverflow.com/questions/104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "181" }
Q: Best Subversion clients for Windows Vista (64bit) I've been using TortoiseSVN in a Windows environment for quite some time. It seems very feature-complete and nicely integrated into the Windows shell, and more importantly, it's fairly painless to teach to colleagues with little or no experience with source control. However, since we have moved to Windows Vista 64bit, Tortoise has been very buggy and has seemed to cause lots of explorer.exe abnormalities and crashes. This has happened both with older versions of the software and the latest version (1.5.1 build 13563). I was curious if anyone has suggestions for other Subversion clients that will run on Windows (specifically Vista 64bit). Developers here use a variety of text editors so using Visual Studio or Dreamweaver for SVN is not ideal. I have heard great things about Cornerstone, and would love something similar for Windows if it exists. I'm correlating the Vista/explorer problems with Tortoise because they normally occur when I'm using the functionality in Tortoise. Sometimes bringing up the "merge" screen will cause the GUI to start acting very strange and eventually hang or crash. I did not see 1.5.2 -- I'm installing now, maybe that will fix some of my issues. A: I'll second Diago's answer. I use TortoiseSVN on Vista x64 pretty heavily. I did upgrade directly from an older version to 1.5.2 though, and never used 1.5.1. Have you tried 1.5.2? A: I used to have lots of Explorer crashes (on 32-bit) caused by Tortoise. They seem to have gone away since I used the Include/Exclude path settings in the "Icon Overlays" configuration of TSVN. Constraining icon overlays to specific directories where I keep my source made this much more stable. A: I too get Explorer crashes in Vista (I'm not on the 64-bit version though). I'm using Vista Super Saijen (or whatever they are calling the most expensive version). I'm not having any bugs with Tortoise. My Explorer does, however, crash about every other day (sometimes multiple times a day if it's having an "off" day). I'm not positive it's being caused by TortoiseSVN though. From what I hear, Explorer just crashes a lot in Vista... Have you tried uninstalling Tortoise and using Windows for a day or two and seeing if it still crashes? Do you restart your computer at least once a day (it seems the longer I go between restarts, the worse the crashes get)? A: TortoiseSVN with AnkhSVN for VS 2005 A: Run both the 32- and 64-bit clients; otherwise Explorer instances launched from 32-bit processes (including load and save dialogs) will have no Tortoise menus. Also, upgrade to the latest version (1.5.3 at the time of this answer). A: I have Tortoise installed but rarely use it over SmartSVN. It is a Java-based application and so does not look like a native Windows application, but performs very well. There is a free version with reduced functionality, but the paid-for version is not very expensive ($79) and well worth the money. The biggest benefit I find is a real-time view similar to the "check for modifications" feature in Tortoise, which auto-refreshes every time the UI gets focus. You can easily see what you've changed across your whole source tree. It also has shell integration, although I can't comment on that feature as I haven't installed it because I already had Tortoise installed. A: I have been using the 64-bit version of TortoiseSVN for ages and I have never had issues with it on Windows 64-bit or Vista 64-bit. I am currently not aware of any other similar SVN clients that do work on Vista.
Is it possible the problem could lie within the configuration of TortoiseSVN or even the installation of Vista? Is the problem occurring on Vista native or SP 1? A: TortoiseSVN in combination with VisualSVN for Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Decoding T-SQL CAST in C#/VB.NET Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text)); ...and all it did was return the exact same thing that I put in, since it converted each character into a byte. I realized that I need to parse every two characters into a byte manually, since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++; if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } } txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it? A: Try removing the 0x first and then call Encoding.Unicode.GetString. I think that may work. Essentially: 0x44004500 Remove the 0x, and then always two bytes are one character: 44 00 = D 45 00 = E 6F 00 = o 72 00 = r So it's definitely a Unicode (UTF-16) format with two bytes/character. A: I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget: Convert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber))); From there I simply made a loop to go through all the characters 2 by 2 and get them "hexified" and then translated to a string. To Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex.
Feel free to use/modify as you need.
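For completeness, here is a minimal C# sketch of the whole decode (the hex string is a short hypothetical stand-in for the real payload between 0x and AS NVARCHAR; since the bytes are UTF-16 little-endian, Encoding.Unicode does the final step):
using System;
using System.Text;

class CastDecoder
{
    static void Main()
    {
        string hex = "4400450043004C00410052004500"; // "DECLARE" in UTF-16 LE (hypothetical snippet)
        byte[] bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
        {
            // parse each hex pair into one byte (this is where NumberStyles.HexNumber matters)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        }
        string sql = Encoding.Unicode.GetString(bytes); // two bytes per character
        Console.WriteLine(sql); // prints: DECLARE
    }
}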
{ "language": "en", "url": "https://stackoverflow.com/questions/109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: ASP.NET Site Maps Does anyone have experience creating SQL-based ASP.NET site-map providers? I have the default XML file web.sitemap working properly with my Menu and SiteMapPath controls, but I'll need a way for the users of my site to create and modify pages dynamically. I need to tie page viewing permissions into the standard ASP.NET membership system as well. A: The Jeff Prosise version from MSDN magazine works pretty well, but it has a few flaws: AddNode freaks out with links to external sites on your menu (www.google.com, etc.). Here's my fix in BuildSiteMap(): SiteMapNode node = GetSiteMapNodeFromReader(reader); string url = node.Url; if (url.Contains(":")) { string garbage = Guid.NewGuid().ToString(); // SiteMapNode needs unique URLs node.Url = "~/dummy_" + garbage + ".aspx"; AddNode(node, _root); node.Url = url; } else { AddNode(node, _root); } SQLDependency caching is cool, but if you don't want to make a trip to the DB every time your menu loads (to check to see if the dependency has changed) and your menus don't change very often, then why not use HttpRuntime.Cache instead? public override SiteMapNode RootNode { get { SiteMapNode temp = (SiteMapNode)HttpRuntime.Cache["SomeKeyName"]; if (temp == null) { temp = BuildSiteMap(); HttpRuntime.Cache.Insert("SomeKeyName", temp, null, DateTime.Now.AddHours(1), Cache.NoSlidingExpiration); } return temp; } }
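For reference, the overall shape of such a provider is a subclass of StaticSiteMapProvider. This is only a bare-bones sketch (the root URL and the data-access comment are placeholders; the MSDN article fills in the SQL, caching, and security trimming):
using System.Web;

public class SqlSiteMapProvider : StaticSiteMapProvider
{
    private SiteMapNode _root;

    public override SiteMapNode BuildSiteMap()
    {
        lock (this)
        {
            if (_root != null)
                return _root;

            Clear();
            _root = new SiteMapNode(this, "root", "~/Default.aspx", "Home");
            AddNode(_root);
            // Read (Url, Title, Description) rows from your pages table here
            // and call AddNode(childNode, _root) for each one.
            return _root;
        }
    }

    protected override SiteMapNode GetRootNodeCore()
    {
        return BuildSiteMap();
    }
}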
{ "language": "en", "url": "https://stackoverflow.com/questions/120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Java lib or app to convert CSV to XML file? Is there an existing application or library in Java which will allow me to convert a CSV data file to an XML file? The XML tags would be provided through possibly the first row containing column headings. A: As far as I know, there's no ready-made library to do this for you, but producing a tool capable of translating from CSV to XML should only require you to write a crude CSV parser and hook up JDOM (or your XML Java library of choice) with some glue code. A: There is nothing I know of that can do this without you at least writing a little bit of code... You will need 2 separate libraries: * *A CSV Parser Framework *An XML Serialization Framework The CSV parser I would recommend (unless you want to have a little bit of fun writing your own CSV parser) is OpenCSV (a SourceForge project for parsing CSV data). The XML Serialization Framework should be something that can scale in case you want to transform a large (or huge) CSV file to XML: My recommendation is the Sun Java Streaming XML Parser Framework (see here), which allows pull-parsing AND serialization. A: There is also the good library ServingXML by Daniel Parker, which is able to convert almost any plain-text format to XML and back. The example for your case can be found here; it uses the heading of each field in the CSV file as the XML element name. A: Maybe this might help: JSefa. You can read a CSV file with this tool and serialize it to XML. A: Like the others above, I don't know of any one-step way to do that, but if you are ready to use very simple external libraries, I would suggest: OpenCsv for parsing CSV (small, simple, reliable and easy to use) XStream to parse/serialize XML (very easy to use, and creating fully human-readable XML) Using the same sample data as above, the code would look like: package fr.megiste.test; import java.io.FileReader; import java.io.FileWriter; import java.util.ArrayList; import java.util.List; import au.com.bytecode.opencsv.CSVReader; import com.thoughtworks.xstream.XStream; public class CsvToXml { public static void main(String[] args) { String startFile = "./startData.csv"; String outFile = "./outData.xml"; try { CSVReader reader = new CSVReader(new FileReader(startFile)); String[] line = null; String[] header = reader.readNext(); List<List<String[]>> out = new ArrayList<List<String[]>>(); while((line = reader.readNext())!=null){ List<String[]> item = new ArrayList<String[]>(); for (int i = 0; i < header.length; i++) { // pair each heading with the value from this row String[] keyVal = new String[2]; keyVal[0] = header[i]; keyVal[1] = line[i]; item.add(keyVal); } out.add(item); } reader.close(); XStream xstream = new XStream(); FileWriter writer = new FileWriter(outFile, false); xstream.toXML(out, writer); writer.close(); } catch (Exception e) { e.printStackTrace(); } } } Producing the following result: (XStream allows very fine tuning of the result...)
<list> <list> <string-array> <string>string</string> <string>hello world</string> </string-array> <string-array> <string>float1</string> <string>1.0</string> </string-array> <string-array> <string>float2</string> <string>3.3</string> </string-array> <string-array> <string>integer</string> <string>4</string> </string-array> </list> <list> <string-array> <string>string</string> <string>goodbye world</string> </string-array> <string-array> <string>float1</string> <string>1e9</string> </string-array> <string-array> <string>float2</string> <string>-3.3</string> </string-array> <string-array> <string>integer</string> <string>45</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello again</string> </string-array> <string-array> <string>float1</string> <string>-1</string> </string-array> <string-array> <string>float2</string> <string>23.33</string> </string-array> <string-array> <string>integer</string> <string>456</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello world 3</string> </string-array> <string-array> <string>float1</string> <string>1.40</string> </string-array> <string-array> <string>float2</string> <string>34.83</string> </string-array> <string-array> <string>integer</string> <string>4999</string> </string-array> </list> <list> <string-array> <string>string</string> <string>hello 2 world</string> </string-array> <string-array> <string>float1</string> <string>9981.05</string> </string-array> <string-array> <string>float2</string> <string>43.33</string> </string-array> <string-array> <string>integer</string> <string>444</string> </string-array> </list> </list> A: This may be too basic or limited of a solution, but couldn't you do a String.split() on each line of the file, remembering the result array of the first line to generate the XML, and just spit each line's array data out with the proper XML elements padding each iteration of a loop? A: The Jackson processor family has backends for multiple data formats, not just JSON. This includes both XML (https://github.com/FasterXML/jackson-dataformat-xml) and CSV (https://github.com/FasterXML/jackson-dataformat-csv/) backends. Conversion would rely on reading the input with the CSV backend and writing it using the XML backend. This is easiest to do if you have (or can define) a POJO for per-row (CSV) entries. This is not a strict requirement, as content from CSV may be read "untyped" as well (a sequence of String arrays), but it requires a bit more work on the XML output. For the XML side, you would need a wrapper root object to contain the array or List of objects to serialize. A: I know you asked for Java, but this strikes me as a task well suited to a scripting language. Here is a quick (very simple) solution written in Groovy.
test.csv string,float1,float2,integer hello world,1.0,3.3,4 goodbye world,1e9,-3.3,45 hello again,-1,23.33,456 hello world 3,1.40,34.83,4999 hello 2 world,9981.05,43.33,444 csvtoxml.groovy #!/usr/bin/env groovy def csvdata = [] new File("test.csv").eachLine { line -> csvdata << line.split(',') } def headers = csvdata[0] def dataRows = csvdata[1..-1] def xml = new groovy.xml.MarkupBuilder() // write 'root' element xml.root { dataRows.eachWithIndex { dataRow, index -> // write 'entry' element with 'id' attribute entry(id:index+1) { headers.eachWithIndex { heading, i -> // write each heading with associated content "${heading}"(dataRow[i]) } } } } Writes the following XML to stdout: <root> <entry id='1'> <string>hello world</string> <float1>1.0</float1> <float2>3.3</float2> <integer>4</integer> </entry> <entry id='2'> <string>goodbye world</string> <float1>1e9</float1> <float2>-3.3</float2> <integer>45</integer> </entry> <entry id='3'> <string>hello again</string> <float1>-1</float1> <float2>23.33</float2> <integer>456</integer> </entry> <entry id='4'> <string>hello world 3</string> <float1>1.40</float1> <float2>34.83</float2> <integer>4999</integer> </entry> <entry id='5'> <string>hello 2 world</string> <float1>9981.05</float1> <float2>43.33</float2> <integer>444</integer> </entry> </root> However, the code does very simple parsing (not taking into account quoted or escaped commas) and it does not account for possible absent data. A: For the CSV part, you may use my little open-source library A: I had the same problem and needed an application to convert a CSV file to an XML file for one of my projects, but didn't find anything free and good enough on the net, so I coded my own Java Swing CSVtoXML application. It's available from my website HERE. Hope it will help you. If not, you can easily code your own like I did; the source code is inside the jar file, so modify it as needed if it doesn't fulfil your requirements. A: I have an open-source framework for working with CSV and flat files in general. Maybe it's worth a look: JFileHelpers. With that toolkit you can write code using beans, like: @FixedLengthRecord() public class Customer { @FieldFixedLength(4) public Integer custId; @FieldAlign(alignMode=AlignMode.Right) @FieldFixedLength(20) public String name; @FieldFixedLength(3) public Integer rating; @FieldTrim(trimMode=TrimMode.Right) @FieldFixedLength(10) @FieldConverter(converter = ConverterKind.Date, format = "dd-MM-yyyy") public Date addedDate; @FieldFixedLength(3) @FieldOptional public String stockSimbol; } and then just parse your text files using: FileHelperEngine<Customer> engine = new FileHelperEngine<Customer>(Customer.class); List<Customer> customers = new ArrayList<Customer>(); customers = engine.readResource( "/samples/customers-fixed.txt"); And you'll have a collection of parsed objects. Hope that helps! A: This solution does not need any CSV or XML libraries and, I know, it does not handle any illegal characters and encoding issues, but you might be interested in it as well, provided your CSV input does not break the above-mentioned rules. Attention: You should not use this code unless you know what you are doing or don't have the chance to use another library (possible in some bureaucratic projects)... Use a StringBuffer for older Runtime Environments...
So here we go: BufferedReader reader = new BufferedReader(new InputStreamReader( Csv2Xml.class.getResourceAsStream("test.csv"))); StringBuilder xml = new StringBuilder(); String lineBreak = System.getProperty("line.separator"); String line = null; List<String> headers = new ArrayList<String>(); boolean isHeader = true; int count = 0; int entryCount = 1; xml.append("<root>"); xml.append(lineBreak); while ((line = reader.readLine()) != null) { StringTokenizer tokenizer = new StringTokenizer(line, ","); if (isHeader) { isHeader = false; while (tokenizer.hasMoreTokens()) { headers.add(tokenizer.nextToken()); } } else { count = 0; xml.append("\t<entry id=\""); xml.append(entryCount); xml.append("\">"); xml.append(lineBreak); while (tokenizer.hasMoreTokens()) { xml.append("\t\t<"); xml.append(headers.get(count)); xml.append(">"); xml.append(tokenizer.nextToken()); xml.append("</"); xml.append(headers.get(count)); xml.append(">"); xml.append(lineBreak); count++; } xml.append("\t</entry>"); xml.append(lineBreak); entryCount++; } } xml.append("</root>"); System.out.println(xml.toString()); The input test.csv (stolen from another answer on this page): string,float1,float2,integer hello world,1.0,3.3,4 goodbye world,1e9,-3.3,45 hello again,-1,23.33,456 hello world 3,1.40,34.83,4999 hello 2 world,9981.05,43.33,444 The resulting output: <root> <entry id="1"> <string>hello world</string> <float1>1.0</float1> <float2>3.3</float2> <integer>4</integer> </entry> <entry id="2"> <string>goodbye world</string> <float1>1e9</float1> <float2>-3.3</float2> <integer>45</integer> </entry> <entry id="3"> <string>hello again</string> <float1>-1</float1> <float2>23.33</float2> <integer>456</integer> </entry> <entry id="4"> <string>hello world 3</string> <float1>1.40</float1> <float2>34.83</float2> <integer>4999</integer> </entry> <entry id="5"> <string>hello 2 world</string> <float1>9981.05</float1> <float2>43.33</float2> <integer>444</integer> </entry> </root> A: I don't understand why you would want to do this. It sounds almost like cargo cult coding. Converting a CSV file to XML doesn't add any value. Your program is already reading the CSV file, so arguing that you need XML doesn't work. On the other hand, reading the CSV file, doing something with the values, and then serializing to XML does make sense (well, as much as using XML can make sense... ;)) but you would supposedly already have a means of serializing to XML. A: The big difference that JSefa brings in is that it can serialize your Java objects to CSV/XML/etc. files and can deserialize them back to Java objects. And it's driven by annotations, which gives you a lot of control over the output. JFileHelpers also looks interesting. A: You can do this exceptionally easily using Groovy, and the code is very readable. Basically, the text variable will be written to contacts.xml for each line in the contactData.csv, and the fields array contains each column. def file1 = new File('c:\\temp\\ContactData.csv') def file2 = new File('c:\\temp\\contacts.xml') def reader = new FileReader(file1) def writer = new FileWriter(file2) reader.transformLine(writer) { line -> fields = line.split(',') text = """<CLIENTS> <firstname> ${fields[2]} </firstname> <surname> ${fields[1]} </surname> <email> ${fields[9]} </email> <employeenumber> password </employeenumber> <title> ${fields[4]} </title> <phone> ${fields[3]} </phone> </CLIENTS>""" } A: You could use XSLT. Google it and you will find a few examples e.g.
CSV to XML. If you use XSLT, you can then convert the XML to whatever format you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119" }
Q: How would you access Object properties from within an object method? What is the "purist" or "correct" way to access an object's properties from within an object method that is not a getter/setter method? I know that from outside of the object you should use a getter/setter, but from within would you just do: Java: String property = this.property; PHP: $property = $this->property; or would you do: Java: String property = this.getProperty(); PHP: $property = $this->getProperty(); Forgive me if my Java is a little off, it's been a year since I programmed in Java... EDIT: It seems people are assuming I am talking about private or protected variables/properties only. When I learned OO I was taught to use getters/setters for every single property even if it was public (and actually I was told never to make any variable/property public). So, I may be starting off from a false assumption from the get-go. It appears that people answering this question are maybe saying that you should have public properties and that those don't need getters and setters, which goes against what I was taught, and what I was talking about, although maybe that needs to be discussed as well. That's probably a good topic for a different question though... A: If you mean "most encapsulation" by "purist", then I typically declare all my fields as private and then use "this.field" from within the class itself. For other classes, including subclasses, I access instance state using the getters. A: The question doesn't require an opinion-based answer. It is a subject well covered by computing science for decades, from the principles of high cohesion and low coupling to the SOLID principles. The purist, read correct, OO way is to minimise coupling and maximise cohesion. Therefore both should be avoided, and the Law of Demeter followed by using the Tell Don't Ask approach. Instead of getting the value of the object's property, which tightly couples the two classes, use the object as a parameter, e.g. doSomethingWithProperty() { doSomethingWith( this.property ) ; } Where the property is a native type, e.g. int, use an access method, and name it for the problem domain, not the programming domain. doSomethingWithProperty( this.daysPerWeek() ) ; These will allow you to maintain encapsulation and any post-conditions or dependent invariants. You can also use the setter method to maintain any pre-conditions or dependent invariants; however, don't fall into the trap of naming them setters; go back to the Hollywood Principle for naming when using the idiom. A: It is better to use the accessor methods, even within the object. Here are the points that come to my mind immediately: * *It should be done in the interest of maintaining consistency with accesses made from outside the object. *In some cases, these accessor methods could be doing more than just accessing the field; they could be doing some additional processing (this is rare though). If this is the case, accessing the field directly would mean that you are missing that additional processing, and your program could go awry if this processing is always to be done during those accesses. A: I may be wrong because I'm an autodidact, but I NEVER use public properties in my Java classes; they are always private or protected, so that outside code must access them through getters/setters. It's better for maintenance/modification purposes. And for inside-class code...
If the getter method is trivial, I use the property directly, but I always use the setter methods because I could easily add code to fire events if I wish. A: I've found using setters/getters makes my code easier to read. I also like the control it gives when other classes use the methods, and if I change the data the property will store. A: Private fields with public or protected properties. Access to the values should go through the properties, and be copied to a local variable if they will be used more than once in a method. If and ONLY if you have the rest of your application so totally tweaked, rocked out, and otherwise optimized to where accessing values by going through their associated properties has become a bottleneck (and that will never EVER happen, I guarantee) should you even begin to consider letting anything other than the properties touch their backing variables directly. .NET developers can use automatic properties to enforce this, since you can't even see the backing variables at design time. A: If I don't edit the property, I'll use a public method get_property() unless it's a special occasion such as a MySQLi object inside another object, in which case I'll just make the property public and refer to it as $obj->object_property. Inside the object it's always $this->property for me. A: It depends. It's more a style issue than anything else, and there is no hard rule. A: This has religious war potential, but it seems to me that if you're using a getter/setter, you should use it internally as well - using both will lead to maintenance problems down the road (e.g. somebody adds code to a setter that needs to run every time that property is set, and the property is being set internally w/o that setter being called). A: Well, it seems with C# 3.0 properties' default implementation, the decision is taken for you; you HAVE to set the property using the (possibly private) property setter. I personally only use the private member-behind when not doing so would cause the object to fall into a less than desirable state, such as when initializing or when caching/lazy loading is involved. A: I like the answer by cmcculloh, but it seems like the most correct one is the answer by Greg Hurlman. Use getter/setter all the time if you started using them from the get-go and/or you are used to working with them. As an aside, I personally find that using getter/setter makes the code easier to read and to debug later on. A: As stated in some of the comments: Sometimes you should, sometimes you shouldn't. The great part about private variables is that you are able to see all the places they are used when you change something. If your getter/setter does something you need, use it. If it doesn't matter, you decide. The opposite case could be made that if you use the getter/setter and somebody changes the getter/setter, they have to analyze all the places the getter and setter are used internally to see if it messes something up. A: Personally, I feel like it's important to remain consistent. If you have getters and setters, use them. The only time I would access a field directly is when the accessor has a lot of overhead. It may feel like you're bloating your code unnecessarily, but it can certainly save a whole lot of headache in the future. The classic example: Later on, you may desire to change the way that field works. Maybe it should be calculated on-the-fly or maybe you would like to use a different type for the backing store.
If you are accessing properties directly, a change like that can break an awful lot of code in one swell foop. A: I'm fairly surprised at how unanimous the sentiment is that getters and setters are fine and good. I suggest the incendiary article by Allen Holub "Getters And Setters Are Evil". Granted, the title is for shock value, but the author makes valid points. Essentially, if you have getters and setters for each and every private field, you are making those fields as good as public. You'd be very hard-pressed to change the type of a private field without ripple effects to every class that calls that getter. Moreover, from a strictly OO point of view, objects should be responding to messages (methods) that correspond to their (hopefully) single responsibility. The vast majority of getters and setters don't make sense for their constituent objects; Pen.dispenseInkOnto(Surface) makes more sense to me than Pen.getColor(). Getters and setters also encourage users of the class to ask the object for some data, perform a calculation, and then set some other value in the object, better known as procedural programming. You'd be better served to simply tell the object to do what you were going to in the first place; also known as the Information Expert idiom. Getters and setters, however, are necessary evils at the boundary of layers -- UI, persistence, and so forth. Restricted access to a class's internals, such as C++'s friend keyword, Java's package-protected access, .NET's internal access, and the Friend Class Pattern can help you reduce the visibility of getters and setters to only those who need them. A: It depends on how the property is used. For example, say you have a student object that has a name property. You could use your Get method to pull the name from the database, if it hasn't been retrieved already. This way you are reducing unnecessary calls to the database. Now let's say you have a private integer counter in your object that counts the number of times the name has been called. You may want to not use the Get method from inside the object because it would produce an invalid count. A: PHP offers a myriad of ways to handle this, including magic methods __get and __set, but I prefer explicit getters and setters. Here's why: * *Validation can be placed in setters (and getters for that matter) *Intellisense works with explicit methods *No question whether a property is read only, write only or read-write *Retrieving virtual properties (i.e., calculated values) looks the same as regular properties *You can easily set an object property that is never actually defined anywhere, which then goes undocumented A: Am I just going overboard here? Perhaps ;) Another approach would be to utilize a private/protected method to actually do the getting (caching/db/etc), and a public wrapper for it that increments the count: PHP: public function getName() { $this->incrementNameCalled(); return $this->_getName(); } protected function _getName() { return $this->name; } and then from within the object itself: PHP: $name = $this->_getName(); This way you can still use that first argument for something else (like sending a flag for whether or not to use cached data here perhaps). A: I must be missing the point here: why would you use a getter inside an object to access a property of that object? Taking this to its conclusion, the getter should call a getter, which should call a getter.
So I'd say that inside an object method you should access a property directly, especially seeing as calling another method in that object (which will just access the property directly anyway and then return it) is just a pointless, wasteful exercise (or have I misunderstood the question).
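The student/counter scenario from a few answers up translates into C# roughly like this (a sketch; all names are made up):
public class Student
{
    private string _name;
    private int _nameReadCount; // how many times Name was read through the getter

    public string Name
    {
        get { _nameReadCount++; return _name; } // side effect lives in the accessor
        set { _name = value; }
    }

    public int NameReadCount
    {
        get { return _nameReadCount; }
    }

    private string PeekName()
    {
        return _name; // internal direct access bypasses the getter, so the count stays honest
    }
}
Whether internal code should call PeekName() or the Name property is precisely the trade-off the answers above are arguing about.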
{ "language": "en", "url": "https://stackoverflow.com/questions/126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "105" }
Q: How to export data from SQL Server 2005 to MySQL I've been banging my head against SQL Server 2005 trying to get a lot of data out. I've been given a database with nearly 300 tables in it and I need to turn this into a MySQL database. My first call was to use bcp, but unfortunately it doesn't produce valid CSV - strings aren't encapsulated, so you can't deal with any row that has a string with a comma in it (or whatever you use as a delimiter) and I would still have to hand-write all of the create table statements, as obviously CSV doesn't tell you anything about the data types. What would be better is if there was some tool that could connect to both SQL Server and MySQL, then do a copy. You lose views, stored procedures, triggers, etc., but it isn't hard to copy a table that only uses base types from one DB to another... is it? Does anybody know of such a tool? I don't mind how many assumptions it makes or what simplifications occur, as long as it supports integer, float, datetime and string. I have to do a lot of pruning, normalising, etc. anyway so I don't care about keeping keys, relationships or anything like that, but I need the initial set of data in fast! A: SQL Server 2005 "Standard", "Developer" and "Enterprise" editions have SSIS, which replaced DTS from SQL Server 2000. SSIS has a built-in connection to its own DB, and you can find a connection that someone else has written for MySQL. Here is one example. Once you have your connections, you should be able to create an SSIS package that moves data between the two. I didn't have to move data from SQL Server to MySQL, but I imagine that once the MySQL connection is installed, it works the same as moving data between two SQL Server DBs, which is pretty straightforward. A: Using MSSQL Management Studio I've transitioned tables with the MySQL OLE DB. Right-click on your database and go to "Tasks->Export Data"; from there you can specify an MSSQL OLE DB source and the MySQL OLE DB source, and create the column mappings between the two data sources. You'll most likely want to set up the database and tables in advance on the MySQL destination (the export will want to create the tables automatically, but this often results in failure). You can quickly create the tables in MySQL using "Tasks->Generate Scripts" by right-clicking on the database. Once your creation scripts are generated, you'll need to step through and search/replace to convert keywords and types from MSSQL to MySQL. Of course you could also back up the database as normal and find a utility which will restore the MSSQL backup on MySQL. I'm not sure if one exists, however. A: Rolling your own PHP solution will certainly work, though I'm not sure if there is a good way to automatically duplicate the schema from one DB to the other (maybe this was your question). If you are just copying data, and/or you need custom code anyway to convert between modified schemas between the two DBs, I would recommend using PHP 5.2+ and the PDO libraries. You'll be able to connect using PDO ODBC (and use MSSQL drivers). I had a lot of problems getting large text fields and multi-byte characters from MSSQL into PHP using other libraries. A: Another tool to try would be the SQLMaestro suite. It is a little tricky nailing down the precise tool, but they have a variety of tools, both free and for purchase, that handle a wide variety of tasks for multiple database platforms. I'd suggest trying the Data Wizard tool first for MySQL, since I believe that will have the proper "import" tool you need.
A: The best way that I have found is the MySQL Migration Toolkit provided by MySQL. I have used it successfully for some large migration projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: XSD DataSets and ignoring foreign keys I have a pretty standard table set-up in a current application using the .NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, ID, and notes. This is all set up and functioning in my SQL Server. When I use the XSD tool, I can drag both tables in at once and it auto-detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data. However, when I go to my administrative page to modify the department data, I typically do something like this: Dim dtDepartment As New DepartmentDataTable() Dim taDepartment As New DepartmentTableAdapter() taDepartment.Fill(dtDepartment) However, at this point an exception is thrown saying, in effect, that a foreign key reference is broken; I'm guessing this is because I don't have the Contract DataTable filled. How can I fix this problem? I know I can simply remove the foreign key from the XSD to make things work fine, but having the additional integrity check there and having the XSD schema match the SQL schema in the database is nice. A: You can try turning Check-constraints off on the DataSet (it's in its properties), or altering the properties of that relationship and changing the key to a simple reference - up to you.
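If you want to keep the relation in the XSD and just skip the check on this page, one option is the DataSet's EnforceConstraints property; a minimal C# sketch, assuming a typed DataSet named ContractsDataSet that contains both tables:
ContractsDataSet ds = new ContractsDataSet();
ds.EnforceConstraints = false; // suspend constraint checking while only one table is filled
new DepartmentTableAdapter().Fill(ds.Department);
// ds.EnforceConstraints = true; // re-enable once the related tables are loaded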
{ "language": "en", "url": "https://stackoverflow.com/questions/134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Compressing / Decompressing Folders & Files Does anyone know of a good way to compress or decompress files and folders in C# quickly? Handling large files might be necessary. A: My answer would be close your eyes and opt for DotNetZip. It's been tested by a large community. A: GZipStream is a really good utility to use. A: This is very easy to do in Java, and as stated above you can reach into the java.util.zip libraries from C#. For references see: java.util.zip javadocs sample code I used this a while ago to do a deep (recursive) zip of a folder structure, but I don't think I ever used the unzipping. If I'm so motivated I may pull that code out and edit it into here later. A: Another good alternative is DotNetZip. A: The .NET 2.0 framework namespace System.IO.Compression supports GZip and Deflate algorithms. Here are two methods that compress and decompress a byte stream which you can get from your file object. You can substitute DeflateStream for GZipStream in the methods below to use that algorithm. This still leaves the problem of handling files compressed with different algorithms though. public static byte[] Compress(byte[] data) { MemoryStream output = new MemoryStream(); GZipStream gzip = new GZipStream(output, CompressionMode.Compress, true); gzip.Write(data, 0, data.Length); gzip.Close(); return output.ToArray(); } public static byte[] Decompress(byte[] data) { MemoryStream input = new MemoryStream(); input.Write(data, 0, data.Length); input.Position = 0; GZipStream gzip = new GZipStream(input, CompressionMode.Decompress, true); MemoryStream output = new MemoryStream(); byte[] buff = new byte[64]; int read = -1; read = gzip.Read(buff, 0, buff.Length); while (read > 0) { output.Write(buff, 0, read); read = gzip.Read(buff, 0, buff.Length); } gzip.Close(); return output.ToArray(); } A: I've always used the SharpZip Library. Here's a link A: As of .NET 1.1 the only available method is reaching into the Java libraries. Using the Zip Classes in the J# Class Libraries to Compress Files and Data with C# Not sure if this has changed in recent versions. A: You can use a 3rd-party library such as SharpZip as Tom pointed out. Another way (without going 3rd-party) is to use the Windows Shell API. You'll need to set a reference to the Microsoft Shell Controls and Automation COM library in your C# project.
Gerald Gibson has an example at: Internet Archive's copy of the dead page A: You can create a zip file with this method: public async Task<string> CreateZipFile(string sourceDirectoryPath, string name) { var path = HostingEnvironment.MapPath(TempPath) + name; await Task.Run(() => { if (File.Exists(path)) File.Delete(path); ZipFile.CreateFromDirectory(sourceDirectoryPath, path); }); return path; } and then you can unzip a zip file with these methods: 1- This method works with a zip file path public async Task ExtractZipFile(string filePath, string destinationDirectoryName) { await Task.Run(() => { var archive = ZipFile.Open(filePath, ZipArchiveMode.Read); foreach (var entry in archive.Entries) { entry.ExtractToFile(Path.Combine(destinationDirectoryName, entry.FullName), true); } archive.Dispose(); }); } 2- This method works with a zip file stream public async Task ExtractZipFile(Stream zipFile, string destinationDirectoryName) { string filePath = HostingEnvironment.MapPath(TempPath) + Utility.GetRandomNumber(1, int.MaxValue); using (FileStream output = new FileStream(filePath, FileMode.Create)) { await zipFile.CopyToAsync(output); } await Task.Run(() => ZipFile.ExtractToDirectory(filePath, destinationDirectoryName)); await Task.Run(() => File.Delete(filePath)); }
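A quick usage sketch of the two helpers above (the folder paths are hypothetical, and TempPath is assumed to be configured):
// zip a folder, then restore it somewhere else
string zipPath = await CreateZipFile(@"C:\data\reports", "reports.zip");
await ExtractZipFile(zipPath, @"C:\data\restored");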
{ "language": "en", "url": "https://stackoverflow.com/questions/145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: How do I track file downloads I have a website that plays mp3s in a flash player. If a user clicks 'play' the flash player automatically downloads an mp3 and starts playing it. Is there an easy way to track how many times a particular song clip (or any binary file) has been downloaded? Is the play link a link to the actual mp3 file or to some JavaScript code that pops up a player? If the latter, you can easily add your own logging code in there to track the number of hits to it. If the former, you'll need something that can track the web server log itself and make that distinction. My hosting plan comes with Webalizer, which does this nicely. It's JavaScript code, so that answers that. However, it would be nice to know how to track downloads using the other method (without switching hosts). A: Is the play link a link to the actual mp3 file or to some JavaScript code that pops up a player? If the latter, you can easily add your own logging code in there to track the number of hits to it. If the former, you'll need something that can track the web server log itself and make that distinction. My hosting plan comes with Webalizer, which does this nicely. A: The funny thing is I wrote a PHP media gallery for all my music two days ago. I had a similar problem. I'm using http://musicplayer.sourceforge.net/ for the player. And the playlist is built via PHP. All music requests go to a script called xfer.php?file=WHATEVER $filename = base64_url_decode($_REQUEST['file']); header("Cache-Control: public"); header('Content-disposition: attachment; filename='.basename($filename)); header("Content-Transfer-Encoding: binary"); header('Content-Length: '. filesize($filename)); // Put your file-counting code here, either a DB or static files // readfile($filename); // and send the file to the user function base64_url_decode($input) { return base64_decode(strtr($input, '-_,', '+/=')); } And when you call files use something like: function base64_url_encode($input) { return strtr(base64_encode($input), '+/=', '-_,'); } http://us.php.net/manual/en/function.base64-encode.php If you are using some JavaScript or a flash player (JW Player for example) that requires the actual link of an mp3 file or whatever, you can append the text "&type=.mp3" so the final link becomes something like: "www.example.com/xfer.php?file=34842ffjfjxfh&type=.mp3". That way it looks like it ends with an mp3 extension without affecting the file link. A: Is there a database for your music library? If there is any server code that runs when downloading the mp3 then you can add extra code there to increment the play count. You could also have JavaScript make a second request to increment the play count, but this could lead to people/robots falsely incrementing counts. I used to work for an internet-radio site and we used separate tables to track the time every song was played. Our streams were powered by a Perl script running Icecast, so we triggered a database request every time a new track started playing. Then to compute the play count we would run a query to count how many times a song's id was in the play log. A: The problem I had with things like AWStats / reading through web server logs is that large downloads can often be split into data chunks within the logs. This makes reconciling the exact number of downloads quite hard. I'd suggest Google Analytics Event Tracking, as this will register once per click on a download link. A: Use your httpd log files.
Install http://awstats.sourceforge.net/ A: Use bash: grep mp3 /var/log/httpd/access_log | wc -l A: If your song / binary file was served by Apache, you can easily grep the access_log to find out the number of downloads. A simple post-logrotate script can grep the logs and maintain your count statistics in a DB. This has a performance advantage by not being in your live request code path. Doing non-critical things like stats offline is a good idea to scale your website to a large number of users. A: You could even set up an Apache .htaccess directive that converts *.mp3 requests into the querystring dubayou is working with. It might be an elegant way to keep the direct request and still be able to slipstream the logging function into the response.
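That .htaccess idea could look roughly like this with mod_rewrite (a sketch only; note that the xfer.php from the earlier answer expects a base64-encoded name, so it would need a small change to accept a plain path like this):
RewriteEngine On
# hand every mp3 request to the counting script, keeping any existing query string
RewriteRule ^(.+\.mp3)$ /xfer.php?file=$1 [L,QSA]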
{ "language": "en", "url": "https://stackoverflow.com/questions/146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: How do I sync the SVN revision number with my ASP.NET web site? Stack Overflow has a Subversion revision number at the bottom: svn revision: 679 I want to use such automatic versioning with my .NET Web Site/Application, Windows Forms, WPF projects/solutions. How do I implement this? A: $rev and others like it are revisions for the individual files, so they won't change unless the file changes. The number on the webpage is (most likely, I'm assuming here) the svn revision number for the whole project. That is different from the file revisions, which others have been pointing to. In this case I assume that CCNET is pulling the revision number of the project and rewriting a part of the webpage with that number. Any CI solution should be able to do this; I set this up myself with CCNET and TeamCity (although not webpages, but automatic versioning of deployment/assembly versions). In order for you to do this, use a CI solution that supports it, or use your build process (MSBuild/NAnt) to store that version and write it to the files before "deploying" it. A: To add to @BradWilson's answer: "You could also get your source control provider to provide the source revision number if you want" To connect Subversion and MSBuild: MSBuild Community Tasks Project A: Looks like Jeff is using CruiseControl.NET based on some leafing through the podcast transcripts. This seems to have automated deployment capabilities from source control to production. Might this be where the insertion is happening? A: We do this with xUnit.net for our automated builds. We use CruiseControl.net (and are trying out TeamCity). The MSBuild task that we run for continuous integration automatically changes the build number for us, so the resulting build ZIP file contains a properly versioned set of DLLs and EXEs. Our MSBuild file contains a UsingTask reference for a DLL which does regular expression replacements: (you're welcome to use this DLL, as it's covered by the MS-PL license as well) <UsingTask AssemblyFile="3rdParty\CodePlex.MSBuildTasks.dll" TaskName="CodePlex.MSBuildTasks.RegexReplace"/> Next, we extract the build number, which is provided automatically by the CI system. You could also get your source control provider to provide the source revision number if you want, but we found the build # in the CI system was more useful, because not only can you see the integration results by the CI build number, that also provides a link back to the changeset(s) which were included in the build. <!-- Cascading attempts to find a build number --> <PropertyGroup Condition="'$(BuildNumber)' == ''"> <BuildNumber>$(BUILD_NUMBER)</BuildNumber> </PropertyGroup> <PropertyGroup Condition="'$(BuildNumber)' == ''"> <BuildNumber>$(ccnetlabel)</BuildNumber> </PropertyGroup> <PropertyGroup Condition="'$(BuildNumber)' == ''"> <BuildNumber>0</BuildNumber> </PropertyGroup> (We try BUILD_NUMBER, which is from TeamCity, then ccnetlabel, which is from CC.net, and if neither is present, we default to 0, so that we can test the automated build script manually.) Next, we have a task which sets the build number into a GlobalAssemblyInfo.cs file that we link into all of our projects: <Target Name="SetVersionNumber"> <RegexReplace Pattern='AssemblyVersion\("(\d+\.\d+\.\d+)\.\d+"\)' Replacement='AssemblyVersion("$1.$(BuildNumber)")' Files='GlobalAssemblyInfo.cs'/> <Exec Command="attrib -r xunit.installer\App.manifest"/> </Target> This finds the AssemblyVersion attribute, and replaces the a.b.c.d version number with a.b.c.BuildNumber.
We will usually leave the source checked into the tree with the first three parts of the build number fixed, and the fourth at zero (e.g., today it's 1.0.2.0). In your build process, make sure the SetVersionNumber task precedes your build task. At the end, we use our Zip task to zip up the build results so that we have a history of the binaries for every automated build.

A: You can do it by adding the following anywhere in your code

$Id:$

So for example @Jeff did:

<div id="svnrevision">svn revision: $Id:$</div>

and when the file is checked in, Subversion replaces $Id:$ with the revision information for that file (note that keyword expansion has to be enabled by setting the svn:keywords property on the file). I also found this reference. There is also $Date:$, $Rev:$, $Revision:$

A: If you're using ASP.Net MVC (as StackOverflow does), I've written an easy to follow 3-step guide on how to automatically get and display the latest SVN revision. The guide was inspired by thinking to myself about this very question! :o)

A: @Balloon If you are using TortoiseSVN, you can use the packaged SubWCRev program. It queries a working copy and tells you just the highest revision number. Admittedly, this seems to be a client-side approach to a server-side problem, but since it's a nice command line program, you should be able to capture its output for use fairly easily.
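To make that capture concrete, here is a minimal sketch of using SubWCRev as a pre-build step. SubWCRev ships with TortoiseSVN and substitutes $WCREV$ (the highest revision of the working copy) into a copy of a template file; the file names and paths below are illustrative assumptions, not part of any tool mentioned above.

RevisionFooter.tmpl, checked into the repository:

<div id="svnrevision">svn revision: $WCREV$</div>

Pre-build command (Windows batch), writing the substituted copy next to the site:

SubWCRev.exe C:\work\mysite C:\work\mysite\RevisionFooter.tmpl C:\work\mysite\RevisionFooter.html

The generated RevisionFooter.html can then be pulled into the page footer, so the number updates on every build rather than only when an individual file changes.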
{ "language": "en", "url": "https://stackoverflow.com/questions/163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Embedding Windows Media Player for all browsers Edit: This question was written in 2008, which was like 3 internet ages ago. If this question is still relevant to your environment, please accept my condolences. Everyone else should convert into a format supported by your browsers (that would be H.264 if Internet Explorer is needed, and probably AV1 or VP8/VP9 if not) and use the <video> element.

We are using WMV videos on an internal site, and we are embedding them into web sites. This works quite well on Internet Explorer, but not on Firefox. I've found ways to make it work in Firefox, but then it stops working in Internet Explorer.

We do not want to use Silverlight just yet, especially since we cannot be sure that all clients will be running Windows XP with Windows Media Player installed.

Is there some sort of universal code that embeds WMP into both Internet Explorer and Firefox, or do we need to implement some user-agent detection and deliver different HTML for different browsers?

A: Use the following. It works in Firefox and Internet Explorer.

<object id="MediaPlayer1" width="690" height="500"
        classid="CLSID:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
        codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
        standby="Loading Microsoft® Windows® Media Player components..."
        type="application/x-oleobject">
  <param name="FileName" value='<%= GetSource() %>' />
  <param name="AutoStart" value="True" />
  <param name="DefaultFrame" value="mainFrame" />
  <param name="ShowStatusBar" value="0" />
  <param name="ShowPositionControls" value="0" />
  <param name="showcontrols" value="0" />
  <param name="ShowAudioControls" value="0" />
  <param name="ShowTracker" value="0" />
  <param name="EnablePositionControls" value="0" />
  <!-- BEGIN PLUG-IN HTML FOR FIREFOX-->
  <embed type="application/x-mplayer2"
         pluginspage="http://www.microsoft.com/Windows/MediaPlayer/"
         src='<%= GetSource() %>'
         align="middle" width="600" height="500"
         defaultframe="rightFrame" id="MediaPlayer2" />
</object>

And in JavaScript,

function playVideo() {
    try {
        if (-1 != navigator.userAgent.indexOf("MSIE")) {
            var obj = document.getElementById("MediaPlayer1");
            obj.Play();
        }
        else {
            var player = document.getElementById("MediaPlayer2");
            player.controls.play();
        }
    }
    catch (error) {
        alert(error);
    }
}

A: Elizabeth Castro has an interesting article on this problem: Bye Bye Embed. Worth a read on how she attacked this problem, as well as handling QuickTime content.

A: You could use conditional comments to get IE and Firefox to do different things

<![if !IE]>
<p>Firefox only code</p>
<![endif]>

<!--[if IE]>
<p>Internet Explorer only code</p>
<![endif]-->

The browsers themselves will ignore code that isn't meant for them to read.

A: The best way to deploy video on the web is using Flash - it's much easier to embed cleanly into a web page and will play on more or less any browser and platform combination. The only reason to use Windows Media Player is if you're streaming content and you need extraordinarily strong digital rights management, and even then providers are now starting to use Flash even for these. See BBC's iPlayer for a superb example.

I would suggest that you switch to Flash even for internal use. You never know who is going to need to access it in the future, and this will give you the best possible future compatibility.

EDIT - March 20 2013. Interesting how these old questions resurface from time to time! How different the world is today and how dated this all seems.
I would not recommend a Flash-only route today by any means - best practice these days would probably be to use HTML 5 to embed H264-encoded video, with a Flash fallback as described here: http://diveintohtml5.info/video.html

A: The following works for me in Firefox and Internet Explorer:

<object id="mediaplayer"
        classid="clsid:22d6f312-b0f6-11d0-94ab-0080c74c7e95"
        codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#version=5,1,52,701"
        standby="loading microsoft windows media player components..."
        type="application/x-oleobject" width="320" height="310">
  <param name="filename" value="./test.wmv">
  <param name="animationatstart" value="true">
  <param name="transparentatstart" value="true">
  <param name="autostart" value="true">
  <param name="showcontrols" value="true">
  <param name="ShowStatusBar" value="true">
  <param name="windowlessvideo" value="true">
  <embed src="./test.wmv" autostart="true" showcontrols="true" showstatusbar="1"
         bgcolor="white" width="320" height="310">
</object>

A: Encoding Flash video is actually very easy with ffmpeg. You can use one command to convert from just about any video format; ffmpeg is smart enough to figure the rest out, and it'll use every processor on your machine. Invoking it is easy:

ffmpeg -i input.avi output.flv

ffmpeg will guess at the bitrate you want, but if you'd like to specify one, you can use the -b option, so -b 500000 is 500kbps for example. There's a ton of options of course, but I generally get good results without much tinkering. This is a good place to start if you're looking for more options: video options.

You don't need a special web server to show Flash video. I've done just fine by simply pushing .flv files up to a standard web server, and linking to them with a good swf player, like flowplayer.

WMVs are fine if you can be sure that all of your users will always use [a recent, up to date version of] Windows only, but even then, Flash is often a better fit for the web. The player is even extremely skinnable and can be controlled with JavaScript.

A: I found a good article about using the WMP with Firefox on MSDN. Based on MSDN's article and after some trial and error, I found using JavaScript is better than using conditional comments or nested "EMBED/OBJECT" tags.
I made a JS function that generates a WMP object based on given arguments:

<script type="text/javascript">
function generateWindowsMediaPlayer(
    holderId, // String
    height,   // Number
    width,    // Number
    videoUrl  // String
    // you can declare more arguments for more flexibility
) {
    var holder = document.getElementById(holderId);
    var player = '<object ';
    player += 'height="' + height.toString() + '" ';
    player += 'width="' + width.toString() + '" ';
    videoUrl = encodeURI(videoUrl); // Encode for special characters

    if (navigator.userAgent.indexOf("MSIE") < 0) {
        // Chrome, Firefox, Opera, Safari
        //player += 'type="application/x-ms-wmp" '; //Old Edition
        player += 'type="video/x-ms-wmp" '; //New Edition, suggested by MNRSullivan (Read Comments)
        player += 'data="' + videoUrl + '" >';
    }
    else {
        // Internet Explorer
        player += 'classid="clsid:6BF52A52-394A-11d3-B153-00C04F79FAA6" >';
        player += '<param name="url" value="' + videoUrl + '" />';
    }

    player += '<param name="autoStart" value="false" />';
    player += '<param name="playCount" value="1" />';
    player += '</object>';

    holder.innerHTML = player;
}
</script>

Then I used that function by writing some markup and inline JS like this:

<div id='wmpHolder'></div>

<script type="text/javascript">
window.addEventListener('load', function () {
    generateWindowsMediaPlayer('wmpHolder', 240, 320, 'http://mysite.com/path/video.ext');
});
</script>

You can use jQuery.ready instead of the window load event to make the code more backward-compatible and cross-browser. I tested the code in IE 9-10, Chrome 27, Firefox 21, Opera 12 and Safari 5, on Windows 7/8.

A: I have found something that actually works in both Firefox and IE, on Elizabeth Castro's site (thanks to the link on this site) - I have tried all other versions here, but could not make them work in both browsers

<object classid="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" id="player" width="320" height="260">
  <param name="url" value="http://www.sarahsnotecards.com/catalunyalive/fishstore.wmv" />
  <param name="src" value="http://www.sarahsnotecards.com/catalunyalive/fishstore.wmv" />
  <param name="showcontrols" value="true" />
  <param name="autostart" value="true" />
  <!--[if !IE]>-->
  <object type="video/x-ms-wmv"
          data="http://www.sarahsnotecards.com/catalunyalive/fishstore.wmv"
          width="320" height="260">
    <param name="src" value="http://www.sarahsnotecards.com/catalunyalive/fishstore.wmv" />
    <param name="autostart" value="true" />
    <param name="controller" value="true" />
  </object>
  <!--<![endif]-->
</object>

Check her site out: http://www.alistapart.com/articles/byebyeembed/ and the version with the classid in the initial object tag

A: May I suggest the jQuery Media Plugin? Provides embed code for all kinds of video, not just WMV, and does browser detection, keeping all that messy switch/case logic out of your templates.

A: December 2020:

*We now have Firefox 83.0 and Chrome 87.0
*Internet Explorer is dead, it has been replaced by the new Chromium-based Edge 87.0
*Silverlight is dead
*Windows XP is dead
*WMV is not a standard: https://www.w3schools.com/html/html_media.asp

To answer the question:

*You have to convert your WMV file to another format: MP4, WebM or Ogg video.
*Then embed it in your page with the HTML 5 <video> element.

I think this question should be closed.
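For anyone landing here now, a minimal sketch of the <video> route recommended above. The file names are placeholders; the element, the multiple-source fallback and the controls attribute are standard HTML5, and the browser simply plays the first source it supports.

<video width="320" height="260" controls>
  <source src="fishstore.mp4" type="video/mp4">
  <source src="fishstore.webm" type="video/webm">
  <!-- shown only by browsers without HTML5 video support -->
  <p>Your browser does not support HTML5 video. <a href="fishstore.mp4">Download the clip</a> instead.</p>
</video>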
{ "language": "en", "url": "https://stackoverflow.com/questions/164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: How to do version control for SQL Server database? I want to get my databases under version control. I'll always want to have at least some data in there (as alumb mentions: user types and administrators). I'll also often want a large collection of generated test data for performance measurements. How would I apply version control to my database?

A: It's simple.

*When the base project is ready, you must create a full database script. This script is committed to SVN. It is the first version.
*After that, all developers create change scripts (ALTER..., new tables, sprocs, etc.).
*When you need the current version, you should execute all new change scripts.
*When the app is released to production, you go back to 1 (but then it will be a successive version, of course).

NAnt will help you to execute those change scripts. :) And remember. Everything works fine when there is discipline. Every time a database change is committed, the corresponding functions in code are committed too.

A: If you have a small database and you want to version the entire thing, this batch script might help. It detaches, compresses, and checks an MSSQL database MDF file into Subversion. If you mostly want to version your schema and just have a small amount of reference data, you can possibly use SubSonic Migrations to handle that. The benefit there is that you can easily migrate up or down to any specific version.

A: Because our app has to work across multiple RDBMSs, we store our schema definition in version control using the database-neutral Torque format (XML). We also version-control the reference data for our database in XML format as follows (where "Relationship" is one of the reference tables):

<Relationship RelationshipID="1" InternalName="Manager"/>
<Relationship RelationshipID="2" InternalName="Delegate"/>

etc. We then use home-grown tools to generate the schema upgrade and reference data upgrade scripts that are required to go from version X of the database to version X + 1.

A: We don't store the database schema, we store the changes to the database. What we do is store the schema changes so that we can build a change script for any version of the database and apply it to our customer's databases. I wrote a database utility app that gets distributed with our main application that can read that script and know which updates need to be applied. It also has enough smarts to refresh views and stored procedures as needed.

A: To make the dump to a source code control system that little bit faster, you can see which objects have changed since last time by using the version information in sysobjects.

Setup: Create a table in each database you want to check incrementally to hold the version information from the last time you checked it (empty on the first run). Clear this table if you want to re-scan your whole data structure.

IF ISNULL(OBJECT_ID('last_run_sysversions'), 0) <> 0 DROP TABLE last_run_sysversions
CREATE TABLE last_run_sysversions (
    name varchar(128),
    id int,
    base_schema_ver int,
    schema_ver int,
    type char(2)
)

Normal running mode: You can take the results from this sql, and generate sql scripts for just the ones you're interested in, and put them into a source control of your choice.
IF ISNULL(OBJECT_ID('tempdb.dbo.#tmp'), 0) <> 0 DROP TABLE #tmp
CREATE TABLE #tmp (
    name varchar(128),
    id int,
    base_schema_ver int,
    schema_ver int,
    type char(2)
)

SET NOCOUNT ON

-- Insert the values from the end of the last run into #tmp
INSERT #tmp (name, id, base_schema_ver, schema_ver, type)
SELECT name, id, base_schema_ver, schema_ver, type FROM last_run_sysversions

DELETE last_run_sysversions
INSERT last_run_sysversions (name, id, base_schema_ver, schema_ver, type)
SELECT name, id, base_schema_ver, schema_ver, type FROM sysobjects

-- This next bit lists all differences to scripts.
SET NOCOUNT OFF

--Renamed.
SELECT 'renamed' AS ChangeType, t.name, o.name AS extra_info, 1 AS Priority
FROM sysobjects o INNER JOIN #tmp t ON o.id = t.id
WHERE o.name <> t.name /*COLLATE*/
  AND o.type IN ('TR', 'P' ,'U' ,'V')
UNION
--Changed (using alter)
SELECT 'changed' AS ChangeType, o.name /*COLLATE*/, 'altered' AS extra_info, 2 AS Priority
FROM sysobjects o INNER JOIN #tmp t ON o.id = t.id
WHERE ( o.base_schema_ver <> t.base_schema_ver OR o.schema_ver <> t.schema_ver )
  AND o.type IN ('TR', 'P' ,'U' ,'V')
  AND o.name NOT IN (
    SELECT oi.name
    FROM sysobjects oi INNER JOIN #tmp ti ON oi.id = ti.id
    WHERE oi.name <> ti.name /*COLLATE*/
      AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Changed (actually dropped and recreated [but not renamed])
SELECT 'changed' AS ChangeType, t.name, 'dropped' AS extra_info, 2 AS Priority
FROM #tmp t
WHERE t.name IN (
    SELECT ti.name /*COLLATE*/ FROM #tmp ti
    WHERE NOT EXISTS (SELECT * FROM sysobjects oi WHERE oi.id = ti.id))
  AND t.name IN (
    SELECT oi.name /*COLLATE*/ FROM sysobjects oi
    WHERE NOT EXISTS (SELECT * FROM #tmp ti WHERE oi.id = ti.id)
      AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Deleted
SELECT 'deleted' AS ChangeType, t.name, '' AS extra_info, 0 AS Priority
FROM #tmp t
WHERE NOT EXISTS (SELECT * FROM sysobjects o WHERE o.id = t.id)
  AND t.name NOT IN (
    SELECT oi.name /*COLLATE*/ FROM sysobjects oi
    WHERE NOT EXISTS (SELECT * FROM #tmp ti WHERE oi.id = ti.id)
      AND oi.type IN ('TR', 'P' ,'U' ,'V'))
UNION
--Added
SELECT 'added' AS ChangeType, o.name /*COLLATE*/, '' AS extra_info, 4 AS Priority
FROM sysobjects o
WHERE NOT EXISTS (SELECT * FROM #tmp t WHERE o.id = t.id)
  AND o.type IN ('TR', 'P' ,'U' ,'V')
  AND o.name NOT IN (
    SELECT ti.name /*COLLATE*/ FROM #tmp ti
    WHERE NOT EXISTS (SELECT * FROM sysobjects oi WHERE oi.id = ti.id))
ORDER BY Priority ASC

Note: If you use a non-standard collation in any of your databases, you will need to replace /* COLLATE */ with your database collation. i.e. COLLATE Latin1_General_CI_AI

A: I wrote this app a while ago, http://sqlschemasourcectrl.codeplex.com/ which will scan your MSFT SQL db's as often as you want and automatically dump your objects (tables, views, procs, functions, sql settings) into SVN. Works like a charm. I use it with Unfuddle (which allows me to get alerts on checkins)

A: The typical solution is to dump the database as necessary and backup those files. Depending on your development platform, there may be opensource plugins available. Rolling your own code to do it is usually fairly trivial. Note: You may want to backup the database dump instead of putting it into version control. The files can get huge fast in version control, and cause your entire source control system to become slow (I'm recalling a CVS horror story at the moment).

A: We needed to version our SQL database after we migrated to an x64 platform and our old version broke with the migration.
We wrote a C# application which used SQLDMO to map out all of the SQL objects to a folder:

Root
  ServerName
    DatabaseName
      Schema Objects
        Database Triggers*
          .ddltrigger.sql
        Functions
          ..function.sql
        Security
          Roles
            Application Roles
              .approle.sql
            Database Roles
              .role.sql
          Schemas*
            .schema.sql
          Users
            .user.sql
        Storage
          Full Text Catalogs*
            .fulltext.sql
        Stored Procedures
          ..proc.sql
        Synonyms*
          .synonym.sql
        Tables
          ..table.sql
          Constraints
            ...chkconst.sql
            ...defconst.sql
          Indexes
            ...index.sql
          Keys
            ...fkey.sql
            ...pkey.sql
            ...ukey.sql
          Triggers
            ...trigger.sql
        Types
          User-defined Data Types
            ..uddt.sql
          XML Schema Collections*
            ..xmlschema.sql
        Views
          ..view.sql
          Indexes
            ...index.sql
          Triggers
            ...trigger.sql

The application would then compare the newly written version with the version stored in SVN, and if there were differences it would update SVN. We determined that running the process once a night was sufficient, since we did not make that many changes to SQL. It allows us to track changes to all the objects we care about, plus it allows us to rebuild our full schema in the event of a serious problem.

A: I agree with ESV's answer, and for that exact reason I started a little project a while back to help maintain database updates in a very simple file which could then be maintained alongside our source code. It allows easy updates to developers as well as UAT and Production. The tool works on SQL Server and MySQL. Some project features:

*Allows schema changes
*Allows value tree population
*Allows separate test data inserts for e.g. UAT
*Allows option for rollback (not automated)
*Maintains support for SQL Server and MySQL
*Has the ability to import your existing database into version control with one simple command (SQL Server only ... still working on MySQL)

Please check out the code for some more information.

A: It's a very old question; however, many people are trying to solve this even now. All they have to do is to research Visual Studio Database Projects. Without this, any database development looks very feeble. From code organization to deployment to versioning, it simplifies everything.

A: We just started using Team Foundation Server. If your database is medium sized, then Visual Studio has some nice project integrations with built-in compare, data compare, database refactoring tools, database testing framework, and even data generation tools. But, that model doesn't fit very large or third party databases (that encrypt objects) very well. So, what we've done is to store only our customized objects. Visual Studio / Team Foundation Server works very well for that. TFS Database chief arch. blog MS TFS site

A: A while ago I found a VB bas module that used DMO and VSS objects to get an entire db scripted off and into VSS. I turned it into a VB Script and posted it here. You can easily take out the VSS calls and use the DMO stuff to generate all the scripts, and then call SVN from the same batch file that calls the VBScript to check them in.

A: I'm also using a version in the database stored via the database extended properties family of procedures. My application has scripts for each version step (i.e. move from 1.1 to 1.2). When deployed, it looks at the current version and then runs the scripts one by one until it reaches the last app version. There is no script that has the straight 'final' version; even a deploy on a clean DB does the deploy via a series of upgrade steps.

Now what I'd like to add is that two days ago I saw a presentation on the MS campus about the new and upcoming VS DB edition.
The presentation was focused specifically on this topic and I was blown out of the water. You should definitely check it out; the new facilities are focused on keeping schema definition in T-SQL scripts (CREATEs), a runtime delta engine to compare deployment schema with defined schema and do the delta ALTERs, and integration with source code control, up to and including MSBUILD continuous integration for automated build drops. The drop will contain a new file type, the .dbschema files, that can be taken to the deployment site, and a command line tool can do the actual 'deltas' and run the deployment. I have a blog entry on this topic with links to the VSDE downloads, you should check them out: http://rusanu.com/2009/05/15/version-control-and-your-database/

A: Red Gate's SQL Compare product not only allows you to do object-level comparisons, and generate change scripts from that, but it also allows you to export your database objects into a folder hierarchy organized by object type, with one [objectname].sql creation script per object in these directories. The object-type hierarchy is like this:

\Functions
\Security
\Security\Roles
\Security\Schemas
\Security\Users
\Stored Procedures
\Tables

If you dump your scripts to the same root directory after you make changes, you can use this to update your SVN repo, and keep a running history of each object individually.

A: This is one of the "hard problems" surrounding development. As far as I know there are no perfect solutions. If you only need to store the database structure and not the data, you can export the database as SQL queries. (In Enterprise Manager: Right click on database -> Generate SQL script. I recommend setting the "create one file per object" option on the options tab.) You can then commit these text files to svn and make use of svn's diff and logging functions.

I have this tied together with a batch script that takes a couple parameters and sets up the database. I also added some additional queries that enter default data like user types and the admin user. (If you want more info on this, post something and I can put the script somewhere accessible)

If you need to keep all of the data as well, I recommend keeping a backup of the database and using Redgate (http://www.red-gate.com/) products to do the comparisons. They don't come cheap, but they are worth every penny.

A: First, you must choose the version control system that is right for you:

*Centralized Version Control system - a standard system where users check out/check in before/after they work on files, and the files are kept on a single central server
*Distributed Version Control system - a system where the repository is cloned, and each clone is actually a full backup of the repository, so if any server crashes, then any cloned repository can be used to restore it

After choosing the right system for your needs, you'll need to set up the repository, which is the core of every version control system. All this is explained in the following article: http://solutioncenter.apexsql.com/sql-server-source-control-part-i-understanding-source-control-basics/

After setting up a repository, and, in the case of a central version control system, a working folder, you can read this article.
It shows how to set up source control in a development environment using:

*SQL Server Management Studio via the MSSCCI provider,
*Visual Studio and SQL Server Data Tools
*A 3rd party tool, ApexSQL Source Control

A: In my experience the solution is twofold:

*You need to handle changes to the development database that are made by multiple developers during development.
*You need to handle database upgrades at customers' sites.

In order to handle #1 you'll need a strong database diff/merge tool. The best tool should be able to perform automatic merge as much as possible while allowing you to resolve unhandled conflicts manually. The perfect tool should handle merge operations by using a 3-way merge algorithm that takes into account the changes that were made in the THEIRS database and the MINE database, relative to the BASE database.

I wrote a commercial tool that provides manual merge support for SQLite databases and I'm currently adding support for a 3-way merge algorithm for SQLite. Check it out at http://www.sqlitecompare.com

In order to handle #2 you will need an upgrade framework in place. The basic idea is to develop an automatic upgrade framework that knows how to upgrade from an existing SQL schema to the newer SQL schema and can build an upgrade path for every existing DB installation. Check out my article on the subject at http://www.codeproject.com/KB/database/sqlite_upgrade.aspx to get a general idea of what I'm talking about.

Good luck, Liron Levi

A: Check out DBGhost http://www.innovartis.co.uk/. I have used it in an automated fashion for 2 years now and it works great. It allows our DB builds to happen much like a Java or C build happens, except for the database. You know what I mean.

A: Here at Red Gate we offer a tool, SQL Source Control, which uses SQL Compare technology to link your database with a TFS or SVN repository. This tool integrates into SSMS and lets you work as you would normally, except it now lets you commit the objects.

For a migrations-based approach (more suited for automated deployments), we offer SQL Change Automation (formerly called ReadyRoll), which creates and manages a set of incremental scripts as a Visual Studio project.

In SQL Source Control it is possible to specify static data tables. These are stored in source control as INSERT statements. If you're talking about test data, we'd recommend that you either generate test data with a tool or via a post-deployment script you define, or you simply restore a production backup to the dev environment.

A: You might want to look at Liquibase (http://www.liquibase.org/). Even if you don't use the tool itself, it handles the concepts of database change management or refactoring pretty well.

A: +1 for everyone who's recommended the RedGate tools, with an additional recommendation and a caveat. SqlCompare also has a decently documented API: so you can, for instance, write a console app which syncs your source-controlled scripts folder with a CI integration testing database on checkin, so that when someone checks in a change to the schema from their scripts folder it's automatically deployed along with the matching application code change. This helps close the gap with developers who are forgetful about propagating changes in their local db up to a shared development DB (about half of us, I think :) ).

A caveat is that with a scripted solution or otherwise, the RedGate tools are sufficiently smooth that it's easy to forget about SQL realities underlying the abstraction.
If you rename all the columns in a table, SqlCompare has no way to map the old columns to the new columns and will drop all the data in the table. It will generate warnings, but I've seen people click past that. There's a general point here worth making, I think, that you can only automate DB versioning and upgrade so far - the abstractions are very leaky.

A: I would suggest using comparison tools to improvise a version control system for your database. Two good alternatives are xSQL Schema Compare and xSQL Data Compare.

Now, if your goal is to have only the database's schema under version control, you can simply use xSQL Schema Compare to generate xSQL Snapshots of the schema and add these files to your version control. Then, to revert or update to a specific version, just compare the current version of the database with the snapshot for the destination version.

Also, if you want to have the data under version control as well, you can use xSQL Data Compare to generate change scripts for your database and add the .sql files to your version control. You could then execute these scripts to revert / update to any version you want. Keep in mind that for the 'revert' functionality you need to generate change scripts that, when executed, will make Version 3 the same as Version 2, and for the 'update' functionality you need to generate change scripts that do the opposite.

Lastly, with some basic batch programming skills you can automate the whole process by using the command line versions of xSQL Schema Compare and xSQL Data Compare.

Disclaimer: I'm affiliated to xSQL.

A: Martin Fowler wrote my favorite article on the subject, http://martinfowler.com/articles/evodb.html. I chose not to put schema dumps under version control as alumb and others suggest because I want an easy way to upgrade my production database.

For a web application where I'll have a single production database instance, I use two techniques:

Database Upgrade Scripts

A sequence of database upgrade scripts that contain the DDL necessary to move the schema from version N to N+1. (These go in your version control system.) A _version_history_ table, something like

create table VersionHistory (
    Version int primary key,
    UpgradeStart datetime not null,
    UpgradeEnd datetime
);

gets a new entry every time an upgrade script runs which corresponds to the new version.

This ensures that it's easy to see what version of the database schema exists and that database upgrade scripts are run only once. Again, these are not database dumps. Rather, each script represents the changes necessary to move from one version to the next. They're the script that you apply to your production database to "upgrade" it.

Developer Sandbox Synchronization

*A script to backup, sanitize, and shrink a production database. Run this after each upgrade to the production DB.
*A script to restore (and tweak, if necessary) the backup on a developer's workstation. Each developer runs this script after each upgrade to the production DB.

A caveat: My automated tests run against a schema-correct but empty database, so this advice will not perfectly suit your needs.

A: With VS 2010, use the Database project.

*Script out your database
*Make changes to scripts or directly on your db server
*Sync up using Data > Schema Compare

Makes a perfect DB versioning solution, and makes syncing DBs a breeze.

A: We use DBGhost to manage our SQL database.
Then you put your scripts to build a new database in your version control, and it'll either build a new database or upgrade any existing database to the schema in version control. That way you don't have to worry about creating change scripts (although you can still do that, if for example you want to change the data type of a column and need to convert data).

A: It is a good approach to save database scripts into version control with change scripts so that you can upgrade any one database you have. Also you might want to save schemas for different versions so that you can create a full database without having to apply all the change scripts. Handling the scripts should be automated so that you don't have to do manual work.

I think it's important to have a separate database for every developer and not use a shared database. That way the developers can create test cases and development phases independently from other developers.

The automating tool should have means for handling database metadata, which tells which databases are in what state of development and which tables contain version-controllable data and so on.

A: You could also look at a migrations solution. These allow you to specify your database schema in C# code, and roll your database version up and down using MSBuild. I'm currently using DbUp, and it's been working well (a minimal sketch follows at the end of these answers).

A: You didn't mention any specifics about your target environment or constraints, so this may not be entirely applicable... but if you're looking for a way to effectively track an evolving DB schema and aren't averse to the idea of using Ruby, ActiveRecord's migrations are right up your alley.

Migrations programmatically define database transformations using a Ruby DSL; each transformation can be applied or (usually) rolled back, allowing you to jump to a different version of your DB schema at any given point in time. The file defining these transformations can be checked into version control like any other piece of source code.

Because migrations are a part of ActiveRecord, they typically find use in full-stack Rails apps; however, you can use ActiveRecord independent of Rails with minimal effort. See here for a more detailed treatment of using AR's migrations outside of Rails.

A: Every database should be under source-code control. What is lacking is a tool to automatically script all database objects - and "configuration data" - to file, which then can be added to any source control system. If you are using SQL Server, then my solution is here: http://dbsourcetools.codeplex.com/. Have fun. - Nathan.

A: An alternative to version controlling your database is to use a version-controlled database, of which there are now several. https://www.dolthub.com/blog/2021-09-17-database-version-control/

These products don't apply version control on top of another type of database -- they are their own database engines that support version control operations. So you need to migrate to them or start building on them in the first place. I write one of them, DoltDB, which combines the interfaces of MySQL and Git. Check it out here: https://github.com/dolthub/dolt
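To make the DbUp suggestion above concrete, here is a minimal console runner based on DbUp's documented entry points. The connection string and script names are assumptions; DbUp itself discovers the embedded .sql scripts, runs the ones it has not seen, and records them in a journal table so each script executes exactly once.

using System;
using System.Reflection;
using DbUp;

class Program
{
    static int Main()
    {
        // Scripts are embedded resources, e.g. 0001_create_tables.sql, 0002_add_index.sql
        var upgrader = DeployChanges.To
            .SqlDatabase("Server=.;Database=MyApp;Trusted_Connection=True;") // assumed connection string
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            Console.Error.WriteLine(result.Error);
            return 1; // non-zero exit fails the CI build
        }
        return 0;
    }
}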
{ "language": "en", "url": "https://stackoverflow.com/questions/173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "337" }
Q: How do I print an HTML document from a web service? I want to print HTML from a C# web service. The web browser control is overkill, and does not function well in a service environment, nor does it function well on a system with very tight security constraints. Is there any sort of free .NET library that will support the printing of a basic HTML page? Here is the code I have so far, which does not run properly.

public void PrintThing(string document)
{
    if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA)
    {
        Thread thread = new Thread((ThreadStart) delegate
        {
            PrintDocument(document);
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
    }
    else
    {
        PrintDocument(document);
    }
}

protected void PrintDocument(string document)
{
    WebBrowser browser = new WebBrowser();
    browser.DocumentText = document;
    while (browser.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }
    browser.Print();
}

This works fine when called from UI-type threads, but nothing happens when called from a service-type thread. Changing Print() to ShowPrintPreviewDialog() yields the following IE script error:

Error: dialogArguments.___IE_PrintType is null or not an object. URL: res://ieframe.dll/preview.dlg

And a small empty print preview dialog appears.

A: I know that Visual Studio itself (at least in the 2003 version) references the IE dll directly to render the "Design View". It may be worth looking into that. Otherwise, I can't think of anything beyond the Web Browser control.

A: Easy! Split your problem into two simpler parts:

*render the HTML to PDF
*print the PDF (SumatraPDF)
  *-print-to-default $file.pdf prints a PDF file on the default printer
  *-print-to $printer_name $file.pdf prints a PDF on a given printer

A: If you've got it in the budget (~$3000), check out PrinceXML. It will render HTML into a PDF, functions well in a service environment, and supports advanced features such as not breaking a page in the middle of a table cell (which a lot of browsers don't currently support).

A: You can print from the command line using the following:

rundll32.exe %WINDIR%\System32\mshtml.dll,PrintHTML "%1"

Where %1 is the file path of the HTML file to be printed. If you don't need to print from memory (or can afford to write to the disk in a temp file) you can use:

using (Process printProcess = new Process())
{
    string systemPath = Environment.GetFolderPath(Environment.SpecialFolder.System);
    printProcess.StartInfo.FileName = systemPath + @"\rundll32.exe";
    printProcess.StartInfo.Arguments = systemPath + @"\mshtml.dll,PrintHTML """ + fileToPrint + @"""";
    printProcess.Start();
}

N.B. This only works on Windows 2000 and above, I think.

A: A tool that works very well for me is HiQPdf. https://www.hiqpdf.com/ The price is reasonable (starts at $245) and it can render HTML to a PDF and also manage the printing of the PDF files directly.

A: Maybe this will help. http://www.codeproject.com/KB/printing/printhml.aspx Also not sure what thread you are trying to access the browser control from, but it needs to be STA. Note - The project referred to in the link does allow you to navigate to a page and perform a print without showing the print dialog.

A: I don't know the specific tools, but there are some utilities that record / replay clicks. In other words, you could automate the "click" on the print dialog. (I know this is a hack, but when all else fails...)
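Tying the "render to PDF, then print the PDF" answer together, here is a hedged C# sketch of the second half. It shells out to SumatraPDF with the -print-to-default switch quoted above; the executable path is an assumption, and the HTML-to-PDF step is left to whichever converter you settle on.

using System.Diagnostics;

public static class PdfPrinter
{
    // Prints an already-rendered PDF on the default printer via SumatraPDF.
    public static void PrintToDefault(string pdfPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Tools\SumatraPDF.exe", // assumed install location
            Arguments = "-print-to-default \"" + pdfPath + "\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit(); // block until spooling finishes
        }
    }
}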
{ "language": "en", "url": "https://stackoverflow.com/questions/174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Annotating YouTube videos programmatically I want to be able to display a normal YouTube video with overlaid annotations, consisting of coloured rectangles for each frame. The only requirement is that this should be done programmatically. YouTube has annotations now, but requires you to use their front end to create them by hand. I want to be able to generate them. What's the best way of doing this?

Some ideas:

*Build your own Flash player (ew?)
*Somehow draw over the YouTube Flash player. Will this work?
*Reverse engineer & hijack YouTube's annotation system. Either messing with the local files or redirecting its attempt to download the annotations. (using Greasemonkey? Firefox plugin?)

Idea that doesn't count: download the video

A: Joe Berkovitz has written a sample application called ReviewTube which "Allows users to create time-based subtitles for any YouTube video, a la closed captioning. These captions become publicly accessible, and visitors to the site can browse the set of videos with captions. Think of it as a “subtitle graffiti wall” for YouTube!" The app is the example used to demonstrate the MVCS framework/approach for building Flex applications. http://www.joeberkovitz.com/blog/reviewtube/ Not sure if this will help with the colored rectangles and whatnot, but it's a decent place to start.

A: The player itself has a JavaScript API that might be useful for syncing the video if you choose to make your own annotation-thingamajig.

A: YouTube provides an ActionScript API. Using this, you could load the videos into Flash using their API and then have your Flash app create the annotations on a layer above the video.

Or, alternatively, if you want to stay away from creating something in Flash, using YouTube's JavaScript API you could draw HTML DIVs over the YouTube player on your web page. Just remember when you embed the player to have WMODE="transparent" in the params list. So using the example from YouTube:

<script type="text/javascript">
  var params = { allowScriptAccess: "always" };
  var atts = { id: "myytplayer", wmode: "transparent" };
  swfobject.embedSWF("http://www.youtube.com/v/VIDEO_ID&enablejsapi=1&playerapiid=ytplayer",
                     "ytapiplayer", "425", "356", "8", null, null, params, atts);
</script>

And then you should be able to draw your annotations over the YouTube movie using CSS/DHTML.
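A hedged sketch of that last idea: absolutely position a DIV over the transparent-wmode player and toggle rectangles from the player's current time. getCurrentTime() and the onYouTubePlayerReady callback are part of the YouTube JavaScript API referenced above; the annotation data format is invented for illustration.

<div style="position: relative;">
  <div id="ytapiplayer">You need Flash player 8+ and JavaScript enabled.</div>
  <div id="overlay" style="position: absolute; display: none; border: 2px solid red;"></div>
</div>

<script type="text/javascript">
var ytplayer;
function onYouTubePlayerReady(playerId) {
  ytplayer = document.getElementById("myytplayer"); // id set in the atts above
}

// invented format: start/end seconds plus a rectangle in player pixels
var annotations = [ { start: 5, end: 9, x: 40, y: 30, w: 120, h: 60 } ];

function syncOverlay() {
  if (!ytplayer) return;
  var t = ytplayer.getCurrentTime();
  var box = document.getElementById("overlay");
  var shown = false;
  for (var i = 0; i < annotations.length; i++) {
    var a = annotations[i];
    if (t >= a.start && t <= a.end) {
      box.style.left = a.x + "px";
      box.style.top = a.y + "px";
      box.style.width = a.w + "px";
      box.style.height = a.h + "px";
      shown = true;
      break;
    }
  }
  box.style.display = shown ? "block" : "none";
}
setInterval(syncOverlay, 250); // poll a few times a second
</script>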
{ "language": "en", "url": "https://stackoverflow.com/questions/175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: error_log per Virtual Host? On one Linux server running Apache and PHP 5, we have multiple Virtual Hosts with separate log files. We cannot seem to separate the php error_log between virtual hosts. Overriding this setting in the <Location> of the httpd.conf does not seem to do anything. Is there a way to have separate php error_logs for each Virtual Host?

A: The default behaviour for error_log() is to output to the Apache error log. If this isn't happening, check your php.ini settings for the error_log directive. Leave it unset to use the Apache log file for the current vhost.

A: To set the Apache (not the PHP) log, the easiest way to do this would be to do:

<VirtualHost IP:Port>
    # Stuff,
    # More Stuff,
    ErrorLog /path/where/you/want/the/error.log
</VirtualHost>

If there is no leading "/" it is assumed to be relative. Apache Error Log Page

A: Try adding php_value error_log '/path/to/php_error_log' to your VirtualHost configuration.

A: Don't set error_log to where your syslog stuff goes, e.g. /var/log/apache2, because the errors will get intercepted by ErrorLog. Instead, create a subdir in your project folder for logs and do php_value error_log "/path/to/project/logs". This goes for both .htaccess files and vhosts. Also make sure you put php_flag log_errors on

A: php_value error_log "/var/log/httpd/vhost_php_error_log"

It works for me, but I had to change the permissions of the log file, or Apache will write the log to its own error_log.

A: Yes, you can try

php_value error_log "/var/log/php_log"

in .htaccess, or you can have users use ini_set() at the beginning of their scripts if they want to have logging. Another option would be to enable scripts to default to the php.ini in the folder with the script, then go to the user/host's root folder, then to the server's root, or something similar. This would allow hosts to add their own php.ini values and their own error_log locations.

A: My Apache had something like this in httpd.conf. Just change the ErrorLog and CustomLog settings

<VirtualHost myvhost:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /opt/web
    ServerName myvhost
    ErrorLog logs/myvhost-error_log
    CustomLog logs/myvhost-access_log common
</VirtualHost>

A: You can try:

<VirtualHost myvhost:80>
    php_value error_log "/var/log/httpd/vhost_php_error_log"
</VirtualHost>

But I'm not sure if it is going to work. I tried on my sites with no success.

A: Create a simple VirtualHost (example hostname: thecontrolist.localhost):

*Add 127.0.0.1 thecontrolist.localhost to the hosts file in C:\Windows\System32\drivers\etc
*In C:\xampp\apache\conf\extra\httpd-vhosts.conf:

<VirtualHost *>
    ServerName thecontrolist.localhost
    ServerAlias thecontrolist.localhost
    DocumentRoot "/xampp/htdocs/thecontrolist"
    <Directory "/xampp/htdocs/thecontrolist">
        Options +Indexes +Includes +FollowSymLinks +MultiViews
        AllowOverride All
        Require local
    </Directory>
</VirtualHost>

*Don't forget to restart your Apache. For more, check this link.

A: I usually just specify this in an .htaccess file or the vhost.conf on the domain I'm working on.
Add this to one of these files:

php_admin_value error_log "/var/www/vhosts/example.com/error_log"

A: If somebody comes looking, it should look like this:

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/domains/example.com/html
    ErrorLog /var/www/domains/example.com/apache.error.log
    CustomLog /var/www/domains/example.com/apache.access.log common
    php_flag log_errors on
    php_flag display_errors on
    php_value error_reporting 2147483647
    php_value error_log /var/www/domains/example.com/php.error.log
</VirtualHost>

This is for development only, since display_errors is turned on. You will notice that the Apache error log is separate from the PHP error log. The good stuff is in php.error.log.

Take a look here for the error_reporting key http://www.php.net/manual/en/errorfunc.configuration.php#ini.error-reporting
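For completeness, the ini_set() route mentioned in one of the answers above looks like this. Both directives are standard PHP; only the log path is an assumption. It is handy when you cannot touch the vhost config or .htaccess at all:

<?php
// per-script override, first thing in the script
ini_set('log_errors', '1');
ini_set('error_log', '/var/www/vhosts/example.com/php.error.log');

error_log('This line goes to the per-vhost log.');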
{ "language": "en", "url": "https://stackoverflow.com/questions/176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "124" }
Q: Function for creating color wheels This is something I've pseudo-solved many times and have never quite found a solution for. The problem is to come up with a way to generate N colors that are as distinguishable as possible, where N is a parameter.

A: Isn't it also a factor which order you set up the colors in? Like if you use Dillie-O's idea you need to mix the colors as much as possible. 0 64 128 256 is from one to the next. But 0 256 64 128 in a wheel would be more "apart". Does this make sense?

A: I've read somewhere that the human eye can't distinguish between values less than 4 apart. So this is something to keep in mind. The following algorithm does not compensate for this.

I'm not sure this is exactly what you want, but this is one way to randomly generate non-repeating color values:

(beware, inconsistent pseudo-code ahead)

//colors entered as 0-255 [R, G, B]
colors = []; //holds final colors to be used
rand = new Random();

//assumes n is less than 16,777,216
randomGen(int n){
    while (len(colors) < n){
        //generate a random number between 0,255 for each color
        newRed = rand.next(256);
        newGreen = rand.next(256);
        newBlue = rand.next(256);
        temp = [newRed, newGreen, newBlue];
        //only adds new colors to the array
        if temp not in colors {
            colors.append(temp);
        }
    }
}

One way you could optimize this for better visibility would be to compare the distance between each new color and all the colors in the array:

for item in colors{
    itemSq = (item[0]^2 + item[1]^2 + item[2]^2)^(.5);
    tempSq = (temp[0]^2 + temp[1]^2 + temp[2]^2)^(.5);
    dist = itemSq - tempSq;
    dist = abs(dist);
}
//NUMBER can be your chosen distance apart.
if dist < NUMBER and temp not in colors {
    colors.append(temp);
}

But this approach would significantly slow down your algorithm. Another way would be to scrap the randomness and systematically go through every 4 values and add a color to an array in the above example.

A:

function random_color($i = null, $n = 10, $sat = .5, $br = .7) {
    $i = is_null($i) ? mt_rand(0,$n) : $i;
    $rgb = hsv2rgb(array($i*(360/$n), $sat, $br));
    for ($i=0 ; $i<=2 ; $i++)
        $rgb[$i] = sprintf('%02x', ceil($rgb[$i])); // zero-pad so each channel is two hex digits
    return implode('', $rgb);
}

function hsv2rgb($c) {
    list($h,$s,$v)=$c;
    if ($s==0)
        return array($v*255,$v*255,$v*255);
    else {
        $h=($h%=360)/60;
        $i=floor($h);
        $f=$h-$i;
        $q[0]=$q[1]=$v*(1-$s);
        $q[2]=$v*(1-$s*(1-$f));
        $q[3]=$q[4]=$v;
        $q[5]=$v*(1-$s*$f);
        return(array($q[($i+4)%6]*255,$q[($i+2)%6]*255,$q[$i%6]*255)); //[1]
    }
}

So just call the random_color() function, where $i identifies the color, $n the number of possible colors, $sat the saturation and $br the brightness.

A: To achieve "most distinguishable" we need to use a perceptual color space like Lab (or any other perceptually linear color space) rather than RGB. Also, we can quantize this space to reduce its size. Generate the full 3D space with all possible quantized entries and run the K-means algorithm with K=N. The resulting centers/"means" should be approximately most distinguishable from each other.

A: My first thought on this is "how to generate N vectors in a space that maximize distance from each other." You can see that the RGB (or any other scale you use that forms a basis in color space) are just vectors. Take a look at Random Point Picking. Once you have a set of vectors that are maximized apart, you can save them in a hash table or something for later, and just perform random rotations on them to get all the colors you desire that are maximally apart from each other!
Thinking about this problem more, it would be better to map the colors in a linear manner, possibly (0,0,0) → (255,255,255) lexicographically, and then distribute them evenly. I really don't know how well this will work, but it should, since, let us say:

n = 10

we know we have 16777216 colors (256^3). We can use Buckles Algorithm 515 to find the lexicographically indexed color. You'll probably have to edit the algorithm to avoid overflow and probably add some minor speed improvements.

A: It would be best to find colors maximally distant in a "perceptually uniform" colorspace, e.g. CIELAB (using Euclidean distance between L*, a*, b* coordinates as your distance metric) and then converting to the colorspace of your choice. Perceptual uniformity is achieved by tweaking the colorspace to approximate the non-linearities in the human visual system.

A: Some related resources:

ColorBrewer - Sets of colours designed to be maximally distinguishable for use on maps.

Escaping RGBland: Selecting Colors for Statistical Graphics - A technical report describing a set of algorithms for generating good (i.e. maximally distinguishable) colour sets in the hcl colour space.

A: Here is some code to allocate RGB colors evenly around an HSL color wheel of specified luminosity.

class cColorPicker
{
public:
    void Pick( vector<DWORD>&v_picked_cols, int count, int bright = 50 );
private:
    DWORD HSL2RGB( int h, int s, int v );
    unsigned char ToRGB1(float rm1, float rm2, float rh);
};

/**
  Evenly allocate RGB colors around HSL color wheel

  @param[out] v_picked_cols  a vector of colors in RGB format
  @param[in]  count   number of colors required
  @param[in]  bright  0 is all black, 100 is all white, defaults to 50

  based on Fig 3 of http://epub.wu-wien.ac.at/dyn/virlib/wp/eng/mediate/epub-wu-01_c87.pdf?ID=epub-wu-01_c87
*/
void cColorPicker::Pick( vector<DWORD>&v_picked_cols, int count, int bright )
{
    v_picked_cols.clear();
    for( int k_hue = 0; k_hue < 360; k_hue += 360/count )
        v_picked_cols.push_back( HSL2RGB( k_hue, 100, bright ) );
}

/**
  Convert HSL to RGB

  based on http://www.codeguru.com/code/legacy/gdi/colorapp_src.zip
*/
DWORD cColorPicker::HSL2RGB( int h, int s, int l )
{
    DWORD ret = 0;
    unsigned char r,g,b;
    float saturation = s / 100.0f;
    float luminance = l / 100.f;
    float hue = (float)h;

    if (saturation == 0.0)
    {
        r = g = b = (unsigned char)(luminance * 255.0);
    }
    else
    {
        float rm1, rm2;
        if (luminance <= 0.5f)
            rm2 = luminance + luminance * saturation;
        else
            rm2 = luminance + saturation - luminance * saturation;
        rm1 = 2.0f * luminance - rm2;
        r = ToRGB1(rm1, rm2, hue + 120.0f);
        g = ToRGB1(rm1, rm2, hue);
        b = ToRGB1(rm1, rm2, hue - 120.0f);
    }
    ret = ((DWORD)(((BYTE)(r)|((WORD)((BYTE)(g))<<8))|(((DWORD)(BYTE)(b))<<16)));
    return ret;
}

unsigned char cColorPicker::ToRGB1(float rm1, float rm2, float rh)
{
    if (rh > 360.0f)
        rh -= 360.0f;
    else if (rh < 0.0f)
        rh += 360.0f;

    if (rh < 60.0f)
        rm1 = rm1 + (rm2 - rm1) * rh / 60.0f;
    else if (rh < 180.0f)
        rm1 = rm2;
    else if (rh < 240.0f)
        rm1 = rm1 + (rm2 - rm1) * (240.0f - rh) / 60.0f;

    return static_cast<unsigned char>(rm1 * 255);
}

int _tmain(int argc, _TCHAR* argv[])
{
    vector<DWORD> myCols;
    cColorPicker colpick;
    colpick.Pick( myCols, 20 );
    for( int k = 0; k < (int)myCols.size(); k++ )
        printf("%d: %d %d %d\n", k+1,
               ( myCols[k] & 0xFF0000 ) >> 16,
               ( myCols[k] & 0xFF00 ) >> 8,
               ( myCols[k] & 0xFF ) );
    return 0;
}
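To make the "maximally distant" advice actionable, here is a hedged PHP sketch of a greedy farthest-point pass over a quantized candidate grid. For brevity it measures plain RGB distance; as the CIELAB answer above notes, converting candidates to a perceptually uniform space before measuring gives noticeably better results.

// Greedy farthest-point selection: start from one color, then repeatedly add
// the candidate whose nearest already-picked color is farthest away.
function pick_distinct_colors($n, $step = 51) {
    $candidates = array();
    for ($r = 0; $r <= 255; $r += $step)
        for ($g = 0; $g <= 255; $g += $step)
            for ($b = 0; $b <= 255; $b += $step)
                $candidates[] = array($r, $g, $b);

    $picked = array(array_shift($candidates)); // seed with the first candidate
    while (count($picked) < $n) {
        $bestIdx = null;
        $bestDist = -1;
        foreach ($candidates as $i => $c) {
            $nearest = PHP_INT_MAX; // squared distance to the closest picked color
            foreach ($picked as $p) {
                $d = pow($c[0]-$p[0], 2) + pow($c[1]-$p[1], 2) + pow($c[2]-$p[2], 2);
                if ($d < $nearest) $nearest = $d;
            }
            if ($nearest > $bestDist) { $bestDist = $nearest; $bestIdx = $i; }
        }
        $picked[] = $candidates[$bestIdx];
        unset($candidates[$bestIdx]);
    }
    return $picked; // array of [r, g, b] triples
}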
{ "language": "en", "url": "https://stackoverflow.com/questions/180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Floating Point Number parsing: Is there a Catch All algorithm? One of the fun parts of multi-cultural programming is number formats.

*Americans use 10,000.50
*Germans use 10.000,50
*French use 10 000,50

My first approach would be to take the string, parse it backwards until I encounter a separator and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10.

Another approach: if the string contains 2 different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check if it occurs more than once and discard it if it does. If it only appears once, check if it has 3 digits after it. If yes, discard it, otherwise, use it as the decimal separator.

The obvious "best solution" would be to detect the user's culture or browser, but that does not work if you have a Frenchman using an en-US Windows/browser.

Does the .NET Framework contain some mythical black magic floating point parser that is better than Double.(Try)Parse() in trying to auto-detect the number format?

A: I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again.

A: I don't know the ASP.NET side of the problem, but .NET has a pretty powerful class: System.Globalization.CultureInfo. You can use the following code to parse a string containing a double value:

double d = double.Parse("100.20", CultureInfo.CurrentCulture);
// -- OR --
double d = double.Parse("100.20", CultureInfo.CurrentUICulture);

If ASP.NET somehow (i.e. using HTTP Request headers) passes the current user's CultureInfo to either CultureInfo.CurrentCulture or CultureInfo.CurrentUICulture, these will work fine.

A: You can't please everyone. If I enter ten as 10.000, and someone enters ten thousand as 10.000, you cannot handle that without some knowledge of the culture of the input. Detect the culture somehow (browser, system setting - what is the use case? ASP? Internal app, or open to the world?), or provide an example of the expected formatting, and use the most lenient parser you can. Probably something like:

double d = Double.Parse("5,000.00", NumberStyles.Any, CultureInfo.InvariantCulture);

A: The difference between 12.345 in French and English is a factor of 1000. If you supply an expected range where max < 1000*min, you can easily guess. Take for example the height of a person (including babies and children) in mm. By using a range of 200-3000, an input of 1.800 or 1,800 can unambiguously be interpreted as 1 meter and 80 centimeters, whereas an input of 912.300 or 912,300 can unambiguously be interpreted as 91 centimeters and 2.3 millimeters.
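Combining the culture-aware parsing and the expected-range trick above into one hedged C# sketch: try a handful of candidate cultures and accept the input only when every plausible interpretation agrees. The candidate culture list is an assumption; double.TryParse with an explicit CultureInfo is standard .NET.

using System.Collections.Generic;
using System.Globalization;

public static class LenientDoubleParser
{
    // Returns true when exactly one distinct in-range interpretation exists.
    public static bool TryParseInRange(string text, double min, double max, out double value)
    {
        var cultures = new[] { "en-US", "de-DE", "fr-FR" }; // assumed candidate set
        var hits = new HashSet<double>();
        foreach (var name in cultures)
        {
            double d;
            if (double.TryParse(text, NumberStyles.Number,
                                CultureInfo.GetCultureInfo(name), out d)
                && d >= min && d <= max)
            {
                hits.Add(d);
            }
        }
        value = 0;
        if (hits.Count != 1)
            return false; // ambiguous, or no reading fell inside the range
        foreach (var h in hits) value = h;
        return true;
    }
}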
{ "language": "en", "url": "https://stackoverflow.com/questions/192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: Upgrading SQL Server 6.5 Yes, I know. The existence of a running copy of SQL Server 6.5 in 2008 is absurd. That stipulated, what is the best way to migrate from 6.5 to 2005? Is there any direct path? Most of the documentation I've found deals with upgrading 6.5 to 7. Should I forget about the native SQL Server upgrade utilities, script out all of the objects and data, and try to recreate from scratch?

I was going to attempt the upgrade this weekend, but server issues pushed it back till next. So, any ideas would be welcomed during the course of the week.

Update. This is how I ended up doing it:

*Back up the database in question and Master on 6.5.
*Execute SQL Server 2000's instcat.sql against 6.5's Master. This allows SQL Server 2000's OLEDB provider to connect to 6.5.
*Use SQL Server 2000's standalone "Import and Export Data" to create a DTS package, using OLEDB to connect to 6.5. This successfully copied all 6.5's tables to a new 2005 database (also using OLEDB).
*Use 6.5's Enterprise Manager to script out all of the database's indexes and triggers to a .sql file.
*Execute that .sql file against the new copy of the database, in 2005's Management Studio.
*Use 6.5's Enterprise Manager to script out all of the stored procedures.
*Execute that .sql file against the 2005 database. Several dozen sprocs had issues making them incompatible with 2005. Mainly non-ANSI joins and quoted identifier issues.
*Corrected all of those issues and re-executed the .sql file.
*Recreated the 6.5's logins in 2005 and gave them appropriate permissions.

There was a bit of rinse/repeat when correcting the stored procedures (there were hundreds of them to correct), but the upgrade went great otherwise. Being able to use Management Studio instead of Query Analyzer and Enterprise Manager 6.5 is such an amazing difference. A few report queries that took 20-30 seconds on the 6.5 database are now running in 1-2 seconds, without any modification, new indexes, or anything. I didn't expect that kind of immediate improvement.

A: You can upgrade 6.5 to SQL Server 2000. You may have an easier time getting hold of SQL Server 2000 or the 2000 version of the MSDE. Microsoft has a page on going from 6.5 to 2000. Once you have the database in 2000 format, SQL Server 2005 will have no trouble upgrading it to the 2005 format. If you don't have SQL Server 2000, you can download the MSDE 2000 version directly from Microsoft.

A: I am by no means authoritative, but I believe the only supported path is from 6.5 to 7. Certainly that would be the most sane route; then I believe you can migrate from 7 directly to 2005 pretty painlessly. As for scripting out all the objects - I would advise against it, as you will inevitably miss something (unless your database is truly trivial).

A: If you can find a professional or some other super-enterprise version of Visual Studio 6.0 - it came with a copy of MSDE (basically the predecessor to SQL Express). I believe MSDE 2000 is still available as a free download from Microsoft, but I don't know if you can migrate directly from 6.5 to 2000.

I think in concept, you won't likely face any danger. Years of practice however tell me that you will always miss some object, permission, or other database item that won't manifest itself immediately. If you can script out the entire dump, all the better - you will be less likely to miss something, and if you do miss something, it can be easily added to the script and fixed. I would avoid any manual steps (other than hitting the enter key once) like the plague.
A: Hey, I'm still stuck in that camp too. The third party application we have to support is FINALLY going to 2K5, so we're almost out of the woods. But I feel your pain 8^D That said, from everything I heard from our DBA, the key is to convert the database to 8.0 format first, and then go to 2005. I believe they used the built-in migration/upgrade tools for this. There are some big steps between 6.5 and 8.0 that are better solved there than going from 6.5 to 2005 directly. Your BIGGEST pain, if you didn't know already, is that DTS is gone in favor of SSIS. There is a shell-type module that will run your existing DTS packages, but you're going to want to manually recreate them all in SSIS. Ease of this will depend on the complexity of the packages themselves, but I've done a few at work so far and they've been pretty smooth.
{ "language": "en", "url": "https://stackoverflow.com/questions/194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: What's the best way to generate a tag cloud from an array using h1 through h6 for sizing? I have the following arrays: $artist = array("the roots", "michael jackson", "billy idol", "more", "and more", "and_YET_MORE"); $count = array(5, 3, 9, 1, 1, 3); I want to generate a tag cloud that will have artists with a higher number in $count enclosed in h6 tags and the lowest enclosed in h1 tags. A: @Ryan That's correct, but it actually makes the tags with the least number larger. This code has been tested: $artist = array("the roots","michael jackson","billy idol","more","and more","and_YET_MORE"); $count = array(5,3,9,1,1,3); $highest = max($count); for ($x = 0; $x < count($artist); $x++) { $normalized = ($highest - $count[$x]+1) / $highest; $heading = ceil($normalized * 6); // 6 heading types echo "<h$heading>{$artist[$x]}</h$heading>"; } A: You will want to add a logarithmic function to it too. (taken from tagadelic, my Drupal module to create tag clouds http://drupal.org/project/tagadelic): $result = db_query('SELECT COUNT(*) AS count, id, name FROM ... ORDER BY count DESC'); $steps = 6; $tags = array(); $min = 1e9; $max = -1e9; while ($tag = db_fetch_object($result)) { $tag->number_of_posts = $tag->count; # sets the amount of items a certain tag has attached to it $tag->count = log($tag->count); $min = min($min, $tag->count); $max = max($max, $tag->count); $tags[$tag->tid] = $tag; } // Note: we need to ensure the range is slightly too large to make sure even // the largest element is rounded down. $range = max(.01, $max - $min) * 1.0001; foreach ($tags as $key => $value) { $tags[$key]->weight = 1 + floor($steps * ($value->count - $min) / $range); } Then in your view or template: foreach ($tags as $tag) { $output .= "<h$tag->weight>$tag->name</h$tag->weight>"; } A: Off the top of my head... $artist = array("the roots","michael jackson","billy idol","more","and more","and_YET_MORE"); $count = array(5,3,9,1,1,3); $highest = max($count); for ($x = 0; $x < count($artist); $x++) { $normalized = $count[$x] / $highest; $heading = ceil($normalized * 6); // 6 heading types echo "<h".$heading.">".$artist[$x]."</h".$heading.">"; } A: This method is for SQL/PostgreSQL fanatics. It does the entire job in the database, and it prints text with a "slugified" link. It uses Doctrine ORM just for the SQL call; I'm not using objects. Suppose we have 10 sizes: public function getAllForTagCloud($fontSizes = 10) { $sql = sprintf("SELECT count(tag) as tagcount,tag,slug, floor((count(*) * %d )/(select max(t) from (select count(tag) as t from magazine_tag group by tag) t)::numeric(6,2)) as ranking from magazine_tag mt group by tag,slug", $fontSizes); $q = Doctrine_Manager::getInstance()->getCurrentConnection(); return $q->execute($sql); } then you print them with some CSS class, from .tagrank10 (the best) to .tagrank1 (the worst): <?php foreach ($allTags as $tag): ?> <span class="<?php echo 'tagrank'.$tag['ranking'] ?>"> <?php echo sprintf('<a rel="tag" href="/search/by/tag/%s">%s</a>', $tag['slug'], $tag['tag'] ); ?> </span> <?php endforeach; ?> and this is the CSS: /* put your size of choice */ .tagrank1{font-size: 0.3em;} .tagrank2{font-size: 0.4em;} .tagrank3{font-size: 0.5em;} /* go on till tagrank10 */ This method displays all tags. If you have a lot of them, you probably don't want your tag cloud to become a tag storm.
In that case you would append a HAVING clause to your SQL query: -- minimum tag count is 8 -- HAVING count(tag) > 7 That's all. A: I know it's a very old post, but I'm still posting my view as it may help someone in the future. Here is the tagcloud I used in my website: http://www.vbausefulcodes.in/ <?php $input= array("vba","macros","excel","outlook","powerpoint","access","database","interview questions","sendkeys","word","excel projects","visual basic projects","excel vba","macro","excel visual basic","tutorial","programming","learn macros","vba examples"); $rand_tags = array_rand($input, 5); for ($x = 0; $x <= 4; $x++) { $size = rand(1, 4); echo "<font size='$size'>" . $input[$rand_tags[$x]] . " " . "</font>"; } echo "<br>"; $rand_tags = array_rand($input, 7); for ($x = 0; $x <= 6; $x++) { $size = rand(1, 4); echo "<font size='$size'>" . $input[$rand_tags[$x]] . " " . "</font>"; } echo "<br>"; $rand_tags = array_rand($input, 5); for ($x = 0; $x <= 4; $x++) { $size = rand(1, 4); echo "<font size='$size'>" . $input[$rand_tags[$x]] . " " . "</font>"; } ?> A: Perhaps this is a little academic and off topic, but hX tags are probably not the best choice for a tag cloud for reasons of document structure and all that sort of thing. Maybe spans or an ol with appropriate class attributes (plus some CSS)? A: As a helper in Rails: def tag_cloud(strings, counts) max = counts.max strings.map { |a| "<span style='font-size:#{((counts[strings.index(a)] * 4.0)/max).ceil}em'>#{a}</span> " } end Call this from the view: <%= tag_cloud($artists, $counts) %> This outputs <span style='font-size:_em'> elements in an array that will be converted to a string in the view to ultimately render like so: <span style='font-size:3em'>the roots</span> <span style='font-size:2em'>michael jackson</span> <span style='font-size:4em'>billy idol</span> <span style='font-size:1em'>more</span> <span style='font-size:1em'>and more</span> <span style='font-size:2em'>and_YET_MORE</span> It would be better to have a class attribute and reference the classes in a style sheet as mentioned by Brendan above. Much better than using h1-h6 semantically, and there's less style baggage with a <span>. A: Have used this snippet for a while; credit to prism-perfect.net.
Doesn't use H tags though <div id="tags"> <div class="title">Popular Searches</div> <?php // Snippet taken from [prism-perfect.net] include "/path/to/public_html/search/settings/database.php"; include "/path/to/public_html/search/settings/conf.php"; $query = "SELECT query AS tag, COUNT(*) AS quantity FROM sphider_query_log WHERE results > 0 GROUP BY query ORDER BY query ASC LIMIT 10"; $result = mysql_query($query) or die(mysql_error()); $tags = array(); while ($row = mysql_fetch_array($result)) { $tags[$row['tag']] = $row['quantity']; } // change these font sizes if you will $max_size = 30; // max font size in px $min_size = 11; // min font size in px // get the largest and smallest array values $max_qty = max(array_values($tags)); $min_qty = min(array_values($tags)); // find the range of values $spread = $max_qty - $min_qty; if (0 == $spread) { // we don't want to divide by zero $spread = 1; } // determine the font-size increment // this is the increase per tag quantity (times used) $step = ($max_size - $min_size)/($spread); // loop through our tag array foreach ($tags as $key => $value) { // calculate CSS font-size // find the $value in excess of $min_qty // multiply by the font-size increment ($step) // and add the $min_size set above $size = $min_size + (($value - $min_qty) * $step); // uncomment if you want sizes in whole pixels: // $size = ceil($size); // you'll need to put the link destination in place of the /search/search.php... // (assuming your tag links to some sort of details page) echo '<a href="/search/search.php?query='.$key.'&search=1" style="font-size: '.$size.'px"'; // perhaps adjust this title attribute for the things that are tagged echo ' title="'.$value.' things tagged with '.$key.'"'; echo '>'.$key.'</a> '; // notice the space at the end of the link } ?> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: Register Windows program with the mailto protocol programmatically How do I make it so mailto: links will be registered with my program? How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to Delphi, because the answer is independent of your language. A: @Dillie-O: Your answer put me in the right direction (I should have expected it to just be a registry change) and I got this working. But I'm going to mark this as the answer because I'm going to put some additional information that I found while working on this. The solution to this question really doesn't depend on what programming language you're using, as long as there's some way to modify Windows registry settings. Finally, here's the answer: * *To associate a program with the mailto protocol for all users on a computer, change the HKEY_CLASSES_ROOT\mailto\shell\open\command Default value to: "Your program's executable" "%1" *To associate a program with the mailto protocol for the current user, change the HKEY_CURRENT_USER\Software\Classes\mailto\shell\open\command Default value to: "Your program's executable" "%1" The %1 will be replaced with the entire mailto URL. For example, given the link: <a href="mailto:user@example.com">Email me</a> The following will be executed: "Your program's executable" "mailto:user@example.com" Update (via comment by shellscape): As of Windows 8, this method no longer works as expected. Win8 enforces the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\URLAssociations\MAILTO\UserChoice for which the ProgID of the selected app is hashed and can't be forged. It's a royal PITA. A: From what I've seen, there are a few registry keys that set the default mail client. One of them is: System Key: [HKEY_CLASSES_ROOT\mailto\shell\open\command] Value Name: (Default) Data Type: REG_SZ (String Value) Value Data: Mail program command-line. I'm not familiar with Delphi 7, but I'm sure there are some registry editing libraries there that you could use to modify this value. Some places list more than this key, others just this key, so you may need to test a little bit to find the proper one(s).
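For the pre-Windows-8 registry approach above, a minimal C# sketch (the executable path is a made-up placeholder, and error handling is omitted):

using Microsoft.Win32;

class MailtoRegistration
{
    static void Main()
    {
        // Hypothetical path to your mail client's executable.
        const string exe = @"C:\Program Files\MyMailClient\MyMailClient.exe";

        // Per-user registration; no administrator rights required.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Classes\mailto\shell\open\command"))
        {
            // An empty value name writes the (Default) value.
            // %1 receives the full mailto: URL when a link is clicked.
            key.SetValue("", "\"" + exe + "\" \"%1\"");
        }
    }
}

Your program would then inspect its command line on startup: if the first argument starts with mailto:, parse out the address and open a compose window.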
{ "language": "en", "url": "https://stackoverflow.com/questions/231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: SQL Server 2005 implementation of MySQL REPLACE INTO? MySQL has this incredibly useful yet proprietary REPLACE INTO SQL Command. Can this easily be emulated in SQL Server 2005? Starting a new Transaction, doing a Select() and then either UPDATE or INSERT and COMMIT is always a little bit of a pain, especially when doing it in the application and therefore always keeping 2 versions of the statement. I wonder if there is an easy and universal way to implement such a function into SQL Server 2005? A: This is something that annoys me about MSSQL (rant on my blog). I wish MSSQL supported upsert. @Dillie-O's code is a good way in older SQL versions (+1 vote), but it still is basically two IO operations (the exists check and then the update or insert). There's a slightly better way on this post, basically: --try an update update tablename set field1 = 'new value', field2 = 'different value', ... where idfield = 7 --insert if failed if @@rowcount = 0 and @@error = 0 insert into tablename ( idfield, field1, field2, ... ) values ( 7, 'value one', 'another value', ... ) This reduces it to one IO operation if it's an update, or two if it's an insert. MS SQL 2008 introduces merge from the SQL:2003 standard: merge tablename as target using (values ('new value', 'different value')) as source (field1, field2) on target.idfield = 7 when matched then update set field1 = source.field1, field2 = source.field2, ... when not matched then insert ( idfield, field1, field2, ... ) values ( 7, source.field1, source.field2, ... ) Now it's really just one IO operation, but awful code :-( A: The functionality you're looking for is traditionally called an UPSERT. At least knowing what it's called might help you find what you're looking for. I don't think SQL Server 2005 has any great ways of doing this. 2008 introduces the MERGE statement that can be used to accomplish this, as shown in: http://www.databasejournal.com/features/mssql/article.php/3739131 or http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx Merge was available in the beta of 2005, but they removed it in the final release. A: What the upsert/merge is doing is something to the effect of... IF EXISTS (SELECT * FROM [Table] WHERE Id = X) UPDATE [Table] SET... ELSE INSERT INTO [Table] So hopefully the combination of those articles and this pseudo code can get things moving. A: I wrote a blog post about this issue. The bottom line is that if you want cheap updates and want to be safe for concurrent usage, try: update t set hitCount = hitCount + 1 where pk = @id if @@rowcount < 1 begin begin tran update t with (serializable) set hitCount = hitCount + 1 where pk = @id if @@rowcount = 0 begin insert t (pk, hitCount) values (@id,1) end commit tran end This way you have 1 operation for updates and a max of 3 operations for inserts. So, if you are generally updating, this is a safe cheap option. I would also be very careful not to use anything that is unsafe for concurrent usage. It's really easy to get primary key violations or duplicate rows in production.
{ "language": "en", "url": "https://stackoverflow.com/questions/234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: Create a SQLite database based on an XSD Data Set Does anybody know if there is a way to create an SQLite database based on an XSD DataSet? In the past, I've just used a basic SQLite manager, but I want to fuse things a bit more with my .NET development if possible. A: Perhaps you could use an XSL transformation to convert the XSD into SQL table definitions. Likewise, I can't find any prior examples of this, but I'd have thought it would be possible. I'd imagine a generic XSLT for XSD->SQL, once written, could be applied to any such scenario (and would be cross-platform too). Perhaps somebody has done this before... A: I'm sure you could write a small app that takes an XSD file and parses it into a SQL script. I've never seen code out there to do it though, but that's not saying it doesn't exist. A: I suspect in the general case this is hard; XML Schema allows for some quite bizarre type constructions. I'm not sure how you would do substitution groups, or extensions to restrictions of types for example. However it should be possible to knock something together quite quickly (especially mapping from the classes in System.Xml.Schema) that works for 90% of schemas (i.e. sequence and choice elements with a few simple data types).
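To make the "90% of schemas" idea concrete, here is a rough, hedged C# sketch that walks a compiled schema with System.Xml.Schema and prints CREATE TABLE statements. It only handles the flat case (a global element whose complex type is a plain sequence of simple-typed children); a real DataSet XSD nests its tables one level down, which would need one more level of recursion, and the type mapping is deliberately naive:

using System;
using System.Xml.Schema;

class XsdToSqlite
{
    // Deliberately naive XSD -> SQLite type mapping; extend as needed.
    static string SqlType(XmlSchemaDatatype t)
    {
        if (t == null) return "TEXT";
        switch (Type.GetTypeCode(t.ValueType))
        {
            case TypeCode.Int16:
            case TypeCode.Int32:
            case TypeCode.Int64: return "INTEGER";
            case TypeCode.Single:
            case TypeCode.Double:
            case TypeCode.Decimal: return "REAL";
            default: return "TEXT";
        }
    }

    // Emits CREATE TABLE for one element whose complex type is a plain
    // sequence of simple-typed child elements.
    static void EmitTable(XmlSchemaElement table)
    {
        var complex = table.ElementSchemaType as XmlSchemaComplexType;
        var seq = complex == null ? null : complex.ContentTypeParticle as XmlSchemaSequence;
        if (seq == null) return;

        Console.WriteLine("CREATE TABLE [{0}] (", table.Name);
        for (int i = 0; i < seq.Items.Count; i++)
        {
            var col = seq.Items[i] as XmlSchemaElement;
            if (col == null) continue;
            var datatype = col.ElementSchemaType == null ? null : col.ElementSchemaType.Datatype;
            Console.WriteLine("  [{0}] {1}{2}", col.Name, SqlType(datatype),
                i < seq.Items.Count - 1 ? "," : "");
        }
        Console.WriteLine(");");
    }

    static void Main()
    {
        var set = new XmlSchemaSet();
        set.Add(null, "DataSet.xsd"); // hypothetical XSD file name
        set.Compile();

        foreach (XmlSchemaElement root in set.GlobalElements.Values)
            EmitTable(root);
    }
}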
{ "language": "en", "url": "https://stackoverflow.com/questions/246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Adding scripting functionality to .NET applications I have a little game written in C#. It uses a database as back-end. It's a trading card game, and I wanted to implement the function of the cards as a script. What I mean is that I essentially have an interface, ICard, which a card class implements (public class Card056: ICard) and which contains a function that is called by the game. Now, to make the thing maintainable/moddable, I would like to have the class for each card as source code in the database and essentially compile it on first use. So when I have to add/change a card, I'll just add it to the database and tell my application to refresh, without needing any assembly deployment (especially since we would be talking about 1 assembly per card which means hundreds of assemblies). Is that possible? Register a class from a source file and then instantiate it, etc. ICard Cards[current] = new MyGame.CardLibrary.Card056(); Cards[current].OnEnterPlay(ref currentGameState); The language is C# but extra bonus if it's possible to write the script in any .NET language. A: You might be able to use IronRuby for that. Otherwise I'd suggest you have a directory where you place precompiled assemblies. Then you could have a reference in the DB to the assembly and class, and use reflection to load the proper assemblies at runtime. If you really want to compile at run-time you could use the CodeDOM, then you could use reflection to load the dynamic assembly. Microsoft documentation article which might help. A: If you don't want to use the DLR you can use Boo (which has an interpreter) or you could consider the Script.NET (S#) project on CodePlex. With the Boo solution you can choose between compiled scripts or using the interpreter, and Boo makes a nice scripting language, has a flexible syntax and an extensible language via its open compiler architecture. Script.NET looks nice too, though, and you could easily extend that language as well, as it's an open source project and uses a very friendly Compiler Generator (Irony.net). A: You could use any of the DLR languages, which provide a way to really easily host your own scripting platform. However, you don't have to use a scripting language for this. You could use C# and compile it with the C# code provider. As long as you load it in its own AppDomain, you can load and unload it to your heart's content. A: I'd suggest using LuaInterface as it has fully implemented Lua, where it appears that Nua is not complete and likely does not implement some very useful functionality (coroutines, etc). If you want to use some of the outside prepacked Lua modules, I'd suggest using something along the lines of 1.5.x as opposed to the 2.x series that builds fully managed code and cannot expose the necessary C API. A: I'm using LuaInterface1.3 + Lua 5.0 for a .NET 1.1 application. The issue with Boo is that every time you parse/compile/eval your code on the fly, it creates a set of boo classes so you will get memory leaks. Lua, on the other hand, does not do that, so it's very, very stable and works wonderfully (I can pass objects from C# to Lua and backwards). So far I haven't put it in PROD yet, but it seems very promising. I did have memory leak issues in PROD using LuaInterface + Lua 5.0, therefore I used Lua 5.2 and linked directly into C# with DllImport. The memory leaks were inside the LuaInterface library.
Lua 5.2: from http://luabinaries.sourceforge.net and http://sourceforge.net/projects/luabinaries/files/5.2/Windows%20Libraries/Dynamic/lua-5.2_Win32_dll7_lib.zip/download Once I did this, all my memory leaks were gone and the application was very stable. A: Oleg Shilo's C# Script solution (at The Code Project) really is a great introduction to providing script abilities in your application. A different approach would be to consider a language that is specifically built for scripting, such as IronRuby, IronPython, or Lua. IronPython and IronRuby are both available today. For a guide to embedding IronPython read How to embed IronPython script support in your existing app in 10 easy steps. Lua is a scripting language commonly used in games. There is a Lua compiler for .NET, available from CodePlex -- http://www.codeplex.com/Nua That codebase is a great read if you want to learn about building a compiler in .NET. A different angle altogether is to try PowerShell. There are numerous examples of embedding PowerShell into an application -- here's a thorough project on the topic: Powershell Tunnel A: Yes, I thought about that, but I soon figured out that another Domain-Specific-Language (DSL) would be a bit too much. Essentially, they need to interact with my gamestate in possibly unpredictable ways. For example, a card could have a rule "When this card enters play, all your undead minions gain +3 attack against flying enemies, except when the enemy is blessed". As trading card games are turn based, the GameState Manager will fire OnStageX events and let the cards modify other cards or the GameState in whatever way the card needs. If I try to create a DSL, I have to implement a rather large feature set and possibly constantly update it, which shifts the maintenance work to another part without actually removing it. That's why I wanted to stay with a "real" .NET language to essentially be able to just fire the event and let the card manipulate the gamestate in whatever way (within the limits of the code access security). A: The main application that my division sells does something very similar to provide client customisations (which means that I can't post any source). We have a C# application that loads dynamic VB.NET scripts (although any .NET language could be easily supported - VB was chosen because the customisation team came from an ASP background). Using .NET's CodeDom we compile the scripts from the database, using the VB CodeDomProvider (annoyingly it defaults to .NET 2; if you want to support 3.5 features you need to pass a dictionary with "CompilerVersion" = "v3.5" to its constructor). Use the CodeDomProvider.CompileAssemblyFromSource method to compile it (you can pass settings to force it to compile in memory only). This would result in hundreds of assemblies in memory, but you could put all the dynamic classes' code together into a single assembly and recompile the whole lot whenever any of them changes. This has the advantage that you could add a flag to compile on disk with a PDB for when you're testing, allowing you to debug through the dynamic code. A: The next version of .NET (5.0?) has had a lot of talk about opening the "compiler as a service", which would make things like direct script evaluation possible.
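For the C# CodeDOM route several answers mention, here is a minimal sketch of compiling a card's source from the database and instantiating it (the type name and the ICard cast come from the question; error handling is trimmed):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class CardCompiler
{
    public static object CompileAndCreate(string source, string typeName)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            options.ReferencedAssemblies.Add("System.dll");
            // Reference the host assembly so the script can see ICard.
            options.ReferencedAssemblies.Add(typeof(CardCompiler).Assembly.Location);

            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException(results.Errors[0].ToString());

            return results.CompiledAssembly.CreateInstance(typeName);
        }
    }
}

Usage would look something like: var card = (ICard)CardCompiler.CompileAndCreate(sourceFromDb, "MyGame.CardLibrary.Card056");. The caveat from the AppDomain answer above still applies: an assembly compiled this way cannot be unloaded unless it was loaded into a separate AppDomain.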
{ "language": "en", "url": "https://stackoverflow.com/questions/260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: GTK implementation of MessageBox I have been trying to implement Win32's MessageBox using GTK. The app uses SDL/OpenGL, so this isn't a GTK app. I handle the initialization (gtk_init) sort of stuff inside the MessageBox function as follows: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type) { GtkWidget *window = NULL; GtkWidget *dialog = NULL; gtk_init(&gtkArgc, &gtkArgv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); // gcallback calls gtk_main_quit() gtk_init_add((GtkFunction)gcallback, NULL); if (type & MB_YESNO) { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); } else { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); } gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog)); gtk_main(); gtk_widget_destroy(dialog); if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; break; case GTK_RESPONSE_YES: return IDYES; break; } } return IDOK; } Now, I am by no means an experienced GTK programmer, and I realize that I'm probably doing something horribly wrong. However, my problem is that the last dialog popped up by this function stays around until the process exits. Any ideas? A: To manage a dialog box with GTK+, use a GtkDialog and gtk_dialog_run() instead of managing a window and a main loop by yourself. EDIT / ADDENDUM: What I mean by "just use" is this: I don't understand why you create a window you never use and a main loop which seems useless (at least from the piece of code you posted). You can write something as short as: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type) { GtkWidget *dialog; /* Instead of 0, use GTK_DIALOG_MODAL to get a modal dialog box */ if (type & MB_YESNO) dialog = gtk_message_dialog_new(NULL, 0, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); else dialog = gtk_message_dialog_new(NULL, 0, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog)); gtk_widget_destroy(GTK_WIDGET(dialog)); if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; case GTK_RESPONSE_YES: return IDYES; } } return IDOK; } A: A few things: You are creating (and not using) an unnecessary toplevel window, named window. You can just delete these lines: window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); Also, the flow doesn't seem quite right. gtk_main() starts the GTK main loop, which blocks until something exits it. gtk_dialog_run() also starts a main loop, but it exits as soon as one of the buttons is clicked. I think it might be enough for you to remove the gtk_init_add() and gtk_main() calls, and simply deal with the return value. Also the gtk_widget_destroy() call is unnecessary, as the dialog window is automatically destroyed when gtk_dialog_run() returns. A: Hmm, ok.
I'd suggest code like this, then: typedef struct { int type; int result; } DialogData; static gboolean display_dialog(gpointer user_data) { DialogData *dialog_data = user_data; GtkWidget *dialog; if (dialog_data->type & MB_YESNO) dialog = gtk_message_dialog_new(...); else dialog = gtk_message_dialog_new(...); // Set title, etc. dialog_data->result = gtk_dialog_run(...); gtk_main_quit(); // Quits the main loop run in MessageBox() return FALSE; } int MessageBox(...) { DialogData dialog_data; dialog_data.type = type; gtk_idle_add(display_dialog, &dialog_data); gtk_main(); // Do stuff based on dialog_data.result } The struct is required because you need to pass around a couple pieces of data. The gtk_idle_add() call adds a method to be run when the main loop is running and idle, and the FALSE return value from the display_dialog() call means that it's only run once. After we get the result from the dialog, we quit the main loop. That'll cause the gtk_main() in your main MessageBox() method to return, and you'll be able to access the result from there.
{ "language": "en", "url": "https://stackoverflow.com/questions/263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: BerkeleyDB Concurrency * *What's the optimal level of concurrency that the C++ implementation of BerkeleyDB can reasonably support? *How many threads can I have hammering away at the DB before throughput starts to suffer because of resource contention? I've read the manual and know how to set the number of locks, lockers, database page size, etc., but I'd just like some advice from someone who has real-world experience with BDB concurrency. My application is pretty simple, I'll be doing gets and puts of records that are about 1KB each. No cursors, no deleting. A: I strongly agree with Daan's point: create a test program, and make sure the way in which it accesses data mimics as closely as possible the patterns you expect your application to have. This is extremely important with BDB because different access patterns yield very different throughput. Other than that, these are general factors I found to be of major impact on throughput: * *Access method (which in your case I guess is BTREE). *Level of persistency with which you configured BDB (for example, in my case the 'DB_TXN_WRITE_NOSYNC' environment flag improved write performance by an order of magnitude, but it compromises persistency) *Does the working set fit in cache? *Number of reads vs. writes. *How spread out your access is (remember that BTREE has page-level locking - so accessing different pages with different threads is a big advantage). *Access pattern - meaning how likely are threads to lock one another, or even deadlock, and what is your deadlock resolution policy (this one may be a killer). *Hardware (disk & memory for cache). This amounts to the following point: scaling a solution based on BDB so that it offers greater concurrency has two key ways of going about it; either minimize the number of locks in your design or add more hardware. A: Doesn't this depend on the hardware as well as number of threads and stuff? I would make a simple test and run it with increasing numbers of threads hammering and see what seems best. A: What I did when working against a database of unknown performance was to measure turnaround time on my queries. I kept upping the thread count until turn-around time dropped, and dropping the thread count until turn-around time improved (well, it was processes in my environment, but whatever). There were moving averages and all sorts of metrics involved, but the take-away lesson was: just adapt to how things are working at the moment. You never know when the DBAs will improve performance or hardware will be upgraded, or perhaps another process will come along to load down the system while you're running. So adapt. Oh, and another thing: avoid process switches if you can - batch things up. Oh, I should make this clear: this all happened at run time, not during development. A: The way I understand things, Samba created tdb to allow "multiple concurrent writers" for any particular database file. So if your workload has multiple writers your performance may be bad (as in, the Samba project chose to write its own system, apparently because it wasn't happy with Berkeley DB's performance in this case). On the other hand, if your workload has lots of readers, then the question is how well your operating system handles multiple readers. A: It depends on what kind of application you are building. Create a representative test scenario, and start hammering away. Then you will know the definitive answer.
Besides your use case, it also depends on CPU, memory, front-side bus, operating system, cache settings, etcetera. Seriously, just test your own scenario. If you need some numbers (that actually may mean nothing in your scenario): * *Oracle Berkeley DB: Performance Metrics and Benchmarks *Performance Metrics & Benchmarks: Berkeley DB
{ "language": "en", "url": "https://stackoverflow.com/questions/264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Best Practice: Collaborative Environment, Bin Directory, SVN What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment set up correctly. The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'... A: Is this a .Net specific question? Generally the best practice is to not check in anything which is built automatically from files that are already in SCM. All of that is ideally created as part of your automatic build process. If the bin directory you're referring to contains third-party binaries, rather than a build of your project, ignore (downvote?) this advice. A: Tree Surgeon is a great tool which creates an empty .NET development tree. It has been tweaked over years of use and implements lots of best practices. A: Maven helps quite a lot with this problem when I'm coding Java. We commit the pom.xml to the SCM, and the Maven repository contains all our dependencies. For me that seems like a nice way to do it. A: Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.web.dll or any other 3rd party dll that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies. All other 3rd party assemblies should be referenced through a relative path. My typical structure is: -Project --Project.sln --References ---StructureMap.dll ---NUnit.dll ---System.Web.Mvc.dll --Project.Web ---Project.Web.Proj ---Project.Web.Proj files --Project ---Project.Proj ---Project.Proj files Project.Web and Project reference the assemblies in the root/References folder relatively. These .dlls are checked into Subversion. Aside from that, */bin, */bin/*, and obj should be in your global ignore path (a concrete sketch follows at the end of this question). With this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution. A: We follow the practice of using a vendor directory which contains all vendor specific headers and binaries. The goal is that anybody should be able to build the product just by checking it out and running some top level build script.
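A concrete sketch of the ignore setup implied above (the property and config names are standard Subversion; the paths are examples):

# Ignore build output for one directory (run from the working copy root)
svn propset svn:ignore "bin
obj" Project/Project.Web

# Or set it once for every working copy via the runtime config file
# (~/.subversion/config on Unix, %APPDATA%\Subversion\config on Windows):
# [miscellany]
# global-ignores = bin obj *.user *.suo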
{ "language": "en", "url": "https://stackoverflow.com/questions/265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: How do you sort a dictionary by value? I often have to sort a dictionary (consisting of keys & values) by value. For example, I have a hash of words and respective frequencies that I want to order by frequency. There is a SortedList which is good for a single value (say frequency), that I want to map back to the word. SortedDictionary orders by key, not value. Some resort to a custom class, but is there a cleaner way? A: On a high level, you have no other choice than to walk through the whole Dictionary and look at each value. Maybe this helps: http://bytes.com/forum/thread563638.html Copy/Pasting from John Timney: Dictionary<string, string> s = new Dictionary<string, string>(); s.Add("1", "a Item"); s.Add("2", "c Item"); s.Add("3", "b Item"); List<KeyValuePair<string, string>> myList = new List<KeyValuePair<string, string>>(s); myList.Sort( delegate(KeyValuePair<string, string> firstPair, KeyValuePair<string, string> nextPair) { return firstPair.Value.CompareTo(nextPair.Value); } ); A: The other answers are good, if all you want is to have a "temporary" list sorted by Value. However, if you want to have a dictionary sorted by Key that automatically synchronizes with another dictionary that is sorted by Value, you could use the Bijection<K1, K2> class. Bijection<K1, K2> allows you to initialize the collection with two existing dictionaries, so if you want one of them to be unsorted, and you want the other one to be sorted, you could create your bijection with code like var dict = new Bijection<Key, Value>(new Dictionary<Key,Value>(), new SortedDictionary<Value,Key>()); You can use dict like any normal dictionary (it implements IDictionary<K, V>), and then call dict.Inverse to get the "inverse" dictionary which is sorted by Value. Bijection<K1, K2> is part of Loyc.Collections.dll, but if you want, you could simply copy the source code into your own project. Note: In case there are multiple keys with the same value, you can't use Bijection, but you could manually synchronize between an ordinary Dictionary<Key,Value> and a BMultiMap<Value,Key>. A: Actually in C#, dictionaries don't have sort() methods. As you are more interested in sorting by values, you can't get a value until you provide its key. In short, you need to iterate through them using LINQ's OrderBy(): var items = new Dictionary<string, int>(); items.Add("cat", 0); items.Add("dog", 20); items.Add("bear", 100); items.Add("lion", 50); // OrderBy() enumerates the items in sorted order. foreach (var item in items.OrderBy(k => k.Key)) { Console.WriteLine(item); // items are in sorted order } You can do one trick: var sortedDictByOrder = items.OrderBy(v => v.Value); or: var sortedKeys = from pair in dictName orderby pair.Value ascending select pair; It also depends on what kind of values you are storing: single (like string, int) or multiple (like List, Array, user defined class). If it's single you can make a list of it and then apply sort. If it's a user defined class, then that class must implement IComparable, ClassName : IComparable<ClassName>, and implement CompareTo(ClassName c), as that is faster and more object oriented than LINQ. A: Use LINQ: Dictionary<string, int> myDict = new Dictionary<string, int>(); myDict.Add("one", 1); myDict.Add("four", 4); myDict.Add("two", 2); myDict.Add("three", 3); var sortedDict = from entry in myDict orderby entry.Value ascending select entry; This would also allow for great flexibility in that you can select the top 10, top 20, or top 10%, etc.
Or if you are using your word frequency index for type-ahead, you could include a StartsWith clause as well. A: Use: using System.Linq; ... List<KeyValuePair<string, string>> myList = aDictionary.ToList(); myList.Sort( delegate(KeyValuePair<string, string> pair1, KeyValuePair<string, string> pair2) { return pair1.Value.CompareTo(pair2.Value); } ); Since you're targeting .NET 2.0 or above, you can simplify this into lambda syntax -- it's equivalent, but shorter. If you're targeting .NET 2.0 you can only use this syntax if you're using the compiler from Visual Studio 2008 (or above). var myList = aDictionary.ToList(); myList.Sort((pair1,pair2) => pair1.Value.CompareTo(pair2.Value)); A: You could use: var ordered = dict.OrderBy(x => x.Value).ToDictionary(x => x.Key, x => x.Value); A: Suppose we have a dictionary as: Dictionary<int, int> dict = new Dictionary<int, int>(); dict.Add(21,1041); dict.Add(213, 1021); dict.Add(45, 1081); dict.Add(54, 1091); dict.Add(3425, 1061); dict.Add(768, 1011); You can use a temporary dictionary to store the values as: Dictionary<int, int> dctTemp = new Dictionary<int, int>(); foreach (KeyValuePair<int, int> pair in dict.OrderBy(key => key.Value)) { dctTemp.Add(pair.Key, pair.Value); } A: Required namespace: using System.Linq; Dictionary<string, int> counts = new Dictionary<string, int>(); counts.Add("one", 1); counts.Add("four", 4); counts.Add("two", 2); counts.Add("three", 3); Order by descending: foreach (KeyValuePair<string, int> kvp in counts.OrderByDescending(key => key.Value)) { // some processing logic for each item if you want. } Order by ascending: foreach (KeyValuePair<string, int> kvp in counts.OrderBy(key => key.Value)) { // some processing logic for each item if you want. } A: You'd never be able to sort a dictionary anyway. They are not actually ordered. The guarantees for a dictionary are that the key and value collections are iterable, and values can be retrieved by index or key, but there is no guarantee of any particular order. Hence you would need to get the name value pair into a list. A: You do not sort entries in the Dictionary. The Dictionary class in .NET is implemented as a hashtable - this data structure is not sortable by definition. If you need to be able to iterate over your collection (by key) - you need to use SortedDictionary, which is implemented as a Binary Search Tree. In your case, however, the source structure is irrelevant, because it is sorted by a different field. You would still need to sort it by frequency and put it in a new collection sorted by the relevant field (frequency). So in this collection the frequencies are keys and words are values. Since many words can have the same frequency (and you are going to use it as a key) you can use neither Dictionary nor SortedDictionary (they require unique keys). This leaves you with a SortedList. I don't understand why you insist on maintaining a link to the original item in your main/first dictionary. If the objects in your collection had a more complex structure (more fields) and you needed to be able to efficiently access/sort them using several different fields as keys - you would probably need a custom data structure that would consist of the main storage that supports O(1) insertion and removal (LinkedList) and several indexing structures - Dictionaries/SortedDictionaries/SortedLists. These indexes would use one of the fields from your complex class as a key and a pointer/reference to the LinkedListNode in the LinkedList as a value.
You would need to coordinate insertions and removals to keep your indexes in sync with the main collection (LinkedList), and removals would be pretty expensive, I'd think. This is similar to how database indexes work - they are fantastic for lookups but they become a burden when you need to perform many insertions and deletions. All of the above is only justified if you are going to do some look-up heavy processing. If you only need to output them once sorted by frequency, then you could just produce a list of (anonymous) tuples: var dict = new SortedDictionary<string, int>(); // ToDo: populate dict var output = dict.OrderBy(e => e.Value).Select(e => new {frequency = e.Value, word = e.Key}).ToList(); foreach (var entry in output) { Console.WriteLine("frequency:{0}, word: {1}", entry.frequency, entry.word); } A: The easiest way to get a sorted Dictionary is to use the built-in SortedDictionary class: // Sorts sections according to the key value stored in the "sections" unsorted dictionary, which is passed as a constructor argument System.Collections.Generic.SortedDictionary<int, string> sortedSections = null; if (sections != null) { sortedSections = new SortedDictionary<int, string>(sections); } sortedSections will contain the sorted version of sections A: You can sort a Dictionary by value and save it back to itself (so that when you foreach over it the values come out in order): dict = dict.OrderBy(x => x.Value).ToDictionary(x => x.Key, x => x.Value); Sure, it may not be correct, but it works. Hyrum's Law means that this will very likely continue to work. A: Looking around, and using some C# 3.0 features, we can do this: foreach (KeyValuePair<string,int> item in keywordCounts.OrderBy(key => key.Value)) { // do something with item.Key and item.Value } This is the cleanest way I've seen and is similar to the Ruby way of handling hashes. A: You could use: Dictionary<string, string> dic = new Dictionary<string, string>(); var ordered = dic.OrderBy(x => x.Value); return ordered.ToDictionary(t => t.Key, t => t.Value); A: Or for fun you could use some LINQ extension goodness: var dictionary = new Dictionary<string, int> { { "c", 3 }, { "a", 1 }, { "b", 2 } }; dictionary.OrderBy(x => x.Value) .ForEach(x => Console.WriteLine("{0}={1}", x.Key, x.Value)); A: Sorting a SortedDictionary list to bind into a ListView control using VB.NET: Dim MyDictionary As SortedDictionary(Of String, MyDictionaryEntry) MyDictionaryListView.ItemsSource = MyDictionary.Values.OrderByDescending(Function(entry) entry.MyValue) Public Class MyDictionaryEntry ' Need Property for GridViewColumn DisplayMemberBinding Public Property MyString As String Public Property MyValue As Integer End Class XAML: <ListView Name="MyDictionaryListView"> <ListView.View> <GridView> <GridViewColumn DisplayMemberBinding="{Binding Path=MyString}" Header="MyStringColumnName"></GridViewColumn> <GridViewColumn DisplayMemberBinding="{Binding Path=MyValue}" Header="MyValueColumnName"></GridViewColumn> </GridView> </ListView.View> </ListView> A: Sort and print: var items = from pair in players_Dic orderby pair.Value descending select pair; // Display results. foreach (KeyValuePair<string, int> pair in items) { Debug.Log(pair.Key + " - " + pair.Value); } Change descending to ascending to change the sort order A: A dictionary by definition is an unordered associative structure that contains only values and keys in a hashable way. In other words, there is no predictable way to order a dictionary. For reference, read this article on the Python language's data structures:
Link: Python data structures A: Best way: var list = dict.Values.OrderByDescending(x => x).ToList(); var sortedData = dict.OrderBy(x => list.IndexOf(x.Value)); A: The following code snippet sorts a Dictionary by values. The code first creates a dictionary and then uses the OrderBy method to sort the items. public void SortDictionary() { // Create a dictionary with string key and Int16 value pair Dictionary<string, Int16> AuthorList = new Dictionary<string, Int16>(); AuthorList.Add("Mahesh Chand", 35); AuthorList.Add("Mike Gold", 25); AuthorList.Add("Praveen Kumar", 29); AuthorList.Add("Raj Beniwal", 21); AuthorList.Add("Dinesh Beniwal", 84); // Sorted by Value Console.WriteLine("Sorted by Value"); Console.WriteLine("============="); foreach (KeyValuePair<string, Int16> author in AuthorList.OrderBy(key => key.Value)) { Console.WriteLine("Key: {0}, Value: {1}", author.Key, author.Value); } } A: You can sort the Dictionary by value and get the result in a dictionary using the code below: Dictionary<string, string> ShareUserNewCopy = ShareUserCopy.OrderBy(x => x.Value).ToDictionary(pair => pair.Key, pair => pair.Value); A: Given you have a dictionary, you can sort it directly on values using the one-liner below: var x = (from c in dict orderby c.Value.Order ascending select c).ToDictionary(c => c.Key, c => c.Value);
{ "language": "en", "url": "https://stackoverflow.com/questions/289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "929" }
Q: Is there a version control system for database structure changes? I often run into the following problem. I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down. So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh. Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great. A: For Oracle, I use Toad, which can dump a schema to a number of discrete files (e.g., one file per table). I have some scripts that manage this collection in Perforce, but I think it should be easily doable in just about any revision control system. A: Have your initial create table statements in version control, then add alter table statements, but never edit files - just add more alter files, ideally named sequentially, or even as a "change set", so you can find all the changes for a particular deployment. The hardest part that I can see is tracking dependencies; e.g., for a particular deployment table B might need to be updated before table A. A: Take a look at the Oracle package DBMS_METADATA. In particular, the following methods are particularly useful: * *DBMS_METADATA.GET_DDL *DBMS_METADATA.SET_TRANSFORM_PARAM *DBMS_METADATA.GET_GRANTED_DDL Once you are familiar with how they work (pretty self explanatory) you can write a simple script to dump the results of those methods into text files that can be put under source control. Good luck! Not sure if there is something this simple for MSSQL. A: I write my db release scripts in parallel with coding, and keep the release scripts in a project-specific section in SS. If I make a change to the code that requires a db change, then I update the release script at the same time. Prior to release, I run the release script on a clean dev db (copied structure-wise from production) and do my final testing on it. A: I've done this off and on for years -- managing (or trying to manage) schema versions. The best approaches depend on the tools you have. If you can get the Quest Software tool "Schema Manager" you'll be in good shape. Oracle has its own, inferior tool that is also called "Schema Manager" (confusing much?) that I don't recommend. Without an automated tool (see other comments here about Data Dude) then you'll be using scripts and DDL files directly. Pick an approach, document it, and follow it rigorously. I like having the ability to re-create the database at any given moment, so I prefer to have a full DDL export of the entire database (if I'm the DBA), or of the developer schema (if I'm in product-development mode). A: PLSQL Developer, a tool from Allround Automations, has a plugin for repositories that works OK (but not great) with Visual SourceSafe. From the web: The Version Control Plug-In provides a tight integration between the PL/SQL Developer IDE and any Version Control System that supports the Microsoft SCC Interface Specification. This includes most popular Version Control Systems such as Microsoft Visual SourceSafe, Merant PVCS and MKS Source Integrity.
http://www.allroundautomations.com/plsvcs.html A: ER Studio allows you to reverse your database schema into the tool and you can then compare it to live databases. Example: Reverse your development schema into ER Studio -- compare it to production and it will list all of the differences. It can script the changes or just push them through automatically. Once you have a schema in ER Studio, you can either save the creation script or save it as a proprietary binary and save it in version control. If you ever want to go back to a past version of the schema, just check it out and push it to your db platform. A: There's a PHP5 "database migration framework" called Ruckusing. I haven't used it, but the examples show the idea: if you use the language to create the database as and when needed, you only have to track source files. A: In Ruby on Rails, there's a concept of a migration -- a quick script to change the database. You generate a migration file, which has rules to increase the db version (such as adding a column) and rules to downgrade the version (such as removing a column). Each migration is numbered, and a table keeps track of your current db version. To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way. The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version. A: We've used MS Team System Database Edition with pretty good success. It integrates with TFS version control and Visual Studio more-or-less seamlessly and allows us to manage stored procs, views, etc., easily. Conflict resolution can be a pain, but version history is complete once it's done. Thereafter, migrations to QA and production are extremely simple. It's fair to say that it's a version 1.0 product, though, and is not without a few issues. A: You can use Microsoft SQL Server Data Tools in Visual Studio to generate scripts for database objects as part of a SQL Server Project. You can then add the scripts to source control using the source control integration that is built into Visual Studio. Also, SQL Server Projects allow you to verify the database objects using a compiler and generate deployment scripts to update an existing database or create a new one. A: I'm a bit old-school, in that I use source files for creating the database. There are actually 2 files - project-database.sql and project-updates.sql - the first for the schema and persistent data, and the second for modifications. Of course, both are under source control. When the database changes, I first update the main schema in project-database.sql, then copy the relevant info to project-updates.sql, for instance ALTER TABLE statements. I can then apply the updates to the development database, test, and iterate until done well. Then, check in the files, test again, and apply to production. Also, I usually have a table in the db - Config - such as: CREATE TABLE Config ( cfg_tag VARCHAR(50), cfg_value VARCHAR(100) ); INSERT INTO Config(cfg_tag, cfg_value) VALUES ( 'db_version', '$Revision: $'), ( 'db_revision', '$Revision: $'); Then, I add the following to the update section: UPDATE Config SET cfg_value='$Revision: $' WHERE cfg_tag='db_revision'; The db_version only gets changed when the database is recreated, and the db_revision gives me an indication how far the db is off the baseline.
I could keep the updates in their own separate files, but I chose to mash them all together and use cut & paste to extract relevant sections. A bit more housekeeping is in order, i.e., remove ':' from $Revision 1.1 $ to freeze them. A: In the absence of a VCS for table changes I've been logging them in a wiki. At least then I can see when and why it was changed. It's far from perfect as not everyone is doing it and we have multiple product versions in use, but better than nothing. A: I'd recommend one of two approaches. First, invest in PowerDesigner from Sybase. Enterprise Edition. It allows you to design physical data models, and a whole lot more. But it comes with a repository that allows you to check in your models. Each new check-in can be a new version; it can compare any version to any other version and even to what is in your database at that time. It will then present a list of every difference and ask which should be migrated… and then it builds the script to do it. It's not cheap but it's a bargain at twice the price, and its ROI is about 6 months. The other idea is to turn on DDL auditing (works in Oracle). This will create a table with every change you make. If you query the changes from the timestamp you last moved your database changes to prod to right now, you'll have an ordered list of everything you've done. A few where clauses to eliminate zero-sum changes like create table foo; followed by drop table foo; and you can EASILY build a mod script. Why keep the changes in a wiki? That's double the work. Let the database track them for you. A: Schema Compare for Oracle is a tool specifically designed to migrate changes from one Oracle database to another. Please visit the URL below for the download link, where you will be able to use the software for a fully functional trial. http://www.red-gate.com/Products/schema_compare_for_oracle/index.htm A: Two book recommendations: "Refactoring Databases" by Ambler and Sadalage and "Agile Database Techniques" by Ambler. Someone mentioned Rails Migrations. I think they work great, even outside of Rails applications. I used them on an ASP application with SQL Server which we were in the process of moving to Rails. You check the migration scripts themselves into the VCS. Here's a post by Pragmatic Dave Thomas on the subject. A: MyBatis (formerly iBatis) has a schema migration tool for use on the command line. It is written in Java, though it can be used with any project. To achieve a good database change management practice, we need to identify a few key goals. Thus, the MyBatis Schema Migration System (or MyBatis Migrations for short) seeks to: * *Work with any database, new or existing *Leverage the source control system (e.g. Subversion) *Enable concurrent developers or teams to work independently *Make conflicts very visible and easily manageable *Allow for forward and backward migration (evolve, devolve respectively) *Make the current status of the database easily accessible and comprehensible *Enable migrations despite access privileges or bureaucracy *Work with any methodology *Encourage good, consistent practices A: Redgate has a product called SQL Source Control. It integrates with TFS, SVN, SourceGear Vault, Vault Pro, Mercurial, Perforce, and Git. A: I highly recommend SQL Delta. I just use it to generate the diff scripts when I'm done coding my feature and check those scripts into my source control tool (Mercurial :)) They have both a SQL Server and an Oracle version.
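To make the numbered-migration approach from the Rails and MyBatis answers above concrete, here is a minimal hand-rolled sketch in plain SQL. The file names and the SchemaVersion table are conventions invented for illustration, not part of any particular tool:

-- 003_add_customer_email.sql (the "up" script)
ALTER TABLE Customer ADD email VARCHAR(255) NULL;
UPDATE SchemaVersion SET version = 3;

-- 003_add_customer_email_down.sql (the "down" script, to devolve)
ALTER TABLE Customer DROP COLUMN email;
UPDATE SchemaVersion SET version = 2;

A deploy script then applies, in order, every up script whose number is greater than the version recorded in SchemaVersion.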
A: I wonder why no one has mentioned the open source tool liquibase, which is Java-based and should work for nearly every database that supports JDBC. Compared to Rails, it uses XML instead of Ruby to perform the schema changes. Although I dislike XML for domain-specific languages, the very cool advantage of XML is that liquibase knows how to roll back certain operations, like <createTable tableName="USER"> <column name="firstname" type="varchar(255)"/> </createTable> So you don't need to handle this on your own. Pure SQL statements or data imports are also supported. A: Most database engines should support dumping your database into a file. I know MySQL does, anyway. This will just be a text file, so you could submit that to Subversion, or whatever you use. It'd be easy to run a diff on the files too. A: If you're using SQL Server it would be hard to beat Data Dude (aka the Database Edition of Visual Studio). Once you get the hang of it, doing a schema compare between your source-controlled version of the database and the version in production is a breeze. And with a click you can generate your diff DDL. There's an instructional video on MSDN that's very helpful. I know about DBMS_METADATA and Toad, but if someone could come up with a Data Dude for Oracle then life would be really sweet.
{ "language": "en", "url": "https://stackoverflow.com/questions/308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "130" }
Q: PHP Session Security What are some guidelines for maintaining responsible session security with PHP? There's information all over the web and it's about time it all landed in one place! A: This session fixation paper has very good pointers on where attacks may come from. See also the session fixation page at Wikipedia. A: There are a couple of things to do in order to keep your session secure: * *Use SSL when authenticating users or performing sensitive operations. *Regenerate the session id whenever the security level changes (such as logging in). You can even regenerate the session id every request if you wish. *Have sessions time out *Don't use register globals *Store authentication details on the server. That is, don't send details such as username in the cookie. *Check the $_SERVER['HTTP_USER_AGENT']. This adds a small barrier to session hijacking. You can also check the IP address. But this causes problems for users that have a changing IP address due to load balancing on multiple internet connections etc (which is the case in our environment here). *Lock down access to the sessions on the file system or use custom session handling *For sensitive operations consider requiring logged-in users to provide their authentication details again A: Using the IP address isn't really the best idea in my experience. For example, my office has two IP addresses that get used depending on load and we constantly run into issues using IP addresses. Instead, I've opted for storing the sessions in a separate database for the domains on my servers. This way no one on the file system has access to that session info. This was really helpful with phpBB before 3.0 (they've since fixed this) but it's still a good idea I think. A: This is pretty trivial and obvious, but be sure to session_destroy after every use. This can be difficult to implement if the user does not log out explicitly, so a timer can be set to do this. Here is a good tutorial on setTimeout() and clearTimeout(). A: The main problem with PHP sessions and security (besides session hijacking) comes with what environment you are in. By default PHP stores the session data in a file in the OS's temp directory. Without any special thought or planning this is a world-readable directory, so all of your session information is public to anyone with access to the server. As for maintaining sessions over multiple servers: at that point it would be better to switch PHP to user-handled sessions where it calls your provided functions to CRUD (create, read, update, delete) the session data. At that point you could store the session information in a database or memcache-like solution so that all application servers have access to the data. Storing your own sessions may also be advantageous if you are on a shared server, because it will let you store them in the database, which you often have more control over than the filesystem. A: I set my sessions up like this, on the login page: $_SESSION['fingerprint'] = md5($_SERVER['HTTP_USER_AGENT'] . PHRASE . $_SESSION['REMOTE_ADDR']); (PHRASE defined on a config page) then in the header that is included throughout the rest of the site: session_start(); if ($_SESSION['fingerprint'] != md5($_SERVER['HTTP_USER_AGENT'] . PHRASE .
$_SERVER['REMOTE_ADDR'])) { session_destroy(); header('Location: http://website login page/'); exit(); } A: In php.ini: session.cookie_httponly = 1 Change the session name from the default PHPSESSID. E.g. with Apache, add the header: X-XSS-Protection 1 A: I would check both IP and User Agent to see if they change if ($_SESSION['user_agent'] != $_SERVER['HTTP_USER_AGENT'] || $_SESSION['user_ip'] != $_SERVER['REMOTE_ADDR']) { //Something fishy is going on here? } A: If you use session_set_save_handler() you can set your own session handler. For example you could store your sessions in the database. Refer to the php.net comments for examples of a database session handler. DB sessions are also good if you have multiple servers; otherwise, if you are using file-based sessions, you would need to make sure that each webserver had access to the same filesystem to read/write the sessions. A: You need to be sure the session data are safe. By looking at your php.ini or using phpinfo() you can find your session settings. session.save_path tells you where they are saved. Check the permission of the folder and of its parents. It shouldn't be public (/tmp) or be accessible by other websites on your shared server. Assuming you still want to use PHP sessions, you can set PHP to use another folder by changing session.save_path, or save the data in the database by changing session.save_handler. You might be able to set session.save_path in your php.ini (some providers allow it) or, for Apache + mod_php, in a .htaccess file in your site root folder: php_value session.save_path "/home/example.com/html/session". You can also set it at run time with session_save_path(). Check Chris Shiflett's tutorial or Zend_Session_SaveHandler_DbTable to set an alternative session handler. A: One guideline is to call session_regenerate_id every time a session's security level changes. This helps prevent session hijacking. A: I think one of the major problems (which is being addressed in PHP 6) is register_globals. Right now one of the standard methods used to avoid register_globals is to use the $_REQUEST, $_GET or $_POST arrays. The "correct" way to do it (as of 5.2, although it's a little buggy there, but stable as of 6, which is coming soon) is through filters. So instead of: $username = $_POST["username"]; you would do: $username = filter_input(INPUT_POST, 'username', FILTER_SANITIZE_STRING); or even just: $username = filter_input(INPUT_POST, 'username'); A: My two (or more) cents: * *Trust no one *Filter input, escape output (cookie, session data are your input too) *Avoid XSS (keep your HTML well formed, take a look at PHPTAL or HTMLPurifier) *Defense in depth *Do not expose data There is a tiny but good book on this topic: Essential PHP Security by Chris Shiflett. On the home page of the book you will find some interesting code examples and sample chapters. You may use the technique mentioned above (IP & User Agent), described here: How to avoid identity theft
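Pulling the cookie and regenerate-id advice above together, here is a minimal sketch of a hardened session bootstrap; the directives are standard php.ini settings (which could also be set in php.ini itself), but treat the exact arrangement as illustrative:

<?php
// Refuse session IDs passed in URLs and keep the cookie away from JavaScript
ini_set('session.use_only_cookies', '1');
ini_set('session.cookie_httponly', '1');
session_start();

// Call this whenever the privilege level changes, e.g. right after login:
function promote_session()
{
    session_regenerate_id(true); // true = discard the old session data
}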
{ "language": "en", "url": "https://stackoverflow.com/questions/328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125" }
Q: Should I use nested classes in this case? I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes which do the video decoding and video encoding. I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine. The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use, and to avoid any possible naming conflicts. I don't know about these justifications. Nested classes are a new concept to me. Just want to see what programmers think about the issue. A: You would use a nested class to create a (small) helper class that's required to implement the main class. Or for example, to define an interface (a class with abstract methods). In this case, the main disadvantage of nested classes is that this makes it harder to re-use them. Perhaps you'd like to use your VideoDecoder class in another project. If you make it a nested class of VideoPlayer, you can't do this in an elegant way. Instead, put the other classes in separate .h/.cpp files, which you can then use in your VideoPlayer class. The client of VideoPlayer now only needs to include the file that declares VideoPlayer, and still doesn't need to know about how you implemented it. A: One way of deciding whether or not to use nested classes is to think about whether this class plays a supporting role or its own part. If it exists solely for the purpose of helping another class then I generally make it a nested class. There are a whole load of caveats to that, some of which seem contradictory, but it all comes down to experience and gut feeling. A: Sounds like a case where you could use the Strategy pattern. A: Sometimes it's appropriate to hide the implementation classes from the user -- in these cases it's better to put them in a foo_internal.h than inside the public class definition. That way, readers of your foo.h will not see what you'd prefer they not be troubled with, but you can still write tests against each of the concrete implementations of your interface. A: We hit an issue with a semi-old Sun C++ compiler and visibility of nested classes, whose behavior changed in the standard. This is not a reason not to use nested classes, of course, just something to be aware of if you plan on compiling your software on lots of platforms, including old compilers. A: Well, if you use pointers to your workhorse classes in your interface class and don't expose them as parameters or return types in your interface methods, you will not need to include the definitions for those workhorses in your interface header file (you just forward-declare them instead). That way, users of your interface will not need to know about the classes in the background. You definitely don't need to nest classes for this. In fact, separate class files will actually make your code a lot more readable and easier to manage as your project grows. It will also help you later on if you need to subclass (say for different content/codec types). Here's more information on the PIMPL pattern (section 3.1.1).
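As a minimal sketch of that PIMPL idea applied to the question's design (all names here are illustrative, not taken from the poster's code):

// VideoPlayer.h -- the only header clients ever include
class VideoPlayerImpl;           // workhorse, defined only in VideoPlayer.cpp

class VideoPlayer {
public:
    VideoPlayer();
    ~VideoPlayer();              // defined where VideoPlayerImpl is complete
    void play();
    void stop();
private:
    VideoPlayerImpl* impl_;      // decoding/encoding machinery hides behind this
};

Clients recompile only when VideoPlayer.h changes; the workhorse can be reworked freely inside the .cpp file.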
A: I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure. My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem. I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation. A: You should use an inner class only when you cannot implement it as a separate class using the would-be outer class' public interface. Inner classes increase the size, complexity, and responsibility of a class so they should be used sparingly. Your encoder/decoder class sounds like it better fits the Strategy Pattern A: One reason to avoid nested classes is if you ever intend to wrap the code with swig (http://www.swig.org) for use with other languages. Swig currently has problems with nested classes, so interfacing with libraries that expose any nested classes becomes a real pain. A: Another thing to keep in mind is whether you ever envision different implementations of your work functions (such as decoding and encoding). In that case, you would definitely want an abstract base class with different concrete classes which implement the functions. It would not really be appropriate to nest a separate subclass for each type of implementation.
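A minimal sketch of the abstract-base-class arrangement several answers recommend, again with illustrative names only:

class Decoder {                        // abstract "workhorse" interface
public:
    virtual ~Decoder() {}
    virtual void decodeFrame() = 0;
};

class TheoraDecoder : public Decoder { // one concrete codec
public:
    void decodeFrame() { /* codec-specific work */ }
};

class VideoPlayer {                    // the public-facing front end
public:
    explicit VideoPlayer(Decoder* d) : decoder_(d) {}
    void play() { decoder_->decodeFrame(); }
private:
    Decoder* decoder_;                 // swap implementations at will
};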
{ "language": "en", "url": "https://stackoverflow.com/questions/330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: When to use unsigned values over signed ones? When is it appropriate to use an unsigned variable over a signed one? What about in a for loop? I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus. for (unsigned int i = 0; i < someThing.length(); i++) { SomeThing var = someThing.at(i); // You get the idea. } I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part. A: I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown. With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing the data into a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, the value was not valid for my business case. My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail. A: C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable unsigned and have the compiler generate code without warnings (assuming said warnings were turned on). Naturally, you're compiling with warnings turned all the way up, right? And, have you considered compiling with "treat warnings as errors" to take it that one step further? The downside with using signed numbers is that there's a temptation to overload them so that, for example, the values 0->n are the menu selection, and -1 means nothing's selected - rather than creating a class that has two variables, one to indicate if something is selected and another to store what that selection is. Before you know it, you're testing for negative one all over the place and the compiler is complaining about how you're wanting to compare the menu selection against the number of menu selections you have - but that's dangerous because they're different types. So don't do that. A: I was glad to find a good conversation on this subject, as I hadn't really given it much thought before. In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case). unsigned starts to make more sense when: * *You're going to do bitwise things like masks, or *You're desperate to take advantage of the sign bit for that extra positive range. Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against). A: size_t is often a good choice for this, or size_type if you're using an STL class. A: In your example above, when 'i' will always be positive and a higher range would be beneficial, unsigned would be useful. Like if you're using #define statements, such as: #define BIT1 ((unsigned int) 1) #define BIT32 ((unsigned int) reallybignumber) Especially when these values will never change. However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'. I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
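A small sketch of the mixed-comparison warning discussed above; compile with warnings turned up (e.g. -Wall -Wextra on GCC or Clang):

#include <cstddef>
#include <vector>

int main() {
    std::vector<int> v(10);

    // Mixing int with the unsigned size type draws a -Wsign-compare warning:
    for (int i = 0; i < v.size(); ++i) { }

    // Using std::size_t keeps the comparison homogeneous and warning-free:
    for (std::size_t i = 0; i < v.size(); ++i) { }
}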
{ "language": "en", "url": "https://stackoverflow.com/questions/336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: XML Processing in Python I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. A: There are 3 major ways of dealing with XML, in general: dom, sax, and xpath. The dom model is good if you can afford to load your entire xml file into memory at once, and you don't mind dealing with data structures, and you are looking at much/most of the model. The sax model is great if you only care about a few tags, and/or you are dealing with big files and can process them sequentially. The xpath model is a little bit of each -- you can pick and choose paths to the data elements you need, but it requires more libraries to use. If you want something straightforward and packaged with Python, minidom is your answer, but it's pretty lame, and the documentation is "here's docs on dom, go figure it out". It's really annoying. Personally, I like cElementTree, which is a faster (C-based) implementation of ElementTree, which is a dom-like model. I've used sax systems, and in many ways they're more "pythonic" in their feel, but I usually end up creating state-based systems to handle them, and that way lies madness (and bugs). I say go with minidom if you like research, or ElementTree if you want good code that works well. A: I've used ElementTree for several projects and recommend it. It's pythonic, comes 'in the box' with Python 2.5, including the C version cElementTree (xml.etree.cElementTree) which is 20 times faster than the pure Python version, and is very easy to use. lxml has some performance advantages, but they are uneven and you should check the benchmarks first for your use case. As I understand it, ElementTree code can easily be ported to lxml. A: It depends a bit on how complicated the document needs to be. I've used minidom a lot for writing XML, but that's usually been just reading documents, making some simple transformations, and writing them back out. That worked well enough until I needed the ability to order element attributes (to satisfy an ancient application that doesn't parse XML properly). At that point I gave up and wrote the XML myself. If you're only working on simple documents, then doing it yourself can be quicker and simpler than learning a framework. If you can conceivably write the XML by hand, then you can probably code it by hand as well (just remember to properly escape special characters, and use str.encode(codec, errors="xmlcharrefreplace")). Apart from these snafus, XML is regular enough that you don't need a special library to write it. If the document is too complicated to write by hand, then you should probably look into one of the frameworks already mentioned. At no point should you need to write a general XML writer. A: You can also try untangle to parse simple XML documents. A: Since you mentioned that you'll be building "fairly simple" XML, the minidom module (part of the Python Standard Library) will likely suit your needs. If you have any experience with the DOM representation of XML, you should find the API quite straightforward. A: I write a SOAP server that receives XML requests and creates XML responses.
(Unfortunately, it's not my project, so it's closed source, but that's another problem). It turned out for me that creating (SOAP) XML documents is fairly simple if you have a data structure that "fits" the schema. I keep the envelope since the response envelope is (almost) the same as the request envelope. Then, since my data structure is a (possibly nested) dictionary, I create a string that turns this dictionary into <key>value</key> items. This is a task that recursion makes simple, and I end up with the right structure. This is all done in Python code and is currently fast enough for production use. You can also (relatively) easily build lists as well, although depending upon your client, you may hit problems unless you give length hints. For me, this was much simpler, since a dictionary is a much easier way of working than some custom class. For the record, generating XML is much easier than parsing! A: For serious work with XML in Python, use lxml. Python comes with the built-in ElementTree library, but lxml extends it in terms of speed and functionality (schema validation, SAX parsing, XPath, various sorts of iterators and many other features). You have to install it, but in many places it is already assumed to be part of standard equipment (e.g. Google AppEngine does not allow C-based Python packages, but makes an exception for lxml, pyyaml, and a few others). Building XML documents with the E-factory (from lxml) Your question is about building an XML document. With lxml there are many methods, and it took me a while to find one that is easy to use and also easy to read. Sample code from the lxml doc on using the E-factory (slightly simplified): The E-factory provides a simple and compact syntax for generating XML and HTML: >>> from lxml.builder import E >>> html = page = ( ... E.html( # create an Element called "html" ... E.head( ... E.title("This is a sample document") ... ), ... E.body( ... E.h1("Hello!"), ... E.p("This is a paragraph with ", E.b("bold"), " text in it!"), ... E.p("This is another paragraph, with a", "\n ", ... E.a("link", href="http://www.python.org"), "."), ... E.p("Here are some reserved characters: <spam&egg>."), ... ) ... ) ... ) >>> print(etree.tostring(page, pretty_print=True)) <html> <head> <title>This is a sample document</title> </head> <body> <h1>Hello!</h1> <p>This is a paragraph with <b>bold</b> text in it!</p> <p>This is another paragraph, with a <a href="http://www.python.org">link</a>.</p> <p>Here are some reserved characters: &lt;spam&amp;egg&gt;.</p> </body> </html> What I appreciate about the E-factory is the following: Code reads almost like the resulting XML document Readability counts. Allows creation of any XML content Supports stuff like: * *use of namespaces *starting and ending text nodes within one element *functions formatting attribute content (see func CLASS in the full lxml sample) Allows very readable constructs with lists, e.g.: from lxml import etree from lxml.builder import E lst = ["alfa", "beta", "gama"] xml = E.root(*[E.record(itm) for itm in lst]) etree.tostring(xml, pretty_print=True) resulting in: <root> <record>alfa</record> <record>beta</record> <record>gama</record> </root> Conclusions I highly recommend reading the lxml tutorial - it is very well written and will give you many more reasons to use this powerful library. The only disadvantage of lxml is that it must be compiled. See this SO answer for more tips on how to install lxml from a wheel-format package within a fraction of a second.
A: Personally, I've played with several of the built-in options on an XML-heavy project and have settled on pulldom as the best choice for less complex documents. Especially for small simple stuff, I like the event-driven theory of parsing rather than setting up a whole slew of callbacks for a relatively simple structure. Here is a good quick discussion of how to use the API. What I like: you can handle the parsing in a for loop rather than using callbacks. You also delay full parsing (the "pull" part) and only get additional detail when you call expandNode(). This satisfies my general requirement for "responsible" efficiency without sacrificing ease of use and simplicity. A: ElementTree has a nice pythony API. I think it's even shipped as part of Python 2.5. It's in pure Python and, as I say, pretty nice, but if you wind up needing more performance, then lxml exposes the same API and uses libxml2 under the hood. You can theoretically just swap it in when you discover you need it. A: I strongly recommend the SAX - Simple API for XML - implementation in the Python libraries. It is fairly easy to set up, it processes large XML documents with an event-driven API, as discussed by previous posters here, and it has a low memory footprint, unlike validating DOM-style XML parsers. A: I assume that the .NET way of processing XML builds on some version of MSXML, and in that case I assume that using, for example, minidom would make you feel somewhat at home. However, if it is simple processing you are doing, any library will probably do. I also prefer working with ElementTree when dealing with XML in Python because it is a very neat library. A: If you're going to be building SOAP messages, check out soaplib. It uses ElementTree under the hood, but it provides a much cleaner interface for serializing and deserializing messages.
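Tying this back to the original task of building a small document and posting it to a web service, here is a minimal standard-library sketch in the Python 2 idiom the answers assume; the endpoint URL and element names are placeholders:

from xml.etree import ElementTree as ET
import urllib2

# Build the document in memory...
root = ET.Element('order')
item = ET.SubElement(root, 'item', {'sku': 'ABC-123'})
item.text = '2'
body = ET.tostring(root)  # -> '<order><item sku="ABC-123">2</item></order>'

# ...and post it to the service.
request = urllib2.Request('http://example.com/api', body,
                          {'Content-Type': 'application/xml'})
response = urllib2.urlopen(request)
print response.read()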
{ "language": "en", "url": "https://stackoverflow.com/questions/337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: Generate list of all possible permutations of a string How would I go about generating a list of all possible permutations of a string between x and y characters in length, containing a variable list of characters. Any language would work, but it should be portable. A: I just whipped this up quick in Ruby: def perms(x, y, possible_characters) all = [""] current_array = all.clone 1.upto(y) { |iteration| next_array = [] current_array.each { |string| possible_characters.each { |c| value = string + c next_array.insert next_array.length, value all.insert all.length, value } } current_array = next_array } all.delete_if { |string| string.length < x } end You might look into the language API for built-in permutation-type functions, and you might be able to write more optimized code, but if the numbers are all that high, I'm not sure there is much of a way around having a lot of results. Anyways, the idea behind the code is to start with a string of length 0, then keep track of all the strings of length Z where Z is the current size in the iteration. Then, go through each string and append each character onto each string. Finally at the end, remove any that were below the x threshold and return the result. I didn't test it with potentially meaningless input (null character list, weird values of x and y, etc). A: This is a translation of Mike's Ruby version, into Common Lisp: (defun perms (x y original-string) (loop with all = (list "") with current-array = (list "") for iteration from 1 to y do (loop with next-array = nil for string in current-array do (loop for c across original-string for value = (concatenate 'string string (string c)) do (push value next-array) (push value all)) (setf current-array (reverse next-array))) finally (return (nreverse (delete-if #'(lambda (el) (< (length el) x)) all))))) And another version, slightly shorter and using more loop facility features: (defun perms (x y original-string) (loop repeat y collect (loop for string in (or (car (last sets)) (list "")) append (loop for c across original-string collect (concatenate 'string string (string c)))) into sets finally (return (loop for set in sets append (loop for el in set when (>= (length el) x) collect el))))) A: Recursive solution in C++ #include <iostream> #include <string> using namespace std; void permute(int level, string permuted, bool used[], string &original); int main (int argc, char * const argv[]) { string s = "sarp"; bool used[4] = {false, false, false, false}; permute(0, "", used, s); } void permute(int level, string permuted, bool used [], string &original) { int length = original.length(); if(level == length) { // permutation complete, display cout << permuted << endl; } else { for(int i=0; i<length; i++) { // try to add an unused character if(!used[i]) { used[i] = true; permute(level+1, original[i] + permuted, used, original); // find the permutations starting with this string used[i] = false; } } } } A: Here is a simple word C# recursive solution: Method: public ArrayList CalculateWordPermutations(string[] letters, ArrayList words, int index) { bool finished = true; ArrayList newWords = new ArrayList(); if (words.Count == 0) { foreach (string letter in letters) { words.Add(letter); } } for(int j=index; j<words.Count; j++) { string word = (string)words[j]; for(int i =0; i<letters.Length; i++) { if(!word.Contains(letters[i])) { finished = false; string newWord = (string)word.Clone(); newWord += letters[i]; newWords.Add(newWord); } } } foreach (string newWord in newWords) { words.Add(newWord); } if(finished == false) { CalculateWordPermutations(letters, words, words.Count - newWords.Count); } return words; } Calling: string[] letters = new string[]{"a","b","c"}; ArrayList
words = CalculateWordPermutations(letters, new ArrayList(), 0); A: ... and here is the C version: void permute(const char *s, char *out, int *used, int len, int lev) { if (len == lev) { out[lev] = '\0'; puts(out); return; } int i; for (i = 0; i < len; ++i) { if (used[i]) continue; used[i] = 1; out[lev] = s[i]; permute(s, out, used, len, lev + 1); used[i] = 0; } return; } A: permute (ABC) -> A.perm(BC) -> A.perm[B.perm(C)] -> A.perm[(*BC), (CB*)] -> [(*ABC), (BAC), (BCA*), (*ACB), (CAB), (CBA*)] To remove duplicates, when inserting each letter check to see if the previous string ends with the same letter (why? - exercise) public static void main(String[] args) { for (String str : permStr("ABBB")){ System.out.println(str); } } static Vector<String> permStr(String str){ if (str.length() == 1){ Vector<String> ret = new Vector<String>(); ret.add(str); return ret; } char start = str.charAt(0); Vector<String> endStrs = permStr(str.substring(1)); Vector<String> newEndStrs = new Vector<String>(); for (String endStr : endStrs){ for (int j = 0; j <= endStr.length(); j++){ if (endStr.substring(0, j).endsWith(String.valueOf(start))) break; newEndStrs.add(endStr.substring(0, j) + String.valueOf(start) + endStr.substring(j)); } } return newEndStrs; } Prints all permutations sans duplicates A: In Perl, if you want to restrict yourself to the lowercase alphabet, you can do this: my @result = ("a" .. "zzzz"); This gives all possible strings between 1 and 4 characters using lowercase characters. For uppercase, change "a" to "A" and "zzzz" to "ZZZZ". For mixed-case it gets much harder, and probably not doable with one of Perl's builtin operators like that. A: Ruby answer that works: class String def each_char_with_index 0.upto(size - 1) do |index| yield(self[index..index], index) end end def remove_char_at(index) return self[1..-1] if index == 0 self[0..(index-1)] + self[(index+1)..-1] end end def permute(str, prefix = '') if str.size == 0 puts prefix return end str.each_char_with_index do |char, index| permute(str.remove_char_at(index), prefix + char) end end # example # permute("abc") A: The following Java recursion prints all permutations of a given string: //call it as permut("",str); public void permut(String str1,String str2){ if(str2.length() != 0){ char ch = str2.charAt(0); for(int i = 0; i <= str1.length();i++) permut(str1.substring(0,i) + ch + str1.substring(i,str1.length()), str2.substring(1,str2.length())); }else{ System.out.println(str1); } } Following is the updated version of the above "permut" method, which makes n! (n factorial) fewer recursive calls compared to the method above //call it as permut("",str); public void permut(String str1,String str2){ if(str2.length() > 1){ char ch = str2.charAt(0); for(int i = 0; i <= str1.length();i++) permut(str1.substring(0,i) + ch + str1.substring(i,str1.length()), str2.substring(1,str2.length())); }else{ char ch = str2.charAt(0); for(int i = 0; i <= str1.length();i++) System.out.println(str1.substring(0,i) + ch + str1.substring(i,str1.length())); } } A: There are several ways to do this. Common methods use recursion, memoization, or dynamic programming. The basic idea is that you produce a list of all strings of length 1, then in each iteration, for all strings produced in the last iteration, add that string concatenated with each character in the string individually.
(the variable index in the code below keeps track of the start of the last and the next iteration) Some pseudocode: index = (0,0) list = [""] for iteration n in 1 to y: index = (index[1], len(list)) for string s in list.subset(index[0] to end): for character c in originalString: list.add(s + c) You'd then need to remove all strings less than x in length; they'll be the first (x-1) * len(originalString) entries in the list. import java.util.*; public class all_subsets { public static void main(String[] args) { String a = "abcd"; for(String s: all_perm(a)) { System.out.println(s); } } public static Set<String> concat(String c, Set<String> lst) { HashSet<String> ret_set = new HashSet<String>(); for(String s: lst) { ret_set.add(c+s); } return ret_set; } public static HashSet<String> all_perm(String a) { HashSet<String> set = new HashSet<String>(); if(a.length() == 1) { set.add(a); } else { for(int i=0; i<a.length(); i++) { set.addAll(concat(a.charAt(i)+"", all_perm(a.substring(0, i)+a.substring(i+1, a.length())))); } } return set; } } A: I'm not sure why you would want to do this in the first place. The resulting set for any moderately large values of x and y will be huge, and will grow exponentially as x and/or y get bigger. Let's say your set of possible characters is the 26 lowercase letters of the alphabet, and you ask your application to generate all permutations where length = 5. Assuming you don't run out of memory you'll get 11,881,376 (i.e. 26 to the power of 5) strings back. Bump that length up to 6, and you'll get 308,915,776 strings back. These numbers get painfully large, very quickly. Here's a solution I put together in Java. You'll need to provide two runtime arguments (corresponding to x and y). Have fun. public class GeneratePermutations { public static void main(String[] args) { int lower = Integer.parseInt(args[0]); int upper = Integer.parseInt(args[1]); if (upper < lower || upper == 0 || lower == 0) { System.exit(0); } for (int length = lower; length <= upper; length++) { generate(length, ""); } } private static void generate(int length, String partial) { if (length <= 0) { System.out.println(partial); } else { for (char c = 'a'; c <= 'z'; c++) { generate(length - 1, partial + c); } } } } A: Here's a non-recursive version I came up with, in javascript. It's not based on Knuth's non-recursive one above, although it has some similarities in element swapping. I've verified its correctness for input arrays of up to 8 elements. A quick optimization would be pre-flighting the out array and avoiding push(). The basic idea is: * *Given a single source array, generate a first new set of arrays which swap the first element with each subsequent element in turn, each time leaving the other elements unperturbed. eg: given 1234, generate 1234, 2134, 3214, 4231. *Use each array from the previous pass as the seed for a new pass, but instead of swapping the first element, swap the second element with each subsequent element. Also, this time, don't include the original array in the output. *Repeat step 2 until done. Here is the code sample: function oxe_perm(src, depth, index) { var perm = src.slice(); // duplicates src.
perm = perm.split(""); perm[depth] = src[index]; perm[index] = src[depth]; perm = perm.join(""); return perm; } function oxe_permutations(src) { var out = new Array(); out.push(src); for (var depth = 0; depth < src.length; depth++) { var numInPreviousPass = out.length; for (var m = 0; m < numInPreviousPass; ++m) { for (var n = depth + 1; n < src.length; ++n) { out.push(oxe_perm(out[m], depth, n)); } } } return out; } A: It's better to use backtracking #include <stdio.h> #include <string.h> void swap(char *a, char *b) { char temp; temp = *a; *a = *b; *b = temp; } void print(char *a, int i, int n) { int j; if(i == n) { printf("%s\n", a); } else { for(j = i; j <= n; j++) { swap(a + i, a + j); print(a, i + 1, n); swap(a + i, a + j); } } } int main(void) { char a[100]; fgets(a, sizeof a, stdin); a[strcspn(a, "\n")] = '\0'; print(a, 0, strlen(a) - 1); return 0; } A: In ruby: str = "a" 100_000_000.times {puts str.next!} It is quite fast, but it is going to take some time =). Of course, you can start at "aaaaaaaa" if the short strings aren't interesting to you. I might have misinterpreted the actual question though - in one of the posts it sounded as if you just needed a bruteforce library of strings, but in the main question it sounds like you need to permute a particular string. Your problem is somewhat similar to this one: http://beust.com/weblog/archives/000491.html (list all integers in which none of the digits repeat themselves, which resulted in a whole lot of languages solving it, with the ocaml guy using permutations, and some java guy using yet another solution). A: I needed this today, and although the answers already given pointed me in the right direction, they weren't quite what I wanted. Here's an implementation using Heap's method. The length of the array must be at least 3 and for practical considerations not be bigger than 10 or so, depending on what you want to do, patience and clock speed. Before you enter your loop, initialise Perm(1 To N) with the first permutation, Stack(3 To N) with zeroes*, and Level with 2**. At the end of the loop call NextPerm, which will return false when we're done. * VB will do that for you. ** You can change NextPerm a little to make this unnecessary, but it's clearer like this. Option Explicit Function NextPerm(Perm() As Long, Stack() As Long, Level As Long) As Boolean Dim N As Long If Level = 2 Then Swap Perm(1), Perm(2) Level = 3 Else While Stack(Level) = Level - 1 Stack(Level) = 0 If Level = UBound(Stack) Then Exit Function Level = Level + 1 Wend Stack(Level) = Stack(Level) + 1 If Level And 1 Then N = 1 Else N = Stack(Level) Swap Perm(N), Perm(Level) Level = 2 End If NextPerm = True End Function Sub Swap(A As Long, B As Long) A = A Xor B B = A Xor B A = A Xor B End Sub 'This is just for testing. Private Sub Form_Paint() Const Max = 8 Dim A(1 To Max) As Long, I As Long Dim S(3 To Max) As Long, J As Long Dim Test As New Collection, T As String For I = 1 To UBound(A) A(I) = I Next Cls ScaleLeft = 0 J = 2 Do If CurrentY + TextHeight("0") > ScaleHeight Then ScaleLeft = ScaleLeft - TextWidth(" 0 ") * (UBound(A) + 1) CurrentY = 0 CurrentX = 0 End If T = vbNullString For I = 1 To UBound(A) Print A(I); T = T & Hex(A(I)) Next Print Test.Add Null, T Loop While NextPerm(A, S, J) J = 1 For I = 2 To UBound(A) J = J * I Next If J <> Test.Count Then Stop End Sub Other methods are described by various authors. Knuth describes two: one gives lexical order but is complex and slow; the other is known as the method of plain changes. Jie Gao and Dianjun Wang also wrote an interesting paper.
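Since the VB above may be hard to follow, here is a compact sketch of the same Heap's-method idea in Python; it is a sketch of the algorithm, not a line-for-line translation of the VB code:

def heap_perms(a, k=None):
    # Heap's method: permute list a in place, yielding each arrangement once.
    if k is None:
        k = len(a)
    if k == 1:
        yield tuple(a)
        return
    for i in range(k - 1):
        for p in heap_perms(a, k - 1):
            yield p
        if k % 2 == 0:          # even k: swap element i with the last
            a[i], a[k - 1] = a[k - 1], a[i]
        else:                   # odd k: swap the first with the last
            a[0], a[k - 1] = a[k - 1], a[0]
    for p in heap_perms(a, k - 1):
        yield p

print list(heap_perms(list("abc")))   # 6 tuples, one per permutation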
A: Here is a link that describes how to print permutations of a string. http://nipun-linuxtips.blogspot.in/2012/11/print-all-permutations-of-characters-in.html A: You are going to get a lot of strings, that's for sure... Where x and y are how you define them and r is the number of characters we are selecting from --if I am understanding you correctly. You should definitely generate these as needed and not get sloppy and say, generate a powerset and then filter the length of strings. The following definitely isn't the best way to generate these, but it's an interesting aside, nonetheless. Knuth (volume 4, fascicle 2, 7.2.1.3) tells us that (s,t)-combination is equivalent to s+1 things taken t at a time with repetition -- an (s,t)-combination is notation used by Knuth that is equal to (s+t choose t). We can figure this out by first generating each (s,t)-combination in binary form (so, of length (s+t)) and counting the number of 0's to the left of each 1. 10001000011101 --> becomes the permutation: {0, 3, 4, 4, 4, 1} A: This code in python, when called with allowed_characters set to [0,1] and 4 character max, would generate 2^4 results: ['0000', '0001', '0010', '0011', '0100', '0101', '0110', '0111', '1000', '1001', '1010', '1011', '1100', '1101', '1110', '1111'] def generate_permutations(chars = 4) : #modify if in need! allowed_chars = [ '0', '1', ] status = [] for tmp in range(chars) : status.append(0) last_char = len(allowed_chars) rows = [] for x in xrange(last_char ** chars) : rows.append("") for y in range(chars - 1 , -1, -1) : key = status[y] rows[x] = allowed_chars[key] + rows[x] for pos in range(chars - 1, -1, -1) : if(status[pos] == last_char - 1) : status[pos] = 0 else : status[pos] += 1 break; return rows import sys print generate_permutations() Hope this is of use to you. Works with any character, not only numbers A: Many of the previous answers used backtracking. This is the asymptotically optimal way O(n*n!) of generating permutations after initial sorting class Permutation { /* runtime -O(n) for generating nextPermutation * and O(n*n!) for generating all n! permutations with increasing sorted array as start * return true, if there exists next lexicographical sequence * e.g [a,b,c],3-> true, modifies array to [a,c,b] * e.g [c,b,a],3-> false, as it is largest lexicographic possible */ public static boolean nextPermutation(char[] seq, int len) { // 1 if (len <= 1) return false;// no more perm // 2: Find last j such that seq[j] <= seq[j+1]. Terminate if no such j exists int j = len - 2; while (j >= 0 && seq[j] >= seq[j + 1]) { --j; } if (j == -1) return false;// no more perm // 3: Find last l such that seq[j] <= seq[l], then exchange elements j and l int l = len - 1; while (seq[j] >= seq[l]) { --l; } swap(seq, j, l); // 4: Reverse elements j+1 ... count-1: reverseSubArray(seq, j + 1, len - 1); // return seq, add store next perm return true; } private static void swap(char[] a, int i, int j) { char temp = a[i]; a[i] = a[j]; a[j] = temp; } private static void reverseSubArray(char[] a, int lo, int hi) { while (lo < hi) { swap(a, lo, hi); ++lo; --hi; } } public static void main(String[] args) { String str = "abcdefg"; char[] array = str.toCharArray(); Arrays.sort(array); int cnt=0; do { System.out.println(new String(array)); cnt++; }while(nextPermutation(array, array.length)); System.out.println(cnt);//5040=7!
} //if we use "bab"-> "abb", "bab", "bba", 3(#permutations) } A: Non recursive solution according to Knuth, Python example: def nextPermutation(perm): k0 = None for i in range(len(perm)-1): if perm[i]<perm[i+1]: k0=i if k0 == None: return None l0 = k0+1 for i in range(k0+1, len(perm)): if perm[k0] < perm[i]: l0 = i perm[k0], perm[l0] = perm[l0], perm[k0] perm[k0+1:] = reversed(perm[k0+1:]) return perm perm=list("12345") while perm: print perm perm = nextPermutation(perm) A: You might look at "Efficiently Enumerating the Subsets of a Set", which describes an algorithm to do part of what you want - quickly generate all subsets of N characters from length x to y. It contains an implementation in C. For each subset, you'd still have to generate all the permutations. For instance if you wanted 3 characters from "abcde", this algorithm would give you "abc","abd", "abe"... but you'd have to permute each one to get "acb", "bac", "bca", etc. A: Some working Java code based on Sarp's answer: public class permute { static void permute(int level, String permuted, boolean used[], String original) { int length = original.length(); if (level == length) { System.out.println(permuted); } else { for (int i = 0; i < length; i++) { if (!used[i]) { used[i] = true; permute(level + 1, permuted + original.charAt(i), used, original); used[i] = false; } } } } public static void main(String[] args) { String s = "hello"; boolean used[] = {false, false, false, false, false}; permute(0, "", used, s); } } A: Here is a simple solution in C#. It generates only the distinct permutations of a given string. static public IEnumerable<string> permute(string word) { if (word.Length > 1) { char character = word[0]; foreach (string subPermute in permute(word.Substring(1))) { for (int index = 0; index <= subPermute.Length; index++) { string pre = subPermute.Substring(0, index); string post = subPermute.Substring(index); if (post.Contains(character)) continue; yield return pre + character + post; } } } else { yield return word; } } A: There are a lot of good answers here. I also suggest a very simple recursive solution in C++. #include <string> #include <iostream> template<typename Consume> void permutations(std::string s, Consume consume, std::size_t start = 0) { if (start == s.length()) consume(s); for (std::size_t i = start; i < s.length(); i++) { std::swap(s[start], s[i]); permutations(s, consume, start + 1); } } int main(void) { std::string s = "abcd"; permutations(s, [](std::string s) { std::cout << s << std::endl; }); } Note: strings with repeated characters will not produce unique permutations. A: Recursive Approach func StringPermutations(inputStr string) (permutations []string) { for i := 0; i < len(inputStr); i++ { inputStr = inputStr[1:] + inputStr[0:1] if len(inputStr) <= 2 { permutations = append(permutations, inputStr) continue } leftPermutations := StringPermutations(inputStr[0 : len(inputStr)-1]) for _, leftPermutation := range leftPermutations { permutations = append(permutations, leftPermutation+inputStr[len(inputStr)-1:]) } } return } A: Though this doesn't answer your question exactly, here's one way to generate every permutation of the letters from a number of strings of the same length: eg, if your words were "coffee", "joomla" and "moodle", you can expect output like "coodle", "joodee", "joffle", etc. Basically, the number of combinations is the (number of words) to the power of (number of letters per word). 
So, choose a random number between 0 and the number of combinations - 1, convert that number to base (number of words), then use each digit of that number as the indicator for which word to take the next letter from. eg: in the above example. 3 words, 6 letters = 729 combinations. Choose a random number: 465. Convert to base 3: 122020. Take the first letter from word 1, 2nd from word 2, 3rd from word 2, 4th from word 0... and you get... "joofle". If you wanted all the permutations, just loop from 0 to 728. Of course, if you're just choosing one random value, a much simpler less-confusing way would be to loop over the letters. This method lets you avoid recursion, should you want all the permutations, plus it makes you look like you know Maths(tm)! If the number of combinations is excessive, you can break it up into a series of smaller words and concatenate them at the end. A: c# iterative: public List<string> Permutations(char[] chars) { List<string> words = new List<string>(); words.Add(chars[0].ToString()); for (int i = 1; i < chars.Length; ++i) { int currLen = words.Count; for (int j = 0; j < currLen; ++j) { var w = words[j]; for (int k = 0; k <= w.Length; ++k) { var nstr = w.Insert(k, chars[i].ToString()); if (k == 0) words[j] = nstr; else words.Add(nstr); } } } return words; } A: A recursive solution in python. The good thing about this code is that it exports a dictionary, with keys as strings and all possible permutations as values. All possible string lengths are included, so in effect, you are creating a superset. If you only require the final permutations, you can delete other keys from the dictionary. In this code, the dictionary of permutations is global. At the base case, I store the value as both possibilities in a list. perms['ab'] = ['ab','ba']. For higher string lengths, the function refers to lower string lengths and incorporates the previously calculated permutations. The function does two things: * *calls itself with a smaller string *returns a list of permutations of a particular string if already available. If returned to itself, these will be used to append to the character and create newer permutations. Expensive for memory. perms = {} def perm(input_string): global perms if input_string in perms: return perms[input_string] # This will send a list of all permutations elif len(input_string) == 2: perms[input_string] = [input_string, input_string[-1] + input_string [-2]] return perms[input_string] else: perms[input_string] = [] for index in range(0, len(input_string)): new_string = input_string[0:index] + input_string[index +1:] perm(new_string) for entries in perms[new_string]: perms[input_string].append(input_string[index] + entries) return perms[input_string] A: def gen( x,y,list): #to generate all strings inserting y at different positions list = [] list.append( y+x ) for i in range( len(x) ): list.append( func(x,0,i) + y + func(x,i+1,len(x)-1) ) return list def func( x,i,j ): #returns x[i..j] z = '' for i in range(i,j+1): z = z+x[i] return z def perm( x , length , list ): #perm function if length == 1 : # base case list.append( x[len(x)-1] ) return list else: lists = perm( x , length-1 ,list ) lists_temp = lists #temporarily storing the list lists = [] for i in range( len(lists_temp) ) : list_temp = gen(lists_temp[i],x[length-2],lists) lists += list_temp return lists A: Recursive Solution with driver main() method. 
public class AllPermutationsOfString { public static void stringPermutations(String newstring, String remaining) { if(remaining.length()==0) System.out.println(newstring); for(int i=0; i<remaining.length(); i++) { String newRemaining = remaining.replaceFirst(remaining.charAt(i)+"", ""); stringPermutations(newstring+remaining.charAt(i), newRemaining); } } public static void main(String[] args) { String string = "abc"; AllPermutationsOfString.stringPermutations("", string); } } A: def permutation(str) posibilities = [] str.split('').each do |char| if posibilities.size == 0 posibilities[0] = char.downcase posibilities[1] = char.upcase else posibilities_count = posibilities.length posibilities = posibilities + posibilities posibilities_count.times do |i| posibilities[i] += char.downcase posibilities[i+posibilities_count] += char.upcase end end end posibilities end Here is my take on a non-recursive version A: The pythonic solution: from itertools import permutations s = 'ABCDEF' p = [''.join(x) for x in permutations(s)] A: Well here is an elegant, non-recursive, O(n!) solution: public static StringBuilder[] permutations(String s) { if (s.length() == 0) return null; int length = fact(s.length()); StringBuilder[] sb = new StringBuilder[length]; for (int i = 0; i < length; i++) { sb[i] = new StringBuilder(); } for (int i = 0; i < s.length(); i++) { char ch = s.charAt(i); int times = length / (i + 1); for (int j = 0; j < times; j++) { for (int k = 0; k < length / times; k++) { sb[j * length / times + k].insert(k, ch); } } } return sb; } // helper assumed by the code above: a plain iterative factorial private static int fact(int n) { int f = 1; for (int i = 2; i <= n; i++) { f *= i; } return f; } A: code written for java language : package namo.algorithms; import java.util.Scanner; public class Permuations { public static int totalPermutationsCount = 0; public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("input string : "); String inputString = sc.nextLine(); System.out.println("given input String ==> "+inputString+ " :: length is = "+inputString.length()); findPermuationsOfString(-1, inputString); System.out.println("**************************************"); System.out.println("total permutation strings ==> "+totalPermutationsCount); } public static void findPermuationsOfString(int fixedIndex, String inputString) { int currentIndex = fixedIndex +1; for (int i = currentIndex; i < inputString.length(); i++) { //swap elements and call the findPermuationsOfString() char[] carr = inputString.toCharArray(); char tmp = carr[currentIndex]; carr[currentIndex] = carr[i]; carr[i] = tmp; inputString = new String(carr); //System.out.println("chat At : current String ==> "+inputString.charAt(currentIndex)); if(currentIndex == inputString.length()-1) { totalPermutationsCount++; System.out.println("permuation string ==> "+inputString); } else { //System.out.println("in else block>>>>"); findPermuationsOfString(currentIndex, inputString); char[] rarr = inputString.toCharArray(); char rtmp = carr[i]; carr[i] = carr[currentIndex]; carr[currentIndex] = rtmp; inputString = new String(carr); } } } } A: The possible string permutations can be computed using a recursive function. Below is one possible solution.
public static String insertCharAt(String s, int index, char c) { StringBuffer sb = new StringBuffer(s); StringBuffer sbb = sb.insert(index, c); return sbb.toString(); } public static ArrayList<String> getPerm(String s, int index) { ArrayList<String> perm = new ArrayList<String>(); if (index == s.length()-1) { perm.add(String.valueOf(s.charAt(index))); return perm; } ArrayList<String> p = getPerm(s, index+1); char c = s.charAt(index); for(String pp : p) { for (int idx=0; idx<pp.length()+1; idx++) { String ss = insertCharAt(pp, idx, c); perm.add(ss); } } return perm; } public static void testGetPerm(String s) { ArrayList<String> perm = getPerm(s,0); System.out.println(s+" --> total permutation are :: "+perm.size()); System.out.println(perm.toString()); }
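Finally, for the original question's x-to-y-length variant, modern Python makes this nearly a one-liner via itertools; a sketch (itertools.product draws characters with repetition, so swap in itertools.permutations if each character may be used only once):

from itertools import product

def strings_between(chars, x, y):
    # Every string of length x..y drawn from chars; characters may repeat.
    for length in range(x, y + 1):
        for combo in product(chars, repeat=length):
            yield ''.join(combo)

print list(strings_between('ab', 1, 2))
# -> ['a', 'b', 'aa', 'ab', 'ba', 'bb']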
{ "language": "en", "url": "https://stackoverflow.com/questions/361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: How do you make sure email you send programmatically is not automatically marked as spam? This is a tricky one and I've always relied on techniques such as permission-based emails (i.e. only sending to people you have permission to send to) and not using blatantly spammish terminology. Of late, some of the emails I send out programmatically have started being shuffled into people's spam folder automatically and I'm wondering what I can do about it. This is despite the fact that these particular emails are not ones that humans would mark as spam; specifically, they are emails that contain license keys that people have paid good money for, so I don't think they're going to consider them spam. I figure this is a big topic in which I am essentially an ignorant simpleton. A: In the UK it's also best practice to include a real physical address for your company and its registered number. That way it's all open and honest and they're less likely to manually mark it as spam. A: I would add: provide real unsubscription upon a click on "Unsubscribe". I've seen real newsletters providing a dummy unsubscription link that upon click shows " has been unsubscribed successfully" but I will still receive further newsletters. A: The most important thing you can do is to make sure that the people you are sending email to are not likely going to hit the "Spam" button when they receive your email. So, stick to the following rules of thumb: * *Make sure you have permission from the people you are sending email to. Don't ever send email to someone who did not request it from you. *Clearly identify who you are right at the top of each message, and why the person is receiving the email. *At least once a month, send out a reminder email to people on your list (if you are running a list), forcing them to opt back in to the list in order to keep receiving communications from you. Yes, this will mean your list gets shorter over time, but the up-side is that the people on your list are "bought in" and will be less likely to flag your email. *Keep your content highly relevant and useful. *Give people an easy way to opt out of further communications. *Use an email sending service like SendGrid that works hard to maintain a good IP reputation. *Avoid using short links - these are often blacklisted. Following these rules of thumb will go a long way. A: I have had the same problem in the past on many sites I have done here at work. The only guaranteed method of making sure the user gets the email is to advise the user to add you to their safe list. Any other method is really only going to be something that can help, and isn't guaranteed. A: It could very well be the case that people who sign up for your service are entering emails with typing mistakes that you do not correct. For example: chris@gmial.com -or- james@hotnail.com. And such domains are configured to be used as spamtraps, which will automatically flag your email server's IP and/or domain and hurt its reputation. To avoid this, double-check the email address that is entered upon your product subscription. Also, send a confirmation email to really ensure that this email address is 100% validated by a human being, before you send them the product key or accept their subscription. The verification email should require the recipient to click a link or reply in order to really confirm that the owner of the mailbox is the person who signed up.
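To illustrate the double opt-in advice above, a minimal sketch using only Python's standard library; the addresses, host, and confirmation URL are placeholders you would replace with your own:

import smtplib
import uuid
from email.mime.text import MIMEText

def send_confirmation(address):
    token = uuid.uuid4().hex  # persist this next to the address before sending
    msg = MIMEText('Click to confirm your address: '
                   'http://example.com/confirm?token=' + token)
    msg['Subject'] = 'Please confirm your email address'
    msg['From'] = 'licenses@example.com'
    msg['To'] = address
    server = smtplib.SMTP('localhost')
    server.sendmail(msg['From'], [address], msg.as_string())
    server.quit()
    return token

Only after the token comes back on the confirmation URL would you deliver the license key or add the address to your list.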
A: It sounds like you are depending on some feedback to determine what is getting stuck on the receiving end. You should be checking the outbound mail yourself for obvious "spaminess". Buy any decent spam control system, and send your outbound mail through it. If you send any decent volume of mail, you should be doing this anyhow, because of the risk of sending outbound viruses, especially if you have desktop windows users. Proofpoint had spam + anti-virus + some reputation services in a single deployment, for example. (I used to work there, so I happen to know this off the top of my head. I'm sure other vendors in this space have similar features.) But you get the idea. If you send your mail through a basic commerical spam control setup, and it doesn't pass, it shouldn't be going out of your network. Also, there are some companies that can assist you with increasing delivery rates of non-spam, outbound email, like Habeas. A: Google has a tool and guidelines for this. You can find them on: https://postmaster.google.com/ Register and verify your domain name and Google provides an individual scoring of that IP-address and domain. From the bulk senders guidelines: Authentication ensures that your messages can be correctly classified. Emails that lack authentication are likely to be rejected or placed in the spam folder, given the high likelihood that they are forged messages used for phishing scams. In addition, unauthenticated emails with attachments may be outrightly rejected, for security reasons. To ensure that Gmail can identify you: * *Use a consistent IP address to send bulk mail. *Keep valid reverse DNS records for the IP address(es) from which you send mail, pointing to your domain. *Use the same address in the 'From:' header on every bulk mail you send. We also recommend the following: *Sign messages with DKIM. We do not authenticate messages signed with keys using fewer than 1024 bits. *Publish an SPF record. *Publish a DMARC policy. A: Use email authentication methods, such as SPF, and DKIM to prove that your emails and your domain name belong together, and to prevent spoofing of your domain name. The SPF website includes a wizard to generate the DNS information for your site. Check your reverse DNS to make sure the IP address of your mail server points to the domain name that you use for sending mail. Make sure that the IP-address that you're using is not on a blacklist Make sure that the reply-to address is a valid, existing address. Use the full, real name of the addressee in the To field, not just the email-address (e.g. "John Smith" <john@blacksmiths-international.com> ). Monitor your abuse accounts, such as abuse@yourdomain.example and postmaster@yourdomain.example. That means - make sure that these accounts exist, read what's sent to them, and act on complaints. Finally, make it really easy to unsubscribe. Otherwise, your users will unsubscribe by pressing the spam button, and that will affect your reputation. That said, getting Hotmail to accept your emails remains a black art. A: Sign up for an account on as many major email providers as possible (gmail/yahoo/hotmail/aol/etc). If you make changes to your emails, either major rewording, changes to the code that sends the emails, changes to your email servers, etc, make sure to send test messages to all your accounts and verify that they are not being marked as spam. A: I always use: https://www.mail-tester.com/ It gives me feedback on the technical part of sending an e-mail. 
Like SPF records, DKIM, SpamAssassin score and so on. Even though I know what is required, I continuously make errors, and mail-tester.com makes it easy to figure out what could be wrong. A: A few bullet points from a previous answer: * *Most important: Does the sender address ("From") belong to a domain that runs on the server you send the E-Mail from? If not, make it so. Never use sender addresses like xxx@gmail.com. Use reply-to if you need replies to arrive at a different address. *Is your server on a blacklist (e.g. check IP on spamhaus.org)? This is a possibility when you're on shared hosting and neighbours behave badly. *Are mails filtered by a spam filter? Open an account with a freemailer that has a spam folder and find out. Also, try sending mail to an address without any spam filtering at all. *Do you possibly need the fifth parameter "-f" of mail() to add a sender address? (See the mail() command in the PHP manual) *If you have access to log files, check those, of course. *Do you check the "from:" address for possible bounce mails ("Returned to sender")? You can also set up a separate "errors-to" address. A: You can tell your users to add your From address to their contacts when they complete their order; if they do so, it will help a lot. Otherwise, I would try to get a log from some of your users. Sometimes there are details about why it was flagged as spam in the headers of the message, which you could use to tweak the text. Other things you can try: * *Put your site name or address in the subject *Keep all links in the message pointing to your domain (and not email.com) *Put an address or other contact information in the email A: Confirm that you have the correct email address before sending out emails. If someone gives the wrong email address on sign-up, beat them over the head about it ASAP. Always include clear "how to unsubscribe" information in EVERY email. Do not require the user to log in to unsubscribe; it should be a unique URL for 1-click unsubscribe. This will prevent people from marking your mails as spam because "unsubscribing" is too hard. A: First of all, you need to ensure the required email authentication mechanisms like SPF and DKIM are in place. These two are prominent ways of proving that you were the actual sender of an email and that it is not spoofed. Second, you can check your domain name against different DNSBLs with a simple command in the terminal: dig a +short (domain-name).(blacklist-domain-name) e.g. dig a +short example.com.dsn.rfc-clueless.org > 127.0.0.2 In the above example, an answer means your domain "example.com" is listed on that blacklist - here rfc-clueless.org, which lists domains with compliance issues. (Note: I prefer the MultiValley and Pepipost tools for checking domain listings.) The from address/reply-to id should be proper, and always use a visible unsubscribe button within your email body (this will help your users leave your email list without killing your domain reputation). A: The intent of most programmatically generated email is transactional, triggered or alert in nature - which means these are important emails that should never land in spam. Having said that, there are multiple parameters that are considered before an email is flagged as spam.
While the quality of the email list is the most important parameter to consider, I am skipping it here because we are talking about important emails that are sent either to ourselves or to known email addresses. Apart from list quality, the other 3 important parameters are: * *Sender Reputation *Compliance with Email Standards and Authentication (SPF, DKIM, DMARC, rDNS) *Email content Sender Reputation = Reputation of Sending IP address + Reputation of Return Path/Envelope domain + Reputation of From Domain. There is no single answer to what your sender reputation is. This is because there are multiple authorities like SenderScore, Reputation Authority and so on who maintain reputation scores for your domain. Apart from that, ISPs like Gmail, Yahoo and Outlook also maintain the reputation of each domain at their end. But you can use free tools like GradeMyEmail to get a 360-degree view of your reputation, potential problems with your email settings, and any other compliance-related issues. Sometimes mail from a brand-new sending domain is also found to land in spam. You should check whether your domain is listed on any of the global blocklists; again, GradeMyEmail and MultiRBL are useful tools for identifying such listings. Once you're reasonably sure of the sender reputation score, you should check whether your email sending domain complies with all the email authentication standards: * *SPF *DKIM *DMARC *Reverse DNS For this, you can again use GradeMyEmail or MXToolbox to surface potential problems with your authentication. Your SPF, DKIM and DMARC checks should always PASS to ensure your emails comply with the standard email authentications. (In Gmail's "Show original" view, SPF, DKIM and DMARC should each read PASS.) Similarly, you can use tools like Mail-Tester, which scans the complete email content and flags the keywords that can trigger spam filters. A: In addition to all of the other answers, if you are sending HTML emails that contain URLs as linking text, make sure that the URL matches the linking text. I know that Thunderbird automatically flags them as being a scam if not. The wrong way: Go to your account now: <a href="http://www.paypal.com.phishers-anonymous.org/">http://www.paypal.com</a> The right way: Go to your account now: <a href="http://www.yourdomain.org/">http://www.yourdomain.org</a> Or use an unrelated linking text instead of a URL: <a href="http://www.yourdomain.org/">Click here to go to your account</a> A: You may consider a third-party email service that handles delivery issues: * *Exact Target *Vertical Response *Constant Contact *Campaign Monitor *Emma *Return Path *IntelliContact *SilverPop A: Delivering email can be like black magic sometimes. The reverse DNS is really important. I have found it to be very helpful to carefully track NDRs. I direct all of my NDRs to a single address and I have a Windows service parsing them out (Google ListNanny). I put as much information from the NDR as I can into a database, and then I run reports on it to see if I have suddenly started getting blocked by a certain domain. Also, you should avoid sending emails to addresses that previously produced an NDR, because that's generally a good indication of spam.
If you need to send out a bunch of customer service emails at once, it's best to put a delay in between each one, because if you send too many nearly identical emails to one domain at a time, you are sure to wind up on their blacklist. Some domains are just impossible to deliver to sometimes. Comcast.net is the worst. Make sure your IPs aren't listed on sites like http://www.mxtoolbox.com/blacklists.aspx. A: I hate to tell you, but I and others may be using white-list defaults to control our filtering of spam. This means that all e-mail from an unknown source is automatically spam and diverted into a spam folder. (I don't let my e-mail service delete spam, because I want to always review the arrivals for false positives, something that is pretty easy to do by a quick scan of the folder.) I even have e-mail from myself go to the spam bucket because (1) I usually don't send e-mail to myself and (2) there are spammers that fake my return address in spam sent to me. So to get out of the spam designation, I have to consider that your mail might be legitimate (from sender and subject information) and open it first in plaintext (my default for all incoming mail, spam or not) to see if it is legitimate. My spam folder will not use any links in e-mails, so I am protected against tricky image links and other misbehavior. If I want future arrivals from the same source to go to my inbox and not be diverted for spam review, I will specify that to my e-mail client. For those organizations that use bulk-mail forwarders and unique sender addresses per mail piece, that's too bad. They never get my approval and always show up in my spam folder, and if I'm busy I will never look at them. Finally, if an e-mail is not legible in plaintext, even when sent as HTML, I am likely to just delete it unless it is something that I know is of interest to me by virtue of the source and previous valuable experiences. As you can see, it is ultimately under a user's control, and there is no automated act that will convince such a system that your mail is legitimate from its structure alone. In this case, you need to play nice, don't do anything that is similar to phishing, and make it easy for users willing to trust your mail to add you to their white list. A: Yahoo uses a method called Sender ID, which can be configured at The SPF Setup Wizard and entered into your DNS. Also, one of the important things for Exchange, Hotmail, AOL, Yahoo, and others is to have a reverse DNS for your domain. Those will knock out most of the issues. However, you can never prevent a person intentionally blocking you or using custom rules. A: One of my application's emails was constantly being tagged as spam. It was HTML with a single link, which I sent as HTML in the body with a text/html content type. My most successful resolution to this problem was to compose the email so it looked like it was generated by an email client. I changed the email to be a multipart/alternative MIME document, and I now generate both text/plain and text/html parts. The email is no longer detected as junk by Outlook. A: You need a reverse DNS entry. You need to not send the same content to the same user twice. You need to test it with some common webmail and email clients. Personally I ran mine through a freshly installed SpamAssassin, a trained SpamAssassin, and multiple Hotmail, Gmail, and AOL accounts. But have you seen that spam that doesn't seem to link to or advertise anything? That's a spammer trying to affect your Bayesian filter.
If he can get a high rating and then include some of those words in his future emails, they might be automatically learned as good. So you can't really guess what a user's filter is going to be set to at the time of your mailing. Lastly, I did not sort my list by the domains, but randomized it. A: I've found that using the recipient's real first and last name in the body is a sure-fire way of getting through a spam filter. A: To allow DMARC checks for SPF to pass and also be aligned when using sendmail, make sure you are setting the envelope sender address (-f or -r parameter) to something that matches the domain in the From: header address. With PHP: Using PHP's built-in mail() function without setting the 5th parameter will cause DMARC SPF checks to be unaligned if not done correctly. By default, sendmail will send the email with the webserver's user as the RFC5321.MailFrom / Return-Path header. For example, say you are hosting your website domain.com on the host.com web server. If you do not set the additional parameters argument: mail($to,$subject,$message,$headers); // Wrong way The email recipient will receive an email with the following mail headers: Return-Path: <your-website-user@server.host.com> From: <your-website-user@domain.com> Even though this passes SPF checks, it will be unaligned (since domain.com and host.com do not match), which means that the DMARC SPF check will fail as unaligned. Instead, you must pass the envelope sender address to sendmail by including the 5th parameter in the PHP mail() function, for example: mail($to,$subject,$message,$headers, '-r bounce_email@domain.com'); // Right way In this case, the email recipient will receive an email with the following mail headers: Return-Path: <bounce_email@domain.com> From: <your-website-user@domain.com> Since both of these headers contain addresses from domain.com, SPF will pass and also be aligned, which means that DMARC will also pass the SPF check.
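The same alignment requirement applies outside PHP. As a hedged illustration, here is how the envelope sender can be set explicitly with Python's standard-library smtplib so that the Return-Path domain matches the From: domain; the host name and all addresses are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "noreply@domain.com"
msg["To"] = "customer@example.com"
msg["Subject"] = "Your license key"
msg.set_content("Thanks for your purchase ...")

with smtplib.SMTP("localhost") as smtp:
    # from_addr is the envelope sender (Return-Path), the equivalent of
    # PHP's -r flag above; keep it on the same domain as the From: header.
    smtp.send_message(msg, from_addr="bounce@domain.com",
                      to_addrs=["customer@example.com"])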
Q: What is the meaning of the type safety warning in certain Java generics casts? What is the meaning of the Java warning? Type safety: The cast from Object to List<Integer> is actually checking against the erased type List I get this warning when I try to cast an Object to a type with generic information, such as in the following code: Object object = getMyList(); List<Integer> list = (List<Integer>) object; A: This warning is there because Java is not actually storing type information at run-time in an object that uses generics. Thus, if object is actually a List<String>, there will be no ClassCastException at run-time until an item is accessed from the list that doesn't match the generic type declared for the variable. This can cause further complications if items are added to the list with this incorrect generic type information. Any code still holding a reference to the list but with the correct generic type information will now have an inconsistent list. To remove the warning, try: List<?> list = (List<?>) object; However, note that you will not be able to use certain methods such as add, because the compiler doesn't know if you are trying to add an object of an incorrect type. The above will work in a lot of situations, but if you have to use add, or some similarly restricted method, you will just have to suffer the yellow underline in Eclipse (or use a @SuppressWarnings annotation).
Q: Lucene Score results In Lucene, if you had multiple indexes that covered only one partition each, why does the same search on different indexes return results with different scores? The results from different servers match exactly. i.e. if I searched for: * *Name - John Smith *DOB - 11/11/1934 Partition 0 would return a score of 0.345 Partition 1 would return a score of 0.337 Both match exactly on name and DOB. A: You may also be interested in the output of the explain() method, and the resulting Explanation object, which will give you an idea of how things are scored the way they are. A: The scoring includes the Inverse Document Frequency (IDF). If the term "John Smith" appears in partition 0 a hundred times and in partition 1 only once, the score for a "John Smith" search will be higher in partition 1, as the term is scarcer there. To get round this you would either have to build your index over all partitions, or you would need to override the IDF. A: Because the score is determined per index, if I am not completely mistaken. If you have different indexes (more/less or different data that was indexed), the score will differ: http://lucene.apache.org/core/3_6_0/scoring.html (Warning: Contains Math :-))
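To see how partition-local statistics shift the score, here is a small illustrative computation of the IDF term used by Lucene's classic TF/IDF similarity, idf(t) = 1 + ln(numDocs / (docFreq + 1)); the document counts below are made up.

import math

def idf(num_docs, doc_freq):
    # Classic Lucene IDF: rarer terms get a larger weight.
    return 1 + math.log(num_docs / (doc_freq + 1))

print(idf(1000000, 100))  # partition 0: term is common -> smaller IDF
print(idf(1000000, 1))    # partition 1: term is rare   -> larger IDF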
Q: iPhone app in landscape mode, 2008 systems Please note that this question is from 2008 and is now of only historic interest. What's the best way to create an iPhone application that runs in landscape mode from the start, regardless of the position of the device? Both programmatically and using the Interface Builder. A: First I set in Info.plist <key>UIInterfaceOrientation</key> <string>UIInterfaceOrientationLandscapeRight</string> then I put this code in applicationDidFinishLaunching: CGAffineTransform rotate = CGAffineTransformMakeRotation(1.57079633); [window setTransform:rotate]; CGRect contentRect = CGRectMake(0, 0, 480, 320); window.bounds = contentRect; [window setCenter:CGPointMake(160.0f, 240.0f)]; This way I can work on the view in Interface Builder in landscape mode. A: sasb's and michaelpryor's answers appear to be correct, but if they're not working for you, try this alternative: - (void)applicationDidFinishLaunching:(UIApplication *)application { application.statusBarOrientation = UIInterfaceOrientationLandscapeRight; } Or this one: [[UIDevice currentDevice] setOrientation:UIInterfaceOrientationLandscapeRight]; Or this one: [application setStatusBarOrientation: UIInterfaceOrientationLandscapeRight animated:NO]; You may also have to call [window makeKeyAndVisible]; first. A few links: Developing in landscape mode, iPhone SDK: How to force Landscape mode only? @Robert: please refer to The iPhone SDK, NDA, and Stack Overflow. A: I'm surprised no one has come up with this answer yet: In all my tests, when dismissing a modal view controller, the parent view controller's preferred orientation set in shouldAutorotateToInterfaceOrientation is honored even when part of a UINavigationController. So the solution to this is simple: Create a dummy UIViewController with a UIImageView for a background. Set the image to the default.png image your app uses on startup. When viewWillAppear gets called in your root view controller, just present the dummy view controller without animation. When viewDidAppear gets called in your dummy view controller, dismiss the view controller with a nice cross-dissolve animation. Not only does this work, but it looks good! BTW, just for clarification, I do the root view controller's viewWillAppear like this: - (void)viewWillAppear:(BOOL)animated { if ( dummy != nil ) { [dummy setModalTransitionStyle:UIModalTransitionStyleCrossDissolve]; [self presentModalViewController:dummy animated:NO]; [dummy release]; dummy = nil; } ... } A: The latest iPhone OS Programming Guide has a full section on this, with sample code. I am sure this is a recent addition, so maybe you missed it. It explains all the conditions you have to comply with; basically... *set the Info.plist properties (this changes the position of the status bar, but not the view) *rotate your view manually around its center, in either your UIViewController's viewDidLoad: method or your applicationDidFinishLaunching: method, or implement auto rotation ("Autoresizing behaviors", page 124) Look for "Launching in Landscape Mode", page 102. A: Historic answer only. Spectacularly out of date. Please note that this answer is now hugely out of date. This answer is only a historical curiosity. Exciting news! As discovered by Andrew below, this problem has been fixed by Apple in 4.0+. It would appear it is NO longer necessary to force the size of the view on every view, and the specific serious problem of landscape "only working the first time" has been resolved.
As of April 2011, it is not possible to test or even build anything below 4.0, so the question is purely a historic curiosity. It's incredible how much trouble it caused developers for so long! Here is the original discussion and solution. This is utterly irrelevant now, as these systems are not even operable. It is EXTREMELY DIFFICULT to make this work fully -- there are at least three problems/bugs at play. Try this: interface builder landscape design. Note in particular that where it says "and you need to use shouldAutorotateToInterfaceOrientation properly everywhere" it means everywhere, all your fullscreen views. Hope it helps in this nightmare! An important reminder of the ADDITIONAL well-known problem at hand here: if you are trying to swap between MORE THAN ONE view (all landscape), IT SIMPLY DOES NOT WORK. It is essential to remember this or you will waste days on the problem. It is literally NOT POSSIBLE. It is the biggest open, known bug on the iOS platform. There is literally no way to make the hardware make the second view you load be landscape. The annoying but simple workaround, and what you must do, is have a trivial master UIViewController that does nothing but sit there and let you swap between your views. In other words, in iOS, because of a major known bug: [window addSubview:happyThing.view]; [window makeKeyAndVisible]; You can do that only once. Later, if you try to remove happyThing.view, and instead put in there newThing.view, IT DOES NOT WORK - AND THAT'S THAT. The machine will never rotate the view to landscape. There is no trick fix, even Apple cannot make it work. The workaround you must adopt is having an overall UIViewController that simply sits there and just holds your various views (happyThing, newThing, etc). Hope it helps! A: From the Apple Dev Site: To start your application in landscape mode so that the status bar is in the appropriate position immediately, edit your Info.plist file to add the UIInterfaceOrientation key with the appropriate value (UIInterfaceOrientationLandscapeRight or UIInterfaceOrientationLandscapeLeft), as shown in Listing 2. Listing 2: Starting your application in landscape mode <key>UIInterfaceOrientation</key> <string>UIInterfaceOrientationLandscapeRight</string> A: See this answer: Landscape Mode ONLY for iPhone or iPad * *add orientation to plist *shouldAutorotateToInterfaceOrientation = YES in all files Although if you're using mixed modes, you might be better off with [[UIDevice currentDevice] setOrientation:UIInterfaceOrientationLandscapeRight]; A: Summary and integration from all the posts, after testing it myself; check the update for 4.x, 5.x below. As of 3.2 you cannot change the orientation of a running application from code. But you can start an application with a fixed orientation, although doing so is not straightforward. Try this recipe: * *set your orientation to UISupportedInterfaceOrientations in the Info.plist file *in your window define a 480x320 "base view controller". Every other view will be added as a subview to its view. *in all view controllers set up the shouldAutorotateToInterfaceOrientation: method (to return the same value you defined in the plist, of course) *in all view controllers set a background view with self.view.frame = CGRectMake(0, 0, 480, 320) in the viewDidLoad method. Update (iOS 4.x, 5.x): the Apple iOS App Programming Guide has a "Launching in Landscape Mode" paragraph in the "Advanced App Tricks" chapter.
References: * *interface builder landscape design *interface builder landscape design-1
Q: Unload a COM control when working in VB6 IDE Part of my everyday work is maintaining and extending legacy VB6 applications. A common engine is written in C/C++ and VB6 uses these functions in order to improve performance. When it comes to asynchronous programming, a C interface is not enough, and we rely on COM controls to fire events to VB6. My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked. A solution I found is not to enable the control in VB but to use CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using, and I do not have access to IntelliSense, which is a pain. Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE? A: I'm pretty sure there's no good way to force VB6 to unload the control. Here's what I do... instead of running Visual C and Visual Basic side-by-side, run VB6 under VC: * *Load up VC *Open the project containing your COM objects *Edit, change, etc. *In VC, set the Output Executable to be VB6.EXE with appropriate command-line arguments to load the VB6 workspace *Now just hit F5 to launch the VB6 IDE and load your VB6 project *When you want to change the COM code again, exit VB6.EXE, make your changes, and hit F5 again. As long as you save your workspace VB6 will remember what windows you had open and all your project settings. Advantages of this method: * *You can set breakpoints in the COM object and debug it using a full source debugger *You can happily debug in C and VB at the same time *Whenever VB6 is running it always has the latest version of the COM DLLs
Q: Programmatically launch IE Mobile favorites screen Is there any way to launch IE Mobile's "Favorites" screen directly by specifying any command line parameter? A: How about running IE with the HTML favorites file as a parameter? IExplore file://\windows\fav.htm A: I think this is going to be quite difficult without code. Two options come to mind: * *Find out what Windows messages IE sends to open the favorites screen and replay these in your application. You would first need to see if IE is running and, if it is, bring it to the foreground. If not, then start the process. Maybe you can use Windows CE Remote Spy to find the right window and information about the Favorites button? *Another option is to work against the place where IE stores its favorites information. You would have to write your own UI to parse the favorites, etc. A: Create a link file with this line: 26#"\Windows\iexplore.exe" -f
Q: Implementation of "Remember me" in a Rails application My Rails app has a sign-in box with a "remember me" checkbox. Users who check that box should remain logged in even after closing their browser. I'm keeping track of whether users are logged in by storing their id in the user's session. But sessions are implemented in Rails as session cookies, which are not persistent. I can make them persistent: class ApplicationController < ActionController::Base before_filter :update_session_expiration_date private def update_session_expiration_date options = ActionController::Base.session_options unless options[:session_expires] options[:session_expires] = 1.year.from_now end end end But that seems like a hack, which is surprising for such common functionality. Is there any better way? Edit Gareth's answer is pretty good, but I would still like an answer from someone familiar with Rails 2 (because of its unique CookieSessionStore). A: The restful_authentication plugin has a good implementation of this: http://agilewebdevelopment.com/plugins/restful_authentication A: Note that you don't want to persist their session, just their identity. You'll create a fresh session for them when they return to your site. Generally you just assign a GUID to the user, write that to their cookie, then use it to look them up when they come back. Don't use their login name or user ID for the token as it could easily be guessed and allow crafty visitors to hijack other users' accounts. A: This worked like a charm for me: http://squarewheel.wordpress.com/2007/11/03/session-cookie-expiration-time-in-rails/ Now my CookieStore sessions expire after two weeks, whereby the user must submit their login credentials again in order to be persistently logged-in for another two weeks. Basically, it's as simple as: * *including one file in the vendor/plugins directory *setting the session expiry value in the application controller using just one line A: I would go for Devise for a brilliant authentication solution for Rails. A: You should almost certainly not be extending the session cookie to be long-lived. Although not dealing specifically with Rails, this article goes to some length to explain 'remember me' best practices. In summary though you should: * *Add an extra column to the user table to accept a large random value *Set a long-lived cookie on the client which combines the user id and the random value *When a new session starts, check for the existence of the id/value cookie and authenticate the new user if they match. The author also recommends invalidating the random value and resetting the cookie at every login. Personally I don't like that as you then can't stay logged into a site on two computers. I would tend to make sure my password changing function also reset the random value, thus locking out sessions on other machines. As a final note, the advice he gives on making certain functions (password change/email change etc) unavailable to auto-authenticated sessions is well worth following but rarely seen in the real world. A: I have spent a while thinking about this and came to some conclusions. Rails session cookies are tamper-proof by default, so you really don't have to worry about a cookie being modified on the client end.
Here is what I've done: * *Session cookie is set to be long-lived (6 months or so) *Inside the session store * *An 'expires on' date that is set to login + 24 hours *user id *Authenticated = true so I can allow for anonymous user sessions (not dangerous because of the cookie tamper protection) *I add a before_filter in the Application Controller that checks the 'expires on' part of the session. When the user checks the "Remember Me" box, I just set the session[:expireson] date to be login + 2 weeks. No one can steal the cookie and stay logged in forever or masquerade as another user, because the Rails session cookie is tamper-proof. A: I would suggest that you either take a look at the restful_authentication plugin, which has an implementation of this, or just switch your implementation to use the restful_authentication plugin. There is a good explanation about how to use this plugin at Railscasts: railscasts #67 restful_authentication Here is a link to the plugin itself restful_authentication
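For readers outside Rails, here is a framework-agnostic sketch (in Python, since the scheme itself is language-neutral) of the token approach described in the answers above: a random value per user, stored only as a hash, with the cookie carrying user id plus token. All names are invented for illustration.

import secrets
import hashlib

def issue_remember_token(user):
    token = secrets.token_urlsafe(32)  # random value, never the user id alone
    user.remember_digest = hashlib.sha256(token.encode()).hexdigest()
    user.save()
    # Set a long-lived, Secure + HttpOnly cookie: "remember=<id>:<token>"
    return "%s:%s" % (user.id, token)

def check_remember_cookie(cookie_value, load_user):
    user_id, _, token = cookie_value.partition(":")
    user = load_user(user_id)
    if user is None or not user.remember_digest:
        return None
    digest = hashlib.sha256(token.encode()).hexdigest()
    # Constant-time comparison, as with any credential check.
    return user if secrets.compare_digest(digest, user.remember_digest) else None

Resetting remember_digest on password change invalidates remembered sessions on other machines, as the article summarised above recommends.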
Q: How can I find the full path to a font from its display name on a Mac? I am using Photoshop's JavaScript API to find the fonts in a given PSD. Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on the disc. This is all happening in a Python program running on OS X, so I guess I'm looking for one of: * *Some Photoshop JavaScript *A Python function *An OS X API that I can call from Python A: Open up a terminal (Applications->Utilities->Terminal) and type this in: locate InsertFontHere This will spit out every file that has the name you want. Warning: there may be a lot to wade through. A: I haven't been able to find anything that does this directly. I think you'll have to iterate through the various font folders on the system: /System/Library/Fonts, /Library/Fonts, and there can probably be a user-level directory as well: ~/Library/Fonts. A: There must be a method in Cocoa to get a list of fonts; you would then have to use the PyObjC bindings to call it. Depending on what you need them for, you could probably just use something like the following: import os def get_font_list(): fonts = [] for font_path in ["/Library/Fonts", os.path.expanduser("~/Library/Fonts")]: if os.path.isdir(font_path): fonts.extend( [os.path.join(font_path, cur_font) for cur_font in os.listdir(font_path) ] ) return fonts A: Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you're wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef. Cocoa doesn't have any native support, at least as of 10.5, for getting the location of a font. A: I encountered similar requirements and ended up with this method: import os import sys def get_font_path(font): ttf_filename = os.path.basename(font) dirs = [] if sys.platform == "win32": # check the windows font repository # NOTE: must use uppercase WINDIR, to work around bugs in # 1.5.2's os.environ.get() windir = os.environ.get("WINDIR") if windir: dirs.append(os.path.join(windir, "fonts")) elif sys.platform in ("linux", "linux2"): lindirs = os.environ.get("XDG_DATA_DIRS", "") if not lindirs: # According to the freedesktop spec, XDG_DATA_DIRS should # default to /usr/share lindirs = "/usr/share" dirs += [ os.path.join(lindir, "fonts") for lindir in lindirs.split(":") ] elif sys.platform == "darwin": dirs += [ "/Library/Fonts", "/System/Library/Fonts", os.path.expanduser("~/Library/Fonts"), ] ext = os.path.splitext(ttf_filename)[1] first_font_with_a_different_extension = None for directory in dirs: for walkroot, walkdir, walkfilenames in os.walk(directory): for walkfilename in walkfilenames: if ext and walkfilename == ttf_filename: return os.path.join(walkroot, walkfilename) elif ( not ext and os.path.splitext(walkfilename)[0] == ttf_filename ): fontpath = os.path.join(walkroot, walkfilename) if os.path.splitext(fontpath)[1] == ".ttf": return fontpath if ( not ext and first_font_with_a_different_extension is None ): first_font_with_a_different_extension = fontpath if first_font_with_a_different_extension: return first_font_with_a_different_extension Note that the original code is from PIL. A: With matplotlib (pip3 install -U matplotlib): from matplotlib import font_manager fontmap = {font.name: font for font in font_manager.fontManager.ttflist} fontmap.update({font.name: font for font in font_manager.fontManager.afmlist}) print(f'Total fonts: {len(fontmap.keys())}') for
family in sorted(fontmap.keys()): font = fontmap[family] print(f'{family:<30}: {font.fname}') Sample output Total fonts: 312 .Aqua Kana : /System/Library/Fonts/AquaKana.ttc Academy Engraved LET : /System/Library/Fonts/Supplemental/Academy Engraved LET Fonts.ttf Al Bayan : /System/Library/Fonts/Supplemental/AlBayan.ttc American Typewriter : /System/Library/Fonts/Supplemental/AmericanTypewriter.ttc ... Zapf Dingbats : /System/Library/Fonts/ZapfDingbats.ttf ZapfDingbats : /usr/local/lib/python3.9/site-packages/matplotlib/mpl-data/fonts/pdfcorefonts/ZapfDingbats.afm Zapfino : /System/Library/Fonts/Supplemental/Zapfino.ttf NOTE: The font families from matplotlib do not seem to include all system fonts that are available for example to the PyQt5 library: from PyQt5.QtGui import QFontDatabase from PyQt5.QtWidgets import QApplication app = QApplication([]) print('\n'.join(QFontDatabase().families()))
Q: Homegrown consumption of web services I've been writing a few web services for a .NET app, and now I'm ready to consume them. I've seen numerous examples where there is homegrown code for consuming the service as opposed to using the auto-generated methods that Visual Studio creates when adding the web reference. Are there some advantages to this? A: No, what you're doing is fine. Don't let those people confuse you. If you've written the web services with .NET, then the reference proxies generated by .NET are going to be quite suitable. The situation you describe (where you are both producer and consumer) is the ideal situation. If you need to connect to a web service that is unknown at compile time, then you would want a more dynamic approach, where you deduce the 'shape' of the web service. But start by using the auto-generated proxy class, and don't worry about it until you hit a limitation. And when you do -- come back to stack overflow ;-)
Q: WinForms ComboBox data binding gotcha Assume you are doing something like the following List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" }; ComboBox box = new ComboBox(); box.DataSource = myitems; ComboBox box2 = new ComboBox(); box2.DataSource = myitems So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that Arrays are always passed by reference (learned that when I learned C :D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, wouldn't this achieve the functionality that is expected/desired? ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray(); A: This has to do with how data bindings are set up in the dotnet framework, especially the BindingContext. On a high level it means that if you haven't specified otherwise, each form and all the controls of the form share the same BindingContext. When you are setting the DataSource property, the ComboBox will use the BindingContext to get a ConcurrencyManager that wraps the list. The ConcurrencyManager keeps track of such things as the currently selected position in the list. When you set the DataSource of the second ComboBox, it will use the same BindingContext (the form's), which will yield a reference to the same ConcurrencyManager as above used to set up the data bindings. To get a more detailed explanation see BindingContext. A: A better workaround (depending on the size of the datasource) is to declare two BindingSource objects (new as of 2.00), bind the collection to those, and then bind those to the comboboxes. I enclose a complete example. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; namespace WindowsFormsApplication2 { public partial class Form1 : Form { private BindingSource source1 = new BindingSource(); private BindingSource source2 = new BindingSource(); public Form1() { InitializeComponent(); Load += new EventHandler(Form1Load); } void Form1Load(object sender, EventArgs e) { List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" }; ComboBox box = new ComboBox(); box.Bounds = new Rectangle(10, 10, 100, 50); source1.DataSource = myitems; box.DataSource = source1; ComboBox box2 = new ComboBox(); box2.Bounds = new Rectangle(10, 80, 100, 50); source2.DataSource = myitems; box2.DataSource = source2; Controls.Add(box); Controls.Add(box2); } } } If you want to confuse yourself even more, then try always declaring bindings in the constructor. That can result in some really curious bugs, hence I always bind in the Load event.
Q: Get a preview JPEG of a PDF on Windows? I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows? A: Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python. A: ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output): gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \ -dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \ -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \ -sOutputFile=$OUTPUT -f$INPUT where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.) This is good for two reasons: * *You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions. *ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step. Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m. A: You can use ImageMagick's convert utility for this, see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html : Convert taxes.pdf taxes.jpg Will convert a two page PDF file into [2] jpeg files: taxes.jpg.0, taxes.jpg.1 I can also convert these JPEGS to a thumbnail as follows: convert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg I can even convert the PDF directly to a jpeg thumbnail as follows: convert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg This will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two pages.
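Since the question is about a Python application, here is a hedged sketch that wraps the Ghostscript invocation shown above with the standard-library subprocess module; it assumes a gs binary is on the PATH and renders only the first page.

import subprocess

def pdf_first_page_to_jpeg(pdf_path, jpeg_path, dpi=72):
    # Flags mirror the gs command quoted above, limited to page 1.
    cmd = [
        "gs", "-q", "-dQUIET", "-dPARANOIDSAFER", "-dBATCH", "-dNOPAUSE",
        "-dNOPROMPT", "-dMaxBitmap=500000000", "-dFirstPage=1", "-dLastPage=1",
        "-sDEVICE=jpeg", "-dTextAlphaBits=4", "-dGraphicsAlphaBits=4",
        "-r%dx%d" % (dpi, dpi), "-sOutputFile=%s" % jpeg_path, pdf_path,
    ]
    subprocess.run(cmd, check=True)

pdf_first_page_to_jpeg("taxes.pdf", "thumbnail.jpg")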
Q: Frequent SystemExit in Ruby when making HTTP calls I have a Ruby on Rails Website that makes HTTP calls to an external Web Service. About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine. It's been happening since the site went live and I've had no luck tracking down what causes it. Ruby is version 1.8.6 and Rails is version 1.2.6. Anyone else have this problem? This is the error and stacktrace. A SystemExit occurred /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in `exit' /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in `exit_now_handler' /usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in `to_proc' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in `call' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in `sysread' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill' /usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout' /usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout' /usr/local/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill' /usr/local/lib/ruby/1.8/net/protocol.rb:116:in `readuntil' /usr/local/lib/ruby/1.8/net/protocol.rb:126:in `readline' /usr/local/lib/ruby/1.8/net/http.rb:2017:in `read_status_line' /usr/local/lib/ruby/1.8/net/http.rb:2006:in `read_new' /usr/local/lib/ruby/1.8/net/http.rb:1047:in `request' /usr/local/lib/ruby/1.8/net/http.rb:945:in `request_get' /usr/local/lib/ruby/1.8/net/http.rb:380:in `get_response' /usr/local/lib/ruby/1.8/net/http.rb:543:in `start' /usr/local/lib/ruby/1.8/net/http.rb:379:in `get_response' A: Using fcgi with Ruby is known to be very buggy. Practically everybody has moved to Mongrel for this reason, and I recommend you do the same. A: It's been awhile since I used FCGI, but I think an FCGI process could throw a SystemExit if the thread was taking too long. This could be the web service not responding or even a slow DNS query. Some Google results show a similar error with Python and FCGI, so moving to Mongrel would be a good idea. This post is the reference I used to set up Mongrel, and I still refer back to it. A: I used to get these all the time on Apache1/fastcgi. I think it's caused by fastcgi hanging up before Ruby is done. Switching to Mongrel is a good first step, but there's more to do. It's a bad idea to cull from web services on live pages, particularly from Rails. Rails is not thread-safe. The number of concurrent connections you can support equals the number of mongrels (or Passenger processes) in your cluster. If you have one mongrel and someone accesses a page that calls a web service that takes 10 seconds to time out, every request to your website will time out during that time. Most of the load balancers just cycle through your mongrels blindly, so if you have two mongrels, every other request will time out. Anything that can be unpredictably slow needs to happen in a job queue. The first hit to /slow/action adds the job to the queue, and /slow/action keeps on refreshing via page refreshes or queries via Ajax until the job is finished, and then you get your results from the job queue. There are a few job queues for Rails nowadays, but the oldest and probably most widely used one is BackgroundRB. Another alternative, depending on the nature of your app, is to cull the service every N minutes via cron, cache the data locally, and have your live page read from the cache. A: I would also take a look at Passenger.
It's a lot easier to get going than the traditional solution of Apache/nginx + Mongrel.
Q: Continuous Integration System for a Python Codebase I am starting to work on a hobby project with a Python codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to CruiseControl or TeamCity. I realize I could do this with hooks in most VCSes, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a Python codebase? A: We are using Bitten, which is integrated with Trac. And it's Python-based. A: TeamCity has some Python integration. But TeamCity is: * *not open-source *not small, but rather feature-rich *free for small-to-mid-sized teams. A: I have had very good experiences with Travis-CI for smaller code bases. The main advantages are: * *setup is done in less than half a screen of config file *you can do your own installation or just use the free hosted version *semi-automatic setup for GitHub repositories *no account needed on the website; login via GitHub Some limitations: * *Python is not supported as a first-class language (as of time of writing; but you can use pip and apt-get to install Python dependencies; see this tutorial) *code has to be hosted on GitHub (at least when using the official version) A: We run Buildbot - Trac at work. I haven't used it too much since my codebase isn't part of the release cycle yet. But we run the tests on different environments (OSX/Linux/Win) and it sends emails — and it's written in Python. A: One possibility is Hudson. It's written in Java, but there's integration with Python projects: Hudson embraces Python I've never tried it myself, however. (Update, Sept. 2011: After a trademark dispute Hudson has been renamed to Jenkins.) A: Second the Buildbot - Trac integration. You can find more information about the integration on the Buildbot website. At my previous job, we wrote and used the plugin they mention (tracbb). What the plugin does is rewrite all of the Buildbot URLs so you can use Buildbot from within Trac. (http://example.com/tracbb). The really nice thing about Buildbot is that the configuration is written in Python. You can integrate your own Python code directly into the configuration. It's also very easy to write your own BuildSteps to execute specific tasks. We used BuildSteps to get the source from SVN, pull the dependencies, publish test results to WebDAV, etcetera. I wrote an X10 interface so we could send signals with build results. When the build failed, we switched on a red lava lamp. When the build succeeded, a green lava lamp switched on. Good times :-) A: We use both Buildbot and Hudson for Jython development. Both are useful, but have different strengths and weaknesses. Buildbot's configuration is pure Python and quite simple once you get the hang of it (look at the epydoc-generated API docs for the most current info). Buildbot makes it easier to define non-testing tasks and distribute the testers. However, it really has no concept of individual tests, just textual, HTML, and summary output, so if you want to have multi-level browsable test output and so forth you'll have to build it yourself, or just use Hudson.
Hudson has terrific support for drilling down from overall results into test suites and individual tests; it also is great for comparing test output between builds, but the distributed (master/slave) stuff is comparatively more complicated because you need a Java environment on the slaves too; also, Hudson is less tolerant of flaky network links between the master and slaves. So, to get the benefits of both tools, we run a single instance of Hudson, which catches the common test failures, then we do multi-platform regression with Buildbot. Here are our instances: * *Jython Hudson *Jython buildbot
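If all you need is the behaviour described in the question - run the tests on every check-in and nag by e-mail on failure - a VCS hook can stand in for a full CI server. Here is a hedged sketch using only the Python 3 standard library; the test layout, host and addresses are placeholders.

import subprocess
import smtplib
from email.message import EmailMessage

# Run the test suite; capture output for the nag mail.
result = subprocess.run(
    ["python", "-m", "unittest", "discover", "-s", "tests"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    msg = EmailMessage()
    msg["From"] = "ci@example.com"
    msg["To"] = "dev-team@example.com"
    msg["Subject"] = "Tests FAILED on latest check-in"
    msg.set_content(result.stdout + "\n" + result.stderr)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)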
Q: The definitive guide to form-based website authentication Moderator note: This question is not a good fit for our question and answer format with the topicality rules which currently apply for Stack Overflow. We normally use a "historical lock" for such questions where the content still has value. However, the answers on this question are actively maintained and a historical lock doesn't permit editing of the answers. As such, a "wiki answer" lock has been applied to allow the answers to be edited. You should assume the topicality issues which are normally handled by a historical lock are present (i.e. this question is not a good example of an on-topic question for Stack Overflow). Form-based authentication for websites We believe that Stack Overflow should not just be a resource for very specific technical questions, but also for general guidelines on how to solve variations on common problems. "Form based authentication for websites" should be a fine topic for such an experiment. It should include topics such as: * *How to log in *How to log out *How to remain logged in *Managing cookies (including recommended settings) *SSL/HTTPS encryption *How to store passwords *Using secret questions *Forgotten username/password functionality *Use of nonces to prevent cross-site request forgeries (CSRF) *OpenID *"Remember me" checkbox *Browser autocompletion of usernames and passwords *Secret URLs (public URL protected by digest) *Checking password strength *E-mail validation *and much more about form based authentication... It should not include things like: * *Roles and authorization *HTTP basic authentication Please help us by: * *Suggesting subtopics *Submitting good articles about this subject *Editing the official answer A: Use OpenID Connect or User-Managed Access. As nothing is more efficient than not doing it at all. A: I do not think the above answer is "wrong" but there are large areas of authentication that are not touched upon (or rather the emphasis is on "how to implement cookie sessions", not on "what options are available and what are the trade-offs"). My suggested edits/answers are: * *The problem lies more in account setup than in password checking. *The use of two-factor authentication is much more secure than more clever means of password encryption *Do NOT try to implement your own login form or database storage of passwords, unless the data being stored is valueless at account creation and self-generated (that is, web 2.0 style like Facebook, Flickr, etc.) * *Digest Authentication is a standards-based approach supported in all major browsers and servers, that will not send a password even over a secure channel. This avoids any need to have "sessions" or cookies as the browser itself will re-encrypt the communication each time. It is the most "lightweight" development approach. However, I do not recommend this, except for public, low-value services. This is an issue with some of the other answers above - do not try to re-implement server-side authentication mechanisms - this problem has been solved and is supported by most major browsers. Do not use cookies. Do not store anything in your own hand-rolled database. Just ask, per request, if the request is authenticated. Everything else should be supported by configuration and third-party trusted software. So ... First, we are confusing the initial creation of an account (with a password) with the re-checking of the password subsequently.
If I am Flickr and creating your site for the first time, the new user has access to zero value (blank web space). I truly do not care if the person creating the account is lying about their name. If I am creating an account on the hospital intranet/extranet, the value lies in all the medical records, and so I do care about the identity of the account creator. This is the very, very hard part. The only decent solution is a web of trust. For example, you join the hospital as a doctor. You create a web page hosted somewhere with your photo, your passport number, and a public key, and hash them all with the private key. You then visit the hospital and the system administrator looks at your passport, sees if the photo matches you, and then hashes the web page/photo hash with the hospital private key. From now on we can securely exchange keys and tokens. As can anyone who trusts the hospital (there is the secret sauce BTW). The system administrator can also give you an RSA dongle or other two-factor authentication. But this is a lot of hassle, and not very web 2.0. However, it is the only secure way to create new accounts that have access to valuable information that is not self-created. * *Kerberos and SPNEGO - single sign-on mechanisms with a trusted third party - basically the user verifies against a trusted third party. (NB this is not in any way the not-to-be-trusted OAuth) *SRP - sort of clever password authentication without a trusted third party. But here we are getting into the realms of "it's safer to use two-factor authentication, even if that's costlier" *SSL client side - give the clients a public key certificate (supported in all major browsers - but raises questions over client machine security). In the end, it's a tradeoff - what is the cost of a security breach vs the cost of implementing more secure approaches. One day, we may see a proper PKI widely accepted and so no more hand-rolled authentication forms and databases. One day... A: When hashing, don't use fast hash algorithms such as MD5 (many hardware implementations exist). Use something like SHA-512. For passwords, slower hashes are better. The faster you can create hashes, the faster any brute-force checker can work. Slower hashes will therefore slow down brute forcing. A slow hash algorithm will make brute forcing impractical for longer passwords (8 digits +). A: My favourite rule in regards to authentication systems: use passphrases, not passwords. Easy to remember, hard to crack. More info: Coding Horror: Passwords vs. Pass Phrases A: Definitive Article Sending credentials The only practical way to send credentials 100% securely is by using SSL. Using JavaScript to hash the password is not safe. Common pitfalls for client-side password hashing: * *If the connection between the client and server is unencrypted, everything you do is vulnerable to man-in-the-middle attacks. An attacker could replace the incoming javascript to break the hashing or send all credentials to their server, they could listen to client responses and impersonate the users perfectly, etc. etc. SSL with trusted Certificate Authorities is designed to prevent MitM attacks. *The hashed password received by the server is less secure if you don't do additional, redundant work on the server. There's another secure method called SRP, but it's patented (although it is freely licensed) and there are few good implementations available. Storing passwords Don't ever store passwords as plaintext in the database.
Not even if you don't care about the security of your own site. Assume that some of your users will reuse the password of their online bank account. So, store the hashed password, and throw away the original. And make sure the password doesn't show up in access logs or application logs. OWASP recommends the use of Argon2 as your first choice for new applications. If this is not available, PBKDF2 or scrypt should be used instead. And finally, if none of the above is available, use bcrypt. Hashes by themselves are also insecure. For instance, identical passwords mean identical hashes--this makes hash lookup tables an effective way of cracking lots of passwords at once. Instead, store the salted hash. A salt is a string appended to the password prior to hashing - use a different (random) salt per user. The salt is a public value, so you can store it with the hash in the database. See here for more on this. This means that you can't send the user their forgotten passwords (because you only have the hash). Don't reset the user's password unless you have authenticated the user (users must prove that they are able to read emails sent to the stored (and validated) email address.) Security questions Security questions are insecure - avoid using them. Why? Anything a security question does, a password does better. Read PART III: Using Secret Questions in @Jens Roland's answer here in this wiki. Session cookies After the user logs in, the server sends the user a session cookie. The server can retrieve the username or id from the cookie, but nobody else can generate such a cookie (TODO explain mechanisms). Cookies can be hijacked: they are only as secure as the rest of the client's machine and other communications. They can be read from disk, sniffed in network traffic, lifted by a cross-site scripting attack, phished from a poisoned DNS so the client sends their cookies to the wrong servers. Don't send persistent cookies. Cookies should expire at the end of the client session (browser close or leaving your domain). If you want to autologin your users, you can set a persistent cookie, but it should be distinct from a full-session cookie. You can set an additional flag that the user has auto-logged in, and needs to log in for real for sensitive operations. This is popular with shopping sites that want to provide you with a seamless, personalized shopping experience but still protect your financial details. For example, when you return to visit Amazon, they show you a page that looks like you're logged in, but when you go to place an order (or change your shipping address, credit card etc.), they ask you to confirm your password. Financial websites such as banks and credit cards, on the other hand, only have sensitive data and should not allow auto-login or a low-security mode. List of external resources * *Dos and Don'ts of Client Authentication on the Web (PDF) 21-page academic article with many great tips. *Ask YC: Best Practices for User Authentication Forum discussion on the subject *You're Probably Storing Passwords Incorrectly Introductory article about storing passwords *Discussion: Coding Horror: You're Probably Storing Passwords Incorrectly Forum discussion about a Coding Horror article. *Never store passwords in a database! Another warning about storing passwords in the database. *Password cracking Wikipedia article on weaknesses of several password hashing schemes.
*Enough With The Rainbow Tables: What You Need To Know About Secure Password Schemes Discussion about rainbow tables and how to defend against them, and against other threats. Includes extensive discussion. A: PART I: How To Log In We'll assume you already know how to build a login+password HTML form which POSTs the values to a script on the server side for authentication. The sections below will deal with patterns for sound practical auth, and how to avoid the most common security pitfalls. To HTTPS or not to HTTPS? Unless the connection is already secure (that is, tunneled through HTTPS using SSL/TLS), your login form values will be sent in cleartext, which means anyone eavesdropping on the line between browser and web server will be able to read logins as they pass through. This type of wiretapping is done routinely by governments, but in general, we won't address 'owned' wires other than to say this: Just use HTTPS. In essence, the only practical way to protect against wiretapping/packet sniffing during login is by using HTTPS or another certificate-based encryption scheme (for example, TLS) or a proven & tested challenge-response scheme (for example, the Diffie-Hellman-based SRP). Any other method can be easily circumvented by an eavesdropping attacker. Of course, if you are willing to get a little bit impractical, you could also employ some form of two-factor authentication scheme (e.g. the Google Authenticator app, a physical 'cold war style' codebook, or an RSA key generator dongle). If applied correctly, this could work even with an unsecured connection, but it's hard to imagine that a dev would be willing to implement two-factor auth but not SSL. (Do not) Roll-your-own JavaScript encryption/hashing Given the perceived (though now avoidable) cost and technical difficulty of setting up an SSL certificate on your website, some developers are tempted to roll their own in-browser hashing or encryption schemes in order to avoid passing cleartext logins over an unsecured wire. While this is a noble thought, it is essentially useless (and can be a security flaw) unless it is combined with one of the above - that is, either securing the line with strong encryption or using a tried-and-tested challenge-response mechanism (if you don't know what that is, just know that it is one of the most difficult to prove, most difficult to design, and most difficult to implement concepts in digital security). While it is true that hashing the password can be effective against password disclosure, it is vulnerable to replay attacks, Man-In-The-Middle attacks / hijackings (if an attacker can inject a few bytes into your unsecured HTML page before it reaches your browser, they can simply comment out the hashing in the JavaScript), or brute-force attacks (since you are handing the attacker the username, salt and hashed password). CAPTCHAS against humanity CAPTCHA is meant to thwart one specific category of attack: automated dictionary/brute force trial-and-error with no human operator. There is no doubt that this is a real threat, however, there are ways of dealing with it seamlessly that don't require a CAPTCHA, specifically properly designed server-side login throttling schemes - we'll discuss those later.
Know that CAPTCHA implementations are not created alike; they often aren't human-solvable, most of them are actually ineffective against bots, all of them are ineffective against cheap third-world labor (according to OWASP, the current sweatshop rate is $12 per 500 tests), and some implementations may be technically illegal in some countries (see OWASP Authentication Cheat Sheet). If you must use a CAPTCHA, use Google's reCAPTCHA, since it is OCR-hard by definition (since it uses already OCR-misclassified book scans) and tries very hard to be user-friendly. Personally, I tend to find CAPTCHAS annoying, and use them only as a last resort when a user has failed to log in a number of times and throttling delays are maxed out. This will happen rarely enough to be acceptable, and it strengthens the system as a whole. Storing Passwords / Verifying logins This may finally be common knowledge after all the highly-publicized hacks and user data leaks we've seen in recent years, but it has to be said: Do not store passwords in cleartext in your database. User databases are routinely hacked, leaked or gleaned through SQL injection, and if you are storing raw, plaintext passwords, that is instant game over for your login security. So if you can't store the password, how do you check that the login+password combination POSTed from the login form is correct? The answer is hashing using a key derivation function. Whenever a new user is created or a password is changed, you take the password and run it through a KDF, such as Argon2, bcrypt, scrypt or PBKDF2, turning the cleartext password ("correcthorsebatterystaple") into a long, random-looking string, which is a lot safer to store in your database. To verify a login, you run the same hash function on the entered password, this time passing in the salt and compare the resulting hash string to the value stored in your database. Argon2, bcrypt and scrypt store the salt with the hash already. Check out this article on sec.stackexchange for more detailed information. The reason a salt is used is that hashing in itself is not sufficient -- you'll want to add a so-called 'salt' to protect the hash against rainbow tables. A salt effectively prevents two passwords that exactly match from being stored as the same hash value, preventing the whole database being scanned in one run if an attacker is executing a password guessing attack. A cryptographic hash should not be used for password storage because user-selected passwords are not strong enough (i.e. do not usually contain enough entropy) and a password guessing attack could be completed in a relatively short time by an attacker with access to the hashes. This is why KDFs are used - these effectively "stretch the key", which means that every password guess an attacker makes causes multiple repetitions of the hash algorithm, for example 10,000 times, which causes the attacker to guess the password 10,000 times slower. Session data - "You are logged in as Spiderman69" Once the server has verified the login and password against your user database and found a match, the system needs a way to remember that the browser has been authenticated. This fact should only ever be stored server side in the session data. If you are unfamiliar with session data, here's how it works: A single randomly-generated string is stored in an expiring cookie and used to reference a collection of data - the session data - which is stored on the server. If you are using an MVC framework, this is undoubtedly handled already. 
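As a concrete illustration of that last paragraph, here is a minimal Python sketch of issuing such a randomly-generated session identifier (a hedged sketch only: the in-memory store and the function name are my own assumptions, and any web framework will already provide an equivalent):

import secrets

sessions = {}  # server-side session store; production code would use a database, Redis, etc.

def create_session(user_id):
    # 32 bytes from a CSPRNG: long and unpredictable, so it cannot be guessed or enumerated
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user_id": user_id}
    return token  # sent to the browser in an expiring cookie; the data itself stays on the server

The cookie carrying this token should then have the attributes discussed next.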
If at all possible, make sure the session cookie has the Secure and HttpOnly flags set when sent to the browser. The HttpOnly flag provides some protection against the cookie being read through an XSS attack. The Secure flag ensures that the cookie is only sent back via HTTPS, and therefore protects against network sniffing attacks. The value of the cookie should not be predictable. Where a cookie referencing a non-existent session is presented, its value should be replaced immediately to prevent session fixation. Session state can also be maintained on the client side. This is achieved by using techniques like JWT (JSON Web Token). PART II: How To Remain Logged In - The Infamous "Remember Me" Checkbox Persistent Login Cookies ("remember me" functionality) are a danger zone; on the one hand, they are entirely as safe as conventional logins when users understand how to handle them; and on the other hand, they are an enormous security risk in the hands of careless users, who may use them on public computers and forget to log out, and who may not know what browser cookies are or how to delete them. Personally, I like persistent logins for the websites I visit on a regular basis, but I know how to handle them safely. If you are positive that your users know the same, you can use persistent logins with a clean conscience. If not - well, then you may subscribe to the philosophy that users who are careless with their login credentials brought it upon themselves if they get hacked. It's not like we go to our users' houses and tear off all those facepalm-inducing Post-It notes with passwords they have lined up on the edge of their monitors, either. Of course, some systems can't afford to have any accounts hacked; for such systems, there is no way you can justify having persistent logins. If you DO decide to implement persistent login cookies, this is how you do it: * *First, take some time to read Paragon Initiative's article on the subject. You'll need to get a bunch of elements right, and the article does a great job of explaining each. *And just to reiterate one of the most common pitfalls, DO NOT STORE THE PERSISTENT LOGIN COOKIE (TOKEN) IN YOUR DATABASE, ONLY A HASH OF IT! The login token is Password Equivalent, so if an attacker got their hands on your database, they could use the tokens to log in to any account, just as if they were cleartext login-password combinations. Therefore, use hashing (according to https://security.stackexchange.com/a/63438/5002 a weak hash will do just fine for this purpose) when storing persistent login tokens. PART III: Using Secret Questions Don't implement 'secret questions'. The 'secret questions' feature is a security anti-pattern. Read the paper from link number 4 from the MUST-READ list. You can ask Sarah Palin about that one, after her Yahoo! email account got hacked during a previous presidential campaign because the answer to her security question was... "Wasilla High School"! Even with user-specified questions, it is highly likely that most users will choose either: * *A 'standard' secret question like mother's maiden name or favorite pet *A simple piece of trivia that anyone could lift from their blog, LinkedIn profile, or similar *Any question that is easier to answer than guessing their password. Which, for any decent password, is every question you can imagine In conclusion, security questions are inherently insecure in virtually all their forms and variations, and should not be employed in an authentication scheme for any reason.
The true reason why security questions even exist in the wild is that they conveniently save the cost of a few support calls from users who can't access their email to get to a reactivation code. This at the expense of security and Sarah Palin's reputation. Worth it? Probably not. PART IV: Forgotten Password Functionality I already mentioned why you should never use security questions for handling forgotten/lost user passwords; it also goes without saying that you should never e-mail users their actual passwords. There are at least two more all-too-common pitfalls to avoid in this field: * *Don't reset a forgotten password to an autogenerated strong password - such passwords are notoriously hard to remember, which means the user must either change it or write it down - say, on a bright yellow Post-It on the edge of their monitor. Instead of setting a new password, just let users pick a new one right away - which is what they want to do anyway. (An exception to this might be if the users are universally using a password manager to store/manage passwords that would normally be impossible to remember without writing them down.) *Always hash the lost password code/token in the database. AGAIN, this code is another example of a Password Equivalent, so it MUST be hashed in case an attacker got their hands on your database. When a lost password code is requested, send the plaintext code to the user's email address, then hash it, save the hash in your database -- and throw away the original. Just like a password or a persistent login token. A final note: always make sure your interface for entering the 'lost password code' is at least as secure as your login form itself, or an attacker will simply use this to gain access instead. Making sure you generate very long 'lost password codes' (for example, 16 case-sensitive alphanumeric characters) is a good start, but consider adding the same throttling scheme that you do for the login form itself. PART V: Checking Password Strength First, you'll want to read this small article for a reality check: The 500 most common passwords Okay, so maybe the list isn't the canonical list of most common passwords on any system anywhere ever, but it's a good indication of how poorly people will choose their passwords when there is no enforced policy in place. Plus, the list looks frighteningly close to home when you compare it to publicly available analyses of recently stolen passwords. So: With no minimum password strength requirements, 2% of users use one of the top 20 most common passwords. Meaning: if an attacker gets just 20 attempts, 1 in 50 accounts on your website will be crackable. Thwarting this requires calculating the entropy of a password and then applying a threshold. The National Institute of Standards and Technology (NIST) Special Publication 800-63 has a set of very good suggestions. That, when combined with a dictionary and keyboard layout analysis (for example, 'qwertyuiop' is a bad password), can reject 99% of all poorly selected passwords at a level of 18 bits of entropy. Simply calculating password strength and showing a visual strength meter to a user is good, but insufficient. Unless it is enforced, a lot of users will most likely ignore it. And for a refreshing take on user-friendliness of high-entropy passwords, Randall Munroe's Password Strength xkcd is highly recommended. Utilize Troy Hunt's Have I Been Pwned API to check users' passwords against passwords compromised in public data breaches.
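To illustrate that last suggestion, here is a hedged Python sketch against the public Have I Been Pwned 'range' endpoint, which implements k-anonymity: only the first five characters of the password's SHA-1 hash are sent, and the response is a list of HASH-SUFFIX:COUNT lines (endpoint shape as I understand the public API; check its current documentation before relying on this):

import hashlib
import urllib.request

def pwned_count(password):
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "password-check-sketch"},  # the API expects a user agent
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # number of breach occurrences for this password
    return 0

A non-zero count means the password has appeared in a known breach and should be rejected (or at least strongly warned about).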
PART VI: Much More - Or: Preventing Rapid-Fire Login Attempts First, have a look at the numbers: Password Recovery Speeds - How long will your password stand up If you don't have the time to look through the tables in that link, here's the list of them: * *It takes virtually no time to crack a weak password, even if you're cracking it with an abacus *It takes virtually no time to crack an alphanumeric 9-character password if it is case insensitive *It takes virtually no time to crack an intricate, symbols-and-letters-and-numbers, upper-and-lowercase password if it is less than 8 characters long (a desktop PC can search the entire keyspace up to 7 characters in a matter of days or even hours) *It would, however, take an inordinate amount of time to crack even a 6-character password, if you were limited to one attempt per second! So what can we learn from these numbers? Well, lots, but we can focus on the most important part: the fact that preventing large numbers of rapid-fire successive login attempts (ie. the brute force attack) really isn't that difficult. But preventing it properly isn't as easy as it seems. Generally speaking, you have three choices that are all effective against brute-force attacks (and dictionary attacks, but since you are already employing a strong password policy, they shouldn't be an issue): * *Present a CAPTCHA after N failed attempts (annoying as hell and often ineffective -- but I'm repeating myself here) *Locking accounts and requiring email verification after N failed attempts (this is a DoS attack waiting to happen) *And finally, login throttling: that is, setting a time delay between attempts after N failed attempts (yes, DoS attacks are still possible, but at least they are far less likely and a lot more complicated to pull off). Best practice #1: A short time delay that increases with the number of failed attempts, like: * *1 failed attempt = no delay *2 failed attempts = 2 sec delay *3 failed attempts = 4 sec delay *4 failed attempts = 8 sec delay *5 failed attempts = 16 sec delay *etc. DoS attacking this scheme would be very impractical, since the resulting lockout time is slightly larger than the sum of the previous lockout times. To clarify: The delay is not a delay before returning the response to the browser. It is more like a timeout or refractory period during which login attempts to a specific account or from a specific IP address will not be accepted or evaluated at all. That is, correct credentials will not return a successful login, and incorrect credentials will not trigger a delay increase. Best practice #2: A medium length time delay that goes into effect after N failed attempts, like: * *1-4 failed attempts = no delay *5 failed attempts = 15-30 min delay DoS attacking this scheme would be quite impractical, but certainly doable. Also, it might be relevant to note that such a long delay can be very annoying for a legitimate user. Forgetful users will dislike you. Best practice #3: Combining the two approaches - either a fixed, short time delay that goes into effect after N failed attempts, like: * *1-4 failed attempts = no delay *5+ failed attempts = 20 sec delay Or, an increasing delay with a fixed upper bound, like: * *1 failed attempt = 5 sec delay *2 failed attempts = 15 sec delay *3+ failed attempts = 45 sec delay This final scheme was taken from the OWASP best-practices suggestions (link 1 from the MUST-READ list) and should be considered best practice, even if it is admittedly on the restrictive side.
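For illustration, a minimal in-memory Python sketch of the increasing-delay throttle from best practice #1 above (a sketch under stated assumptions: a real deployment would keep this state in a shared store and would key it by account, source IP, or both; all names here are my own):

import time
from collections import defaultdict

DELAYS = [0, 2, 4, 8, 16]  # seconds of refractory period after the n-th failure

failures = defaultdict(lambda: {"count": 0, "locked_until": 0.0})

def attempt_allowed(key):
    # During the refractory period the attempt is not evaluated at all,
    # matching the clarification above: even correct credentials do not log in.
    return time.time() >= failures[key]["locked_until"]

def record_failure(key):
    entry = failures[key]
    entry["count"] += 1
    delay = DELAYS[min(entry["count"] - 1, len(DELAYS) - 1)]
    entry["locked_until"] = time.time() + delay

def record_success(key):
    failures.pop(key, None)  # reset on a successful, non-throttled login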
As a rule of thumb, however, I would say: the stronger your password policy is, the less you have to bug users with delays. If you require strong (case-sensitive alphanumerics + required numbers and symbols) 9+ character passwords, you could give the users 2-4 non-delayed password attempts before activating the throttling. DoS attacking this final login throttling scheme would be very impractical. And as a final touch, always allow persistent (cookie) logins (and/or a CAPTCHA-verified login form) to pass through, so legitimate users won't even be delayed while the attack is in progress. That way, the very impractical DoS attack becomes an extremely impractical attack. Additionally, it makes sense to do more aggressive throttling on admin accounts, since those are the most attractive entry points. PART VII: Distributed Brute Force Attacks Just as an aside, more advanced attackers will try to circumvent login throttling by 'spreading their activities': * *Distributing the attempts on a botnet to prevent IP address flagging *Rather than picking one user and trying the 50.000 most common passwords (which they can't, because of our throttling), they will pick THE most common password and try it against 50.000 users instead. That way, not only do they get around maximum-attempts measures like CAPTCHAs and login throttling, their chance of success increases as well, since the number 1 most common password is far more likely than number 49.995 *Spacing the login requests for each user account, say, 30 seconds apart, to sneak under the radar Here, the best practice would be logging the number of failed logins, system-wide, and using a running average of your site's bad-login frequency as the basis for an upper limit that you then impose on all users. Too abstract? Let me rephrase: Say your site has had an average of 120 bad logins per day over the past 3 months. Using that (running average), your system might set the global limit to 3 times that -- ie. 360 failed attempts over a 24 hour period. Then, if the total number of failed attempts across all accounts exceeds that number within one day (or even better, monitor the rate of acceleration and trigger on a calculated threshold), it activates system-wide login throttling - meaning short delays for ALL users (still, with the exception of cookie logins and/or backup CAPTCHA logins). I also posted a question with more details and a really good discussion of how to avoid tricky pitfalls in fending off distributed brute force attacks. PART VIII: Two-Factor Authentication and Authentication Providers Credentials can be compromised, whether by exploits, passwords being written down and lost, laptops with keys being stolen, or users entering logins into phishing sites. Logins can be further protected with two-factor authentication, which uses out-of-band factors such as single-use codes received from a phone call, SMS message, app, or dongle. Several providers offer two-factor authentication services. Authentication can be completely delegated to a single-sign-on service, where another provider handles collecting credentials. This pushes the problem to a trusted third party. Google and Twitter both provide standards-based SSO services, while Facebook provides a similar proprietary solution.
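To make the 'single-use codes from an app' option concrete, here is a sketch of RFC 6238 TOTP (the algorithm behind apps like Google Authenticator) using only the Python standard library; SHA-1, 6 digits and a 30-second period are the common defaults, not requirements:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32, casefold=True)  # shared secret, stored per user
    counter = int(time.time()) // period               # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

The server accepts a submitted code if it matches the current time step (and usually one step on either side, to tolerate clock drift).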
MUST-READ LINKS About Web Authentication * *OWASP Guide To Authentication / OWASP Authentication Cheat Sheet *Dos and Don’ts of Client Authentication on the Web (very readable MIT research paper) *Wikipedia: HTTP cookie *Personal knowledge questions for fallback authentication: Security questions in the era of Facebook (very readable Berkeley research paper) A: I'd like to add one suggestion I've used, based on defense in depth. You don't need to have the same auth&auth system for admins as for regular users. You can have a separate login form on a separate URL executing separate code for requests that will grant high privileges. This one can make choices that would be a total pain for regular users. One such that I've used is to actually scramble the login URL for admin access and email the admin the new URL. Stops any brute force attack right away as your new URL can be arbitrarily difficult (very long random string) but your admin user's only inconvenience is following a link in their email. The attacker no longer knows where to even POST to. A: I don't know whether it was best to answer this as an answer or as a comment. I opted for the first option. Regarding the point PART IV: Forgotten Password Functionality in the first answer, I would make a point about Timing Attacks. In password recovery forms, an attacker could potentially check a full list of emails and detect which are registered on the system (see link below). Regarding the Forgotten Password Form, I would add that it is a good idea to equalize response times between successful and unsuccessful queries with some delay function. https://crypto.stanford.edu/~dabo/papers/webtiming.pdf A: I would like to add one very important comment: - * *"In a corporate, intranet setting," most if not all of the foregoing might not apply! Many corporations deploy "internal use only" websites which are, effectively, "corporate applications" that happen to have been implemented through URLs. These URLs can (supposedly ...) only be resolved within "the company's internal network." (Which network magically includes all VPN-connected 'road warriors.') When a user is dutifully-connected to the aforesaid network, their identity ("authentication") is [already ...] "conclusively known," as is their permission ("authorization") to do certain things ... such as ... "to access this website." This "authentication + authorization" service can be provided by several different technologies, such as LDAP (e.g. Microsoft Active Directory) or Kerberos. From your point-of-view, you simply know this: that anyone who legitimately winds-up at your website must be accompanied by [an environment-variable magically containing ...] a "token." (i.e. The absence of such a token must be immediate grounds for 404 Not Found.) The token's value makes no sense to you, but, should the need arise, "appropriate means exist" by which your website can "[authoritatively] ask someone who knows (LDAP... etc.)" about any and every(!) question that you may have. In other words, you do not avail yourself of any "home-grown logic." Instead, you inquire of The Authority and implicitly trust its verdict. Uh huh ... it's quite a mental-switch from the "wild-and-woolly Internet." A: First, a strong caveat that this answer is not the best fit for this exact question. It should definitely not be the top answer!
I will go ahead and mention Mozilla’s proposed BrowserID (or perhaps more precisely, the Verified Email Protocol) in the spirit of finding an upgrade path to better approaches to authentication in the future. I’ll summarize it this way: * *Mozilla is a nonprofit with values that align well with finding good solutions to this problem. *The reality today is that most websites use form-based authentication *Form-based authentication has a big drawback, which is an increased risk of phishing. Users are asked to enter sensitive information into an area controlled by a remote entity, rather than an area controlled by their User Agent (browser). *Since browsers are implicitly trusted (the whole idea of a User Agent is to act on behalf of the User), they can help improve this situation. *The primary force holding back progress here is deployment deadlock. Solutions must be decomposed into steps which provide some incremental benefit on their own. *The simplest decentralized method for expressing an identity that is built into the internet infrastructure is the domain name. *As a second level of expressing identity, each domain manages its own set of accounts. *The form “account@domain” is concise and supported by a wide range of protocols and URI schemes. Such an identifier is, of course, most universally recognized as an email address. *Email providers are already the de-facto primary identity providers online. Current password reset flows usually let you take control of an account if you can prove that you control that account’s associated email address. *The Verified Email Protocol was proposed to provide a secure method, based on public key cryptography, for streamlining the process of proving to domain B that you have an account on domain A. *For browsers that don’t support the Verified Email Protocol (currently all of them), Mozilla provides a shim which implements the protocol in client-side JavaScript code. *For email services that don’t support the Verified Email Protocol, the protocol allows third parties to act as a trusted intermediary, asserting that they’ve verified a user’s ownership of an account. It is not desirable to have a large number of such third parties; this capability is intended only to allow an upgrade path, and it is much preferred that email services provide these assertions themselves. *Mozilla offers their own service to act like such a trusted third party. Service Providers (that is, Relying Parties) implementing the Verified Email Protocol may choose to trust Mozilla's assertions or not. Mozilla’s service verifies users’ account ownership using the conventional means of sending an email with a confirmation link. *Service Providers may, of course, offer this protocol as an option in addition to any other method(s) of authentication they might wish to offer. *A big user interface benefit being sought here is the “identity selector”. When a user visits a site and chooses to authenticate, their browser shows them a selection of email addresses (“personal”, “work”, “political activism”, etc.) they may use to identify themselves to the site. *Another big user interface benefit being sought as part of this effort is helping the browser know more about the user’s session – who they’re signed in as currently, primarily – so it may display that in the browser chrome. *Because of the distributed nature of this system, it avoids lock-in to major sites like Facebook, Twitter, Google, etc. Any individual can own their own domain and therefore act as their own identity provider. 
This is not strictly “form-based authentication for websites”. But it is an effort to transition from the current norm of form-based authentication to something more secure: browser-supported authentication. A: I just thought I'd share this solution that I found to be working just fine. I call it the Dummy Field (though I haven't invented this so don't credit me). Others know this as a honey pot. In short: you just have to insert this into your <form> and check that it is empty when validating: <input type="text" name="email" style="display:none" /> The trick is to fool a bot into thinking it has to insert data into a required field, that's why I named the input "email". If you already have a field called email that you're using, you should try naming the dummy field something else like "company", "phone" or "emailaddress". Just pick something you know you don't need and that sounds like something people would normally find logical to fill in on a web form. Now hide the input field using CSS or JavaScript/jQuery - whatever fits you best - just don't set the input type to hidden or else the bot won't fall for it. When you are validating the form (either client or server side) check if your dummy field has been filled to determine if it was sent by a human or a bot. Example: In case of a human: The user will not see the dummy field (in my case named "email") and will not attempt to fill it. So the value of the dummy field should still be empty when the form has been sent. In case of a bot: The bot will see a field whose type is text and a name email (or whatever it is you called it) and will logically attempt to fill it with appropriate data. It doesn't care if you styled the input with some fancy CSS; web developers do it all the time. Whatever the value in the dummy field is, we don't care as long as it's larger than 0 characters. I used this method on a guestbook in combination with CAPTCHA, and I haven't seen a single spam post since. I had used a CAPTCHA-only solution before, but eventually, it resulted in about five spam posts every hour. Adding the dummy field in the form has stopped (at least until now) all the spam from appearing. I believe this can also be used just fine with a login/authentication form. Warning: Of course this method is not 100% foolproof. Bots can be programmed to ignore input fields with the style display:none applied to them. You also have to think about people who use some form of auto-completion (like most browsers have built-in!) to auto-fill all form fields for them. They might just as well fill in the dummy field. You can also vary this up a little by leaving the dummy field visible but outside the boundaries of the screen, but this is totally up to you. Be creative!
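For completeness, the server-side half of this trick is tiny; a framework-agnostic Python sketch (the field name 'email' matches the decoy input above, and the function name is my own invention):

from urllib.parse import parse_qs

def is_probably_bot(form_body):
    fields = parse_qs(form_body)
    decoy = fields.get("email", [""])[0]  # the hidden dummy field
    return len(decoy) > 0  # a human never sees it, so any content flags a bot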
{ "language": "en", "url": "https://stackoverflow.com/questions/549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5518" }
Q: How to use combinations of sets as test data I would like to test a function with a tuple from a set of fringe cases and normal values. For example, while testing a function which returns true whenever given three lengths that form a valid triangle, I would have specific cases, negative / small / large numbers, values close to overflowing, etc.; what is more, the main aim is to generate combinations of these values, with or without repetition, in order to get a set of test data. (inf,0,-1), (5,10,1000), (10,5,5), (0,-1,5), (1000,inf,inf), ... As a note: I actually know the answer to this, but it might be helpful for others, and a challenge for people here! --will post my answer later on. A: With the brand new Python 2.6, you have a standard solution with the itertools module that returns the Cartesian product of iterables:
import itertools
print list(itertools.product([1,2,3], [4,5,6]))
[(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)]
You can provide a "repeat" argument to perform the product with an iterable and itself:
print list(itertools.product([1,2], repeat=3))
[(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)]
You can also tweak something with combinations as well:
print list(itertools.combinations('123', 2))
[('1', '2'), ('1', '3'), ('2', '3')]
And if order matters, there are permutations:
print list(itertools.permutations([1,2,3,4], 2))
[(1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4), (3, 1), (3, 2), (3, 4), (4, 1), (4, 2), (4, 3)]
Of course all that cool stuff doesn't do exactly the same thing, but you can use it in one way or another to solve your problem. Just remember that you can convert a tuple or a list to a set and vice versa using list(), tuple() and set(). A: Interesting question! I would do this by picking combinations, something like the following in Python. The hardest part is probably first pass verification, i.e. if f(1,2,3) returns true, is that a correct result? Once you have verified that, then this is a good basis for regression testing. Probably it's a good idea to make a set of test cases that you know will be all true (e.g. 3,4,5 for this triangle case), and a set of test cases that you know will be all false (e.g. 0,1,inf). Then you can more easily verify the tests are correct.
# xpermutations from http://code.activestate.com/recipes/190465
from xpermutations import *

lengths = [-1, 0, 1, 5, 10, 0, 1000, 'inf']
for c in xselections(lengths, 3):  # or xuniqueselections
    print c
(-1,-1,-1); (-1,-1,0); (-1,-1,1); (-1,-1,5); (-1,-1,10); (-1,-1,0); (-1,-1,1000); (-1,-1,inf); (-1,0,-1); (-1,0,0); ... A: I think you can do this with the Row Test Attribute (available in MbUnit and later versions of NUnit) where you could specify several sets to populate one unit test. A: Absolutely, especially dealing with lots of these permutations/combinations I can definitely see that the first pass would be an issue. Interesting implementation in Python, though I wrote a nice one in C and OCaml based on "Algorithm 515" (see below). He wrote his in Fortran, as was common back then for all the "Algorithm XX" papers, well, that or assembly or C. I had to re-write it and make some small improvements to work with arrays, not ranges of numbers. This one does random access; I'm still working on getting some nice implementations of the ones mentioned in Knuth's 4th volume, fascicle 2. I'll leave an explanation of how this works to the reader. Though if someone is curious, I wouldn't object to writing something up.
/** [combination c n p x]
 * get the [x]th lexicographically ordered set of [p] elements in [n]
 * output is in [c], and should be sizeof(int)*[p]
 */
void combination(int* c, int n, int p, int x) {
    int i, r, k = 0;
    for (i = 0; i < p - 1; i++) {
        c[i] = (i != 0) ? c[i-1] : 0;
        do {
            c[i]++;
            /* choose(a, b) is the binomial coefficient "a choose b" (helper not shown here) */
            r = choose(n - c[i], p - (i + 1));
            k = k + r;
        } while (k < x);
        k = k - r;
    }
    c[p-1] = c[p-2] + x - k;  /* note: assumes p >= 2 */
}
~"Algorithm 515: Generation of a Vector from the Lexicographical Index"; Buckles, B. P., and Lybanon, M. ACM Transactions on Mathematical Software, Vol. 3, No. 2, June 1977. A: While it's possible to create lots of test data and see what happens, it's more efficient to try to minimize the data being used. From a typical QA perspective, you would want to identify different classifications of inputs. Produce a set of input values for each classification and determine the appropriate outputs. Here's a sample of classes of input values: * *valid triangles with large numbers such as (1 billion, 2 billion, 2 billion) *valid triangles with small numbers such as (0.000001, 0.00002, 0.00003) *valid obtuse triangles that are 'almost' flat such as (10, 10, 19.9999) *valid acute triangles that are 'almost' flat such as (10, 10, 0.000001) *invalid triangles with at least one negative value *invalid triangles where the sum of two sides equals the third *invalid triangles where the sum of two sides is less than the third *input values that are non-numeric ... Once you are satisfied with the list of input classifications for this function, then you can create the actual test data. Likely, it would be helpful to test all permutations of each item (see the sketch after this answer). (e.g. (2,3,4), (2,4,3), (3,2,4), (3,4,2), (4,2,3), (4,3,2)) Typically, you'll find there are some classifications you missed (such as the concept of inf as an input parameter). Random data for some period of time may be helpful as well; that can find strange bugs in the code, but is generally not productive. More likely, this function is being used in some specific context where additional rules are applied (e.g. only integer values, or values must be in 0.01 increments, etc.). These add to the list of classifications of input parameters.
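Here is a short Python sketch of that permutation idea, reusing the itertools module shown earlier; the classification names and values below are placeholders, not a definitive test suite:

import itertools

# one representative tuple per input classification
classifications = {
    "valid_large": (10**9, 2 * 10**9, 2 * 10**9),
    "valid_small": (0.000001, 0.00002, 0.00003),
    "almost_flat_obtuse": (10, 10, 19.9999),
    "negative_side": (-1, 5, 5),
    "degenerate": (3, 4, 7),  # sum of two sides equals the third
}

test_cases = []
for name, sides in classifications.items():
    # exercise every ordering of each classification, deduplicated
    for perm in sorted(set(itertools.permutations(sides))):
        test_cases.append((name, perm))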
{ "language": "en", "url": "https://stackoverflow.com/questions/561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How to write to Web.Config in Medium Trust? Uploading my first decently sized web app to my shared host provided me with a fresh set of challenges, by which I mean, sleepless nights. The issue was that I had most certainly not developed my application for medium trust (or had any clue what that was.) I mitigated all of the issues, save one. I had written an installer for the admin to be able to specify their connection string and other preferences, but I cannot find a way to write to a web.config in medium trust. Does anyone have a solution, or should I just be putting preferences in another file? A: That actually sounds like IIS's Low level. If it is, then you won't be able to write to any file, not just the web.config. Here are the levels from IIS's help file: * * Full (internal) - Specifies unrestricted permissions. Grants the ASP.NET application permissions to access any resource that is subject to operating system security. All privileged operations are supported. *High (web_hightrust.config) - Specifies a high level of code access security, which means that the application cannot do any one of the following things by default: * *Call unmanaged code. *Call serviced components. *Write to the event log. *Access Message Queuing service queues. *Access ODBC, OleDb, or Oracle data sources. *Medium (web_mediumtrust.config) - Specifies a medium level of code access security, which means that, in addition to High Trust Level restrictions, the ASP.NET application cannot do any of the following things by default: * *Access files outside the application directory. *Access the registry. *Make network or Web service calls. *Low (web_lowtrust.config) - Specifies a low level of code access security, which means that, in addition to Medium Trust Level restrictions, the application cannot do any of the following things by default: * *Write to the file system. *Call the Assert method. *Minimal (web_minimaltrust.config) - Specifies a minimal level of code access security, which means that the application has only execute permissions. I would suggest that if you are dead set on having an installer, have it create a web.config in memory that the user can save locally and FTP up afterward.
{ "language": "en", "url": "https://stackoverflow.com/questions/562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: What is the difference between an int and an Integer in Java and C#? I was reading More Joel on Software when I came across Joel Spolsky saying something about a particular type of programmer knowing the difference between an int and an Integer in Java/C# (Object-Oriented Programming Languages). So, what is the difference? A: This has already been answered for Java, here's the C# answer: "Integer" is not a valid type name in C# and "int" is just an alias for System.Int32. Also, unlike in Java (or C++) there aren't any special primitive types in C#, every instance of a type in C# (including int) is an object. Here's some demonstrative code:
void DoStuff()
{
    System.Console.WriteLine( SomeMethod((int)5) );
    System.Console.WriteLine( GetTypeName<int>() );
}

string SomeMethod(object someParameter)
{
    return string.Format("Some text {0}", someParameter.ToString());
}

string GetTypeName<T>()
{
    return (typeof (T)).FullName;
}
A: One more thing that I don't see in previous answers: In Java the primitive wrapper classes like Integer, Double, Float, Boolean... and String are supposed to be immutable, so that when you pass an instance of those classes the invoked method cannot alter your data in any way, in contrast with most other classes, whose internal data can be altered by their public methods. So these classes only have 'getter' methods, no 'setters', besides the constructor. In a Java program, String literals are stored in a separate portion of heap memory, with only one instance per literal, to save memory by reusing those instances. A: In platforms like Java, ints are primitives while Integer is an object which holds an integer field. The important distinction is that primitives are always passed around by value and by definition are immutable. Any operation involving a primitive variable always returns a new value. On the other hand, objects are passed around by reference. One could argue that the pointer to the object (AKA the reference) is also being passed around by value, but the contents are not. A: int is used to declare a primitive variable, e.g. int i = 10; Integer is used to create a reference variable of class Integer, e.g. Integer a = new Integer(10); A: If you have ever programmed before, (int) is one of the primitive types you can set for your variables (just like char, float, ...), but Integer is a wrapper class that you can use to perform some functions on an int variable (e.g. convert it to a string or vice versa, ...). Note that the utility methods in the wrapper classes are static, so you can use them at any time without creating an instance of the Integer class. As a recap:
int x;
Integer y;
x and y are both variables holding an int value, but y is wrapped by an Integer class and has several methods that you can use. In case you need to call some functions of the Integer wrapper class, you can do it simply:
Integer.toString(x);
Be aware that both x and y are correct, but if you want to use them just as a primitive type, use the simple form (used for defining x). A: Java: int, double, long, byte, float, short, boolean, char - primitives. Used to hold the basic data types supported by the language. The primitive types are not part of the object hierarchy, and they do not inherit Object. They can't be passed by reference to a method. Double, Float, Long, Integer, Short, Byte, Character, and Boolean are type wrappers, packaged in java.lang. All of the numeric type wrappers define constructors that allow an object to be constructed from a given value, or a string representation of that value.
Using objects can add an overhead to even the simplest of calculations. Beginning with JDK 5, Java has included two very helpful features: autoboxing and autounboxing. Autoboxing/unboxing greatly simplifies and streamlines code that must convert primitive types into objects, and vice versa. Example of constructors:
Integer(int num)
Integer(String str) throws NumberFormatException
Double(double num)
Double(String str) throws NumberFormatException
Example of boxing/unboxing:
class ManualBoxing {
    public static void main(String args[]) {
        Integer objInt = new Integer(20);  // Manually box the value 20.
        int i = objInt.intValue();  // Manually unbox the value 20
        System.out.println(i + " " + objInt); // displays 20 20
    }
}
Example of autoboxing/autounboxing:
class AutoBoxing {
    public static void main(String args[]) {
        Integer objInt = 40; // autobox an int
        int i = objInt; // auto-unbox
        System.out.println(i + " " + objInt); // displays 40 40
    }
}
P.S. Herbert Schildt's book was taken as a reference. A: I'll add to the excellent answers given above, and talk about boxing and unboxing, and how this applies to Java (although C# has it too). I'll use just Java terminology because I am more au fait with that. As the answers mentioned, int is just a number (called the unboxed type), whereas Integer is an object (which contains the number, hence a boxed type). In Java terms, that means (apart from not being able to call methods on int), you cannot store int or other non-object types in collections (List, Map, etc.). In order to store them, you must first box them up in the corresponding boxed type. Java 5 onwards has something called auto-boxing and auto-unboxing which allows the boxing/unboxing to be done behind the scenes. Compare and contrast: Java 5 version:
Deque<Integer> queue;

void add(int n) {
    queue.add(n);
}

int remove() {
    return queue.remove();
}
Java 1.4 or earlier (no generics either):
Deque queue;

void add(int n) {
    queue.add(Integer.valueOf(n));
}

int remove() {
    return ((Integer) queue.remove()).intValue();
}
It must be noted that despite the brevity in the Java 5 version, both versions generate identical bytecode. Thus, although auto-boxing and auto-unboxing are very convenient because you write less code, these operations do happen behind the scenes, with the same runtime costs, so you still have to be aware of their existence. Hope this helps! A: In both languages (Java and C#) int is a 4-byte signed integer. Unlike Java, C# provides both signed and unsigned integer values. As Java and C# are object-oriented, some operations in these languages do not map directly onto instructions provided by the runtime and so need to be defined as part of an object of some type. C# provides System.Int32, which is a value type; when boxed, it uses a part of memory that belongs to a reference type on the heap. Java provides java.lang.Integer, which is a reference type operating on int. The methods in Integer can't be compiled directly to runtime instructions. So we box an int value to convert it into an instance of Integer and use the methods which expect an instance of some type (like toString(), parseInt(), valueOf(), etc.). In C#, the variable int refers to System.Int32. Any 4-byte value in memory can be interpreted as a primitive int and manipulated as an instance of System.Int32. So int is an alias for System.Int32. When using integer-related methods like int.Parse(), int.ToString(), etc., the int keyword is compiled into the FCL System.Int32 struct, calling the respective methods like Int32.Parse(), Int32.ToString().
A: An int and an Integer in Java and C# are two different terms used to represent different things. int is one of the primitive data types that can be assigned to a variable, and it stores exactly one value of its declared type at a time. For example: int number = 7; Here int is the data type assigned to the variable number, which holds the value seven. So an int is just a primitive, not an object. An Integer, on the other hand, is a wrapper class for a primitive data type, and it has static methods. It can be used as an argument to a method which requires an object, whereas int can be used as an argument to a method which requires an integer value that can be used in an arithmetic expression. For example: Integer number = new Integer(5); A: An int variable holds a 32 bit signed integer value. An Integer (with capital I) holds a reference to an object of (class) type Integer, or to null. Java automatically casts between the two; from Integer to int whenever the Integer object occurs as an argument to an int operator or is assigned to an int variable, or an int value is assigned to an Integer variable. This casting is called boxing/unboxing. If an Integer variable referencing null is unboxed, explicitly or implicitly, a NullPointerException is thrown. A: In Java, the int type is a primitive data type, whereas the Integer type is an object. In C#, the int type is also a data type, the same as System.Int32. An integer (just like any other value type) can be boxed ("wrapped") into an object. A: In Java, int is a primitive data type while Integer is a helper class; it is used to convert from one data type to another. For example:
double doubleValue = 156.5d;
Double doubleObject = new Double(doubleValue);
Byte myByteValue = doubleObject.byteValue();
String myStringValue = doubleObject.toString();
Primitive data types are stored in the fastest available memory, whereas the helper class is complex and stored in heap memory. (Reference: David Gassner's "Java Essential Training".) A: I'll just post here since some of the other posts are slightly inaccurate in relation to C#. Correct: int is an alias for System.Int32. Wrong: 'float is an alias for System.Float'; float is actually an alias for System.Single. Basically, int is a reserved keyword in the C# programming language, and is an alias for the System.Int32 value type. float and Float are not the same however, as the right system type for ''float'' is System.Single. There are some types like this that have reserved keywords that don't seem to match the type names directly. In C# there is no difference between ''int'' and ''System.Int32'', or any of the other pairs of keywords/system types, except for when defining enums. With enums you can specify the storage size to use, and in this case you can only use the reserved keyword, and not the system runtime type name. Whether the value in the int will be stored on the stack, in memory, or as a referenced heap object depends on the context and how you use it. This declaration in a method: int i; defines a variable i of type System.Int32, living in a register or on the stack, depending on optimizations. The same declaration in a type (struct or class) defines a member field. The same declaration in a method argument list defines a parameter, with the same storage options as for a local variable. (Note that this paragraph is not valid if you start pulling iterator methods into the mix; these are different beasts altogether.) To get a heap object, you can use boxing: object o = i; this will create a boxed copy of the contents of i on the heap.
In IL you can access methods on the heap object directly, but in C# you need to cast it back to an int, which will create another copy. Thus, the object on the heap cannot easily be changed in C# without creating a new boxed copy of a new int value. (Ugh, this paragraph doesn't read all that easily.) A: In Java, the 'int' type is a primitive, whereas the 'Integer' type is an object. In C#, the 'int' type is the same as System.Int32 and is a value type (i.e. more like the Java 'int'). An integer (just like any other value type) can be boxed ("wrapped") into an object. The differences between objects and primitives are somewhat beyond the scope of this question, but to summarize: Objects provide facilities for polymorphism, are passed by reference (or more accurately have references passed by value), and are allocated from the heap. Conversely, primitives are immutable types that are passed by value and are often allocated from the stack. A: Regarding Java 1.5 and autoboxing there is an important "quirk" that comes into play when comparing Integer objects. In Java, autoboxed Integer objects with values in the range -128 to 127 are cached (that is, for one particular integer value in that range, say 23, all Integer objects autoboxed in your program with the value 23 point to the exact same object). Example, this returns true:
Integer i1 = 127;
Integer i2 = 127;
System.out.println(i1 == i2); // true
While this returns false:
Integer i1 = 128;
Integer i2 = 128;
System.out.println(i1 == i2); // false
The == compares by reference (do the variables point to the same object?). This result may or may not differ depending on what JVM you are using. The autoboxing specification for Java 1.5 requires that integers (-128 to 127) always box to the same wrapper object. A solution? =) One should always use the Integer.equals() method when comparing Integer objects.
System.out.println(i1.equals(i2)); // true
More info at java.net Example at bexhuff.com A: In Java there are two basic types in the JVM. 1) Primitive types and 2) Reference types. int is a primitive type and Integer is a class type (which is a kind of reference type). Primitive values do not share state with other primitive values. A variable whose type is a primitive type always holds a primitive value of that type.
int aNumber = 4;
int anotherNum = aNumber;
aNumber += 6;
System.out.println(anotherNum); // Prints 4
An object is a dynamically created class instance or an array. The reference values (often just references) are pointers to these objects and a special null reference, which refers to no object. There may be many references to the same object.
Integer aNumber = Integer.valueOf(4);
Integer anotherNumber = aNumber; // anotherNumber references the
                                 // same object as aNumber
Also in Java everything is passed by value. With objects the value that is passed is the reference to the object. So another difference between int and Integer in Java is how they are passed in method calls. For example in
public int add(int a, int b) {
    return a + b;
}
final int two = 2;
int sum = add(1, two);
The variable two is passed as the primitive integer value 2. Whereas in
public int add(Integer a, Integer b) {
    return a.intValue() + b.intValue();
}
final Integer two = Integer.valueOf(2);
int sum = add(Integer.valueOf(1), two);
The variable two is passed as a reference to an object that holds the integer value 2.
@WolfmanDragon: Pass by reference would work like so:
public void increment(int x) {
    x = x + 1;
}
int a = 1;
increment(a);
// a would now be 2 (under pass by reference)
When increment is called it would pass a reference (pointer) to variable a, and the increment function would directly modify variable a. And for object types it would work as follows:
public void increment(Integer x) {
    x = Integer.valueOf(x.intValue() + 1);
}
Integer a = Integer.valueOf(1);
increment(a);
// a would now be 2 (under pass by reference)
Do you see the difference now? A: "int" is a primitive data type and "Integer" is a wrapper class in Java. "Integer" can be used as an argument to a method which requires an object, whereas "int" can be used as an argument to a method which requires an integer value that can be used in an arithmetic expression. A: Well, in Java an int is a primitive while an Integer is an Object. Meaning, if you made a new Integer:
Integer i = new Integer(6);
You could call some method on i:
String s = i.toString(); // sets s to the string representation of i
Whereas with an int:
int i = 6;
You cannot call any methods on it, because it is simply a primitive. So:
String s = i.toString(); // will not work!!!
would produce an error, because int is not an object. int is one of the few primitives in Java (along with char and some others). I'm not 100% sure, but I'm thinking that the Integer object more or less just has an int property and a whole bunch of methods to interact with that property (like the toString() method for example). So Integer is a fancy way to work with an int (just as perhaps String is a fancy way to work with a group of chars). I know that Java isn't C, but since I've never programmed in C this is the closest I could come to the answer. Integer object javadoc Integer Object vs. int primitive comparison A: In C#, int is just an alias for System.Int32, string for System.String, double for System.Double etc... Personally I prefer int, string, double, etc. because they don't require a using System; statement :) A silly reason, I know... A: There are many reasons to use wrapper classes: * *We get extra behavior (for instance we can use methods) *We can store null values whereas in primitives we cannot *Collections support storing objects and not primitives. A: int is a predefined type in C#; in Java we can also create an object of the Integer class. A: 01. Integer can be null. But int cannot be null.
Integer value1 = null; // OK
int value2 = null; // Error
02. Only wrapper class values can be passed to a collection class. (Wrapper classes: Boolean, Character, Byte, Short, Integer, Long, Float, Double)
List<Integer> element = new ArrayList<>();
int valueInt = 10;
Integer valueInteger = new Integer(valueInt);
element.add(valueInteger);
But normally we add primitive values to a collection class? Is point 02 correct?
List<Integer> element = new ArrayList<>();
element.add(5);
Yes, 02 is correct, because of autoboxing. Autoboxing is the automatic conversion that the Java compiler makes between primitive types and their corresponding wrapper classes. Here 5 is converted to an Integer value by autoboxing. A: In Java, as far as I know, when you write int a; where an object is required (as with generics), the compiler will autobox it behind the scenes into something like Integer a = new Integer(...), so as far as generics are concerned Integer is used rather than int, and otherwise there is no real difference between them. A: (Java version) In simple words: int is a primitive (cannot have a null value) and Integer is a wrapper object for int. One example of where to use Integer vs int: when you want to compare an int variable against null, it will throw an error.
int a; // assuming a value you are getting from the database, which is null
if (a == null) // this is wrong - cannot compare a primitive to null
{ do something... }
Instead you would use:
Integer a; // assuming a value you are getting from the database, which is null
if (a == null) // this is correct/legal
{ do something... }
A: int is a primitive data type. Integer is a wrapper class. It can store int data as objects. A: int is a primitive datatype whereas Integer is an object. Creating an object with Integer will give you access to all the methods that are available in the Integer class. But, if you create a primitive data type with int, you will not be able to use those built-in methods and you have to define them by yourself. But, if you don't want any other methods and want to make the program more memory efficient, you can go with the primitive datatype, because creating an object will increase the memory consumption.
{ "language": "en", "url": "https://stackoverflow.com/questions/564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "269" }
Q: Deploying SQL Server Databases from Test to Live I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005. Now, there is a development one and a live one. As this should be part of a build script (standard Windows batch, though with the current complexity of those scripts I might switch to PowerShell or so later), Enterprise Manager/Management Studio Express do not count. Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to invite compatibility issues (even though development and live should run the same version of the server at all times). Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given database into SQL queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count.) And lastly - given the fact that the live database already contains data - the deployment may not involve creating all tables but rather checking the difference in structure and ALTERing the live tables instead, which may also need data verification/conversion when existing fields change. Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects the price is a bit steep. So, what are you using to automatically deploy SQL Server databases from Test to Live?

A: Don't forget Microsoft's solution to the problem: Visual Studio 2008 Database Edition. It includes tools for deploying changes to databases, producing a diff between databases for schema and/or data changes, unit tests, and test data generation. It's pretty expensive, but I used the trial edition for a while and thought it was brilliant. It makes the database as easy to work with as any other piece of code.

A: Like Rob Allen, I use SQL Compare / Data Compare by Redgate. I also use the Database Publishing Wizard by Microsoft. I also have a console app I wrote in C# that takes a SQL script and runs it on a server. This way you can run large scripts with 'GO' commands in them from a command line or in a batch script. I use the Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application.

A: I work the same way Karl does, by keeping all of my SQL scripts for creating and altering tables in a text file that I keep in source control. In fact, to avoid the problem of having a script examine the live database to determine what ALTERs to run, I usually work like this:

* On the first version, I place everything during testing into one SQL script, and treat all tables as a CREATE. This means I end up dropping and re-adding tables a lot during testing, but that's not a big deal early in the project (since I'm usually hacking the data I'm using at that point anyway).
* On all subsequent versions, I do two things: I make a new text file to hold the upgrade SQL scripts, which contain just the ALTERs for that version. And I make the changes to the original to create a fresh database script as well. This way an upgrade just runs the upgrade script, but if we have to recreate the DB we don't need to run 100 scripts to get there.
* Depending on how I'm deploying the DB changes, I'll also usually put a version table in the DB that holds the version of the DB.
Then, rather than making any human decisions about which scripts to run, whatever code I have running the create/upgrade scripts uses the version to determine what to run. The one thing this will not do is help if part of what you're moving from test to production is data, but if you want to manage structure and not pay for a nice but expensive DB management package, it is really not very difficult. I've also found it's a pretty good way of keeping mental track of your DB.
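To make that version-table idea concrete, here is a minimal hedged sketch in C#. The SchemaVersion table, the upgrade-N.sql naming scheme, and the connection string are all assumptions made up for the example, and the naive split on GO lines stands in for a real batch parser such as the Microsoft.SqlServer.BatchParser library mentioned above:

using System;
using System.Data.SqlClient;
using System.IO;

class SchemaUpgrader
{
    static void Main()
    {
        // Hypothetical connection string and SchemaVersion(Version INT) table.
        const string connStr = "Server=LIVE;Database=MyApp;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Read the current schema version from the database.
            int version;
            using (SqlCommand cmd = new SqlCommand("SELECT TOP 1 Version FROM SchemaVersion", conn))
                version = (int)cmd.ExecuteScalar();

            // Apply each upgrade script newer than the current version, in order.
            string script;
            while (File.Exists(script = "upgrade-" + (version + 1) + ".sql"))
            {
                // Crude batch split on GO; a production runner should use a proper batch parser.
                string[] batches = File.ReadAllText(script)
                    .Split(new[] { "\r\nGO\r\n", "\nGO\n" }, StringSplitOptions.RemoveEmptyEntries);
                foreach (string batch in batches)
                    using (SqlCommand cmd = new SqlCommand(batch, conn))
                        cmd.ExecuteNonQuery();

                version++;
                using (SqlCommand cmd = new SqlCommand("UPDATE SchemaVersion SET Version = @v", conn))
                {
                    cmd.Parameters.AddWithValue("@v", version);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}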
A: If you have a company buying it, Toad from Quest Software has this kind of management functionality built in. It's basically a two-click operation to compare two schemas and generate a sync script from one to the other. They have editions for most of the popular databases, including, of course, SQL Server.

A: I agree that scripting everything is the best way to go, and it is what I advocate at work. You should script everything from DB and object creation to populating your lookup tables. Anything you do in the UI only won't translate (especially for changes... not so much for first deployments) and will end up requiring tools like what Redgate offers.

A: Using SMO/DMO, it isn't too difficult to generate a script of your schema. Data is a little more fun, but still doable. In general, I take the "script it" approach, but you might want to consider something along these lines:

* Distinguish between development and staging, such that you can develop with a subset of data... for this I would create a tool to simply pull down some production data, or generate fake data where security is a concern.
* For team development, each change to the database will have to be coordinated amongst your team members. Schema and data changes can be intermingled, but a single script should enable a given feature. Once all your features are ready, you bundle these up in a single SQL file and run that against a restore of production.
* Once your staging has cleared acceptance, you run the single SQL file again on the production machine.

I have used the Red Gate tools and they are great tools, but if you can't afford them, building the tools and working this way isn't too far from the ideal.

A: I'm using Subsonic's migrations mechanism, so I just have a dll with classes in sequential order that have 2 methods, up and down. There is a continuous integration/build script hook into NAnt, so that I can automate the upgrading of my database. It's not the best thing in the world, but it beats writing DDL.

A: RedGate SqlCompare is the way to go in my opinion. We do DB deployment on a regular basis, and since I started using that tool I have never looked back. Very intuitive interface, and it saves a lot of time in the end. The Pro version will take care of scripting for source control integration as well.

A: I also maintain scripts for all my objects and data. For deploying I wrote this free utility - http://www.sqldart.com. It'll let you reorder your script files and will run the whole lot within a transaction.

A: I've taken to hand-coding all of my DDL (create/alter/delete) statements, adding them to my .sln as text files, and using normal versioning (using Subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on all work the same. Otherwise, I agree Redgate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you, though, it really is worth it!

A: For my projects I alternate between SQL Compare from Red Gate and the Database Publishing Wizard from Microsoft, which you can download free here. The Wizard isn't as slick as SQL Compare or SQL Data Compare, but it does the trick. One issue is that the scripts it generates may need some rearranging and/or editing to run in one shot. On the up side, it can move your schema and data, which isn't bad for a free tool.

A: I agree with keeping everything in source control and manually scripting all changes. Changes to the schema for a single release go into a script file created specifically for that release. All stored procs, views, etc. should go into individual files and be treated just like .cs or .aspx files as far as source control goes. I use a PowerShell script to generate one big .sql file for updating the programmability stuff. I don't like automating the application of schema changes, like new tables, new columns, etc. When doing a production release, I like to go through the change script command by command to make sure each one works as expected. There's nothing worse than running a big change script on production and getting errors because you forgot some little detail that didn't present itself in development. I have also learned that indexes need to be treated just like code files and put into source control. And you should definitely have more than 2 databases - dev and live. You should have a dev database that everybody uses for daily dev tasks. Then a staging database that mimics production and is used to do your integration testing. Then maybe a complete recent copy of production (restored from a full backup), if that is feasible, so your last round of installation testing goes against something that is as close to the real thing as possible.

A: I do all my database creation as DDL and then wrap that DDL into a schema maintenance class. I may do various things to create the DDL in the first place, but fundamentally I do all the schema maintenance in code. This also means that if one needs to do non-DDL things that don't map well to SQL, you can write procedural logic and run it between lumps of DDL/DML. My DBs then have a table which defines the current version, so one can code a relatively straightforward set of tests:

* Does the DB exist? If not, create it.
* Is the DB the current version? If not, run the methods, in sequence, that bring the schema up to date (you may want to prompt the user to confirm and - ideally - do backups at this point).

For a single-user app I just run this in place; for a web app we currently lock the user out if the versions don't match and have a standalone schema maintenance app we run. For multi-user it will depend on the particular environment. The advantage? Well, I have a very high level of confidence that the schema for the apps that use this methodology is consistent across all instances of those applications. It's not perfect, there are issues, but it works... There are some issues when developing in a team environment, but that's more or less a given anyway! Murph

A: I'm currently working on the same thing as you: not only deploying SQL Server databases from test to live, but also covering the whole process from Local -> Integration -> Test -> Production. What makes it easy for me every day is an NAnt task combined with Red Gate SQL Compare. I'm not working for RedGate, but I have to say it is a good choice.
{ "language": "en", "url": "https://stackoverflow.com/questions/580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Best way to access Exchange using PHP? I'm writing a CMS application in PHP and one of the requirements is that it must be able to interface with the customer's Exchange server. I've written up this functionality a few times before and have always used WebDAV to do it, but now I'm leaning away from that. I will be running the site on IIS OR Apache (no preference) on Windows Server 2008. A few things I would need to do include adding contacts to a given user's address book, sending emails as a given user, and running reports on contacts for a user. All of this is pretty easy to do with WebDAV, but I'd prefer a better way that doesn't rely on functionality likely to be deprecated any time soon. Any ideas?

Update: Justin, I love the idea of using COM objects, I just worry about maintaining a third product to make everything work... John, I can write a web service in C# to interface with for these functions and access it with my PHP app, but it's also a little bit out of the way. So far, I'm not 100% convinced that either of these is better than WebDAV... Can anyone show me where I'm being silly?

A: I'm not a PHP dev, but Google says that PHP 5+ can instantiate COM components. If you can install Outlook on a box, you could write a PHP web service around the COM component to handle the requests you need.

$outlook = new COM("Outlook.Application");

Outlook API reference

A: I would recommend using "PHP Exchange Web Services", or php-ews for short. There is a fair amount of documentation under the wiki; it helped me a lot.

A: This Zarafa PHP MAPI extension looks like it could work.

A: I would look into IMAP: IMAP, POP3 and NNTP

A: https://github.com/Garethp/php-ews It was last updated 3 months ago, so it is maintained.

A: Update as of 2020: Over a decade since this question, things have moved on. Microsoft now has a REST API that will allow you to easily access this data.

Original answer: I have not used PHP to do this, but have experience in using C# to achieve the same thing. The Outlook API is a way of automating Outlook rather than connecting to Exchange directly. I have previously taken this approach in a C# application and it does work, although it can be buggy. If you wish to connect directly to the Exchange server you will need to research extended MAPI. In the past I used this wrapper, MAPIEx: Extended MAPI Wrapper. It is a C# project, but I believe you can use some .NET code on a PHP 5 Windows server. Alternatively, it has a C++ core DLL that you may be able to use. I have found it to be very good, and there are some good example applications.

Sorry for the delay - there is no current way to keep track of posts yet. I do agree that adding more layers to your application and relying on 3rd-party code can be scary (and rightfully so). Today I read another interesting post tagged as MAPI that is on a different subject. The key thing here, though, is that it links to this important MS article. I had been unaware of the issues with using managed code to interface with MAPI until now, although the C++ code in the component should be unaffected by this error, as it is unmanaged. This blog entry also suggests other ways to connect to MAPI/Exchange server. In this case, given these new facts, http://us3.php.net/imap may be the answer, as suggested by the other user.

A: Is your customer using Exchange 2007? If so, I'd have a look at Exchange Web Services. If not, as hairy as it can be, I think WebDAV is your best bet.
Personally I don't like using the Outlook.Application COM object route, as its security prompts ("An application is attempting to access your contacts. Allow this?", etc.) can cause problems on a server. I also think it would be difficult to accomplish your impersonation-like tasks using Outlook, such as sending mail as a given user. A: I have released an open-source MIT licensed library that allows you to do some basic operations in PHP using Exchange Web Services. Exchange Web Services for PHP I have only tested it on Linux but I don't see any reason why it wouldn't work on a Windows installation of PHP as well. A: I can't recommend Dmitry Streblechenko's Redemption Data Objects library highly enough. It's a COM component that provides a sane API to Extended MAPI and is a joy to use. The Exchange API goalposts move from one release to the next: “Use the M: drive! No, use WebDAV! No, use ExOLEDB!… No, use Web Services!” with the only constant being good old MAPI.
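If you go the C#-web-service route floated in the question, a minimal sketch against the EWS Managed API might look like this (assumes Exchange 2007 SP1 or later and the Microsoft.Exchange.WebServices assembly; all addresses and credentials are placeholders). Sending as a different user would additionally need EWS impersonation (service.ImpersonatedUserId), which is omitted here:

using System;
using Microsoft.Exchange.WebServices.Data;

class ExchangeBridge
{
    static void Main()
    {
        // Placeholder credentials/mailbox - substitute real values.
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.Credentials = new WebCredentials("user", "password", "domain");
        service.AutodiscoverUrl("user@example.com");

        // Send a mail as the authenticated user.
        EmailMessage message = new EmailMessage(service);
        message.ToRecipients.Add("someone@example.com");
        message.Subject = "Hello from the CMS";
        message.Body = "Sent via Exchange Web Services.";
        message.SendAndSaveCopy();

        // Add a contact to the user's address book.
        Contact contact = new Contact(service);
        contact.GivenName = "Jane";
        contact.Surname = "Doe";
        contact.EmailAddresses[EmailAddressKey.EmailAddress1] = new EmailAddress("jane@example.com");
        contact.Save(WellKnownFolderName.Contacts);
    }
}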
{ "language": "en", "url": "https://stackoverflow.com/questions/588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: cx_Oracle: How do I iterate over a result set? There are several ways to iterate over a result set. What are the tradeoffs of each?

A: There's also the way psycopg seems to do it... From what I gather, it seems to create dictionary-like row proxies to map key lookups into the memory block returned by the query. In that case, fetching the whole answer and working with a similar proxy factory over the rows seems like a useful idea. Come to think of it though, it feels more like Lua than Python. Also, this should be applicable to all PEP 249 DBAPI 2.0 interfaces, not just Oracle - or did you mean just the fastest using Oracle?

A: The canonical way is to use the built-in cursor iterator.

curs.execute('select * from people')
for row in curs:
    print row

You can use fetchall() to get all rows at once.

for row in curs.fetchall():
    print row

It can be convenient to use this to create a Python list containing the values returned:

curs.execute('select first_name from people')
names = [row[0] for row in curs.fetchall()]

This can be useful for smaller result sets, but can have bad side effects if the result set is large.

* You have to wait for the entire result set to be returned to your client process.
* You may eat up a lot of memory in your client to hold the built-up list.
* It may take a while for Python to construct and deconstruct the list, which you are going to immediately discard anyway.

If you know there's a single row being returned in the result set, you can call fetchone() to get the single row.

curs.execute('select max(x) from t')
maxValue = curs.fetchone()[0]

Finally, you can loop over the result set, fetching one row at a time. In general, there's no particular advantage in doing this over using the iterator.

row = curs.fetchone()
while row:
    print row
    row = curs.fetchone()

A: My preferred way is the cursor iterator, but setting the arraysize property of the cursor first.

curs.execute('select * from people')
curs.arraysize = 256
for row in curs:
    print row

In this example, cx_Oracle will fetch rows from Oracle 256 rows at a time, reducing the number of network round trips that need to be performed.
{ "language": "en", "url": "https://stackoverflow.com/questions/594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Robust Random Number Generation I'm looking for a performant, reasonably robust RNG using no special hardware. It can use mathematical methods (Mersenne Twister, etc.), it can "collect entropy" from the machine, whatever. On Linux/etc. we have drand48(), which generates 48 random bits. I'd like a similar function/class for C++ or C# which can generate more than 32 bits of randomness and whose low-order bits are as random as its high-order bits. It doesn't have to be cryptographically secure, but it must not use or be based on the C-language rand() or .NET System.Random. Any source code, links to source, etc. would be appreciated! Failing that, what TYPE of RNG should I be looking for?

A: The GNU Scientific Library (GSL) has a pretty extensive set of RNGs, a test harness, etc. If you're on Linux, it's probably already available on your system.

A: Watch out for the GNU Scientific Library: it's licensed under the GPL rather than the LGPL. As other folks mentioned, the Boost random classes are a good start. Their implementation conforms to the PRNG code slated for TR1:

http://www.boost.org/doc/libs/1_35_0/libs/random/index.html
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1452.html

If you have a recent version of the G++ compiler, you may find the TR1 libraries already included.

A: C++11 has adopted a robust random number library based on boost.random. You can access a number of random number engines using different algorithms to meet your quality, speed, or size requirements. Quality implementations will even provide access to whatever non-deterministic RNG your platform offers via std::random_device. In addition, there are many adaptors to produce specific distributions, eliminating the need to do such manipulation by hand (something often done incorrectly).

#include <random>

A: For C++, Boost.Random is probably what you're looking for. It has support for MT (among many other algorithms), and can collect entropy via the nondet_random class. Check it out! :-)

A: Boost.Random is my first choice for RNG: http://www.boost.org/doc/libs/random
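On the C# side of the question, a minimal sketch that yields a full 64 random bits without touching System.Random is to pull bytes from the framework's crypto RNG. Cryptographic strength isn't required here, but it conveniently sidesteps rand()/System.Random and gives low-order bits that are just as random as the high-order ones (note it is comparatively slow per call; for bulk generation a managed Mersenne Twister port would be faster):

using System;
using System.Security.Cryptography;

class Rng64Demo
{
    static void Main()
    {
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        byte[] buffer = new byte[8];
        rng.GetBytes(buffer);                           // fill with 8 random bytes
        ulong value = BitConverter.ToUInt64(buffer, 0); // 64 random bits
        Console.WriteLine(value);
    }
}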
{ "language": "en", "url": "https://stackoverflow.com/questions/601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Build for Windows NT 4.0 using Visual Studio 2005? An MFC application that I'm trying to migrate uses afxext.h, which causes _AFXDLL to get set, which causes this error if I set /MT:

Please use the /MD switch for _AFXDLL builds

My research to date indicates that it is impossible to build an application for execution on Windows NT 4.0 using Visual Studio (C++, in this case) 2005. Is this really true? Are there any workarounds available?

A: No, there are many applications built with VS2005 that have to support Windows XP, 2000, NT - the whole stack. The issue is that (by default) VS2005 wants to use libraries/exports not present on NT. See this thread for some background. Then start limiting your dependencies via preprocessor macros, and avoid APIs which aren't supported on NT.

A: To get rid of the _AFXDLL error, have you tried changing the settings to use MFC as a static lib instead of a DLL? This is similar to what you're already doing in changing the runtime libs to static instead of DLL.

A: The workaround is to fix the multi-threaded DLL. Simple instructions; short summary: the shipping 8.0 C Runtime Library DLL (MSVCR80.DLL) does not support NT 4.0 SP6 for one reason and one reason only: someone at Microsoft added a function call to GetLongPathNameW, which does not exist in kernel32.dll on NT 4.0. In CRTLIB.C, on line 577, there is a call to GetLongPathNameW. Simply replace it with:

ret = 0;

Only use this build of MSVCR80.DLL on NT 4.0. Once you've got that working, coming up with a more generic solution should be trivial.
{ "language": "en", "url": "https://stackoverflow.com/questions/609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Most efficient code for the first 10000 prime numbers? I want to print the first 10000 prime numbers. Can anyone give me the most efficient code for this? Clarifications:

* It does not matter if your code is inefficient for n > 10000.
* The size of the code does not matter.
* You cannot just hard-code the values in any manner.

A: Using GMP, one could write the following:

#include <stdio.h>
#include <gmp.h>

int main()
{
    mpz_t prime;
    mpz_init(prime);
    mpz_set_ui(prime, 1);
    int i;
    for (i = 0; i < 10000; i++)
    {
        mpz_nextprime(prime, prime);
        printf("%s, ", mpz_get_str(NULL, 10, prime));
    }
}

On my 2.33GHz MacBook Pro, it executes as follows:

time ./a.out > /dev/null
real 0m0.033s
user 0m0.029s
sys 0m0.003s

Calculating 1,000,000 primes on the same laptop:

time ./a.out > /dev/null
real 0m14.824s
user 0m14.606s
sys 0m0.086s

GMP is highly optimized for this sort of thing. Unless you really want to understand the algorithms by writing your own, you'd be advised to use libGMP under C.

A: Not efficient at all, but you can use a regular expression to test for prime numbers.

/^1?$|^(11+?)\1+$/

This tests whether, for a string consisting of k "1"s, k is not prime (i.e. whether the string consists of one "1" or any number of "1"s that can be expressed as an n-ary product).

A: The Sieve of Atkin is probably what you're looking for; its upper-bound running time is O(N/log log N). If you only run the numbers 1 more and 1 less than the multiples of 6, it could be even faster, as all prime numbers above 3 are 1 away from some multiple of six. Resource for my statement

A: I have adapted code found on CodeProject to create the following:

ArrayList primeNumbers = new ArrayList();
for (int i = 2; primeNumbers.Count < 10000; i++)
{
    bool divisible = false;
    foreach (int number in primeNumbers)
    {
        if (i % number == 0)
        {
            divisible = true;
        }
    }
    if (divisible == false)
    {
        primeNumbers.Add(i);
        Console.Write(i + " ");
    }
}

Testing this on my ASP.NET server, the routine took about 1 minute to run.

A: The Sieve of Eratosthenes is the way to go, because of its simplicity and speed. My implementation in C:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <math.h>

int main(void)
{
    unsigned int lim, i, j;
    printf("Find primes upto: ");
    scanf("%d", &lim);
    lim += 1;
    bool *primes = calloc(lim, sizeof(bool));
    unsigned int sqrtlim = sqrt(lim);
    for (i = 2; i <= sqrtlim; i++)
        if (!primes[i])
            for (j = i * i; j < lim; j += i)
                primes[j] = true;
    printf("\nListing prime numbers between 2 and %d:\n\n", lim - 1);
    for (i = 2; i < lim; i++)
        if (!primes[i])
            printf("%d\n", i);
    return 0;
}

CPU time to find primes (on a Pentium Dual Core E2140, 1.6 GHz, using a single core): ~4 s for lim = 100,000,000.

A: I recommend a sieve, either the Sieve of Eratosthenes or the Sieve of Atkin. The Sieve of Eratosthenes is probably the most intuitive method of finding a list of primes. Basically you:

* Write down a list of numbers from 2 to whatever limit you want, let's say 1000.
* Take the first number that isn't crossed off (for the first iteration this is 2) and cross off all multiples of that number from the list.
* Repeat step 2 until you reach the end of the list. All the numbers that aren't crossed off are prime.

Obviously there are quite a few optimizations that can be done to make this algorithm work faster, but this is the basic idea. The Sieve of Atkin uses a similar approach, but unfortunately I don't know enough about it to explain it to you.
But I do know that the algorithm I linked takes 8 seconds to figure out all the primes up to 1,000,000,000 on an ancient Pentium II-350.

Sieve of Eratosthenes source code: http://web.archive.org/web/20140705111241/http://primes.utm.edu/links/programs/sieves/Eratosthenes/C_source_code/
Sieve of Atkin source code: http://cr.yp.to/primegen.html

A: Here is a Sieve of Eratosthenes that I wrote in PowerShell a few days ago. It has a parameter for identifying the number of prime numbers that should be returned.

#
# generate a list of primes up to a specific target using a sieve of eratosthenes
#
function getPrimes { #sieve of eratosthenes, http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
    param ($target,$count = 0)
    $sieveBound = [math]::ceiling(( $target - 1 ) / 2) #not storing evens so count is lower than $target
    $sieve = @($false) * $sieveBound
    $crossLimit = [math]::ceiling(( [math]::sqrt($target) - 1 ) / 2)
    for ($i = 1; $i -le $crossLimit; $i ++) {
        if ($sieve[$i] -eq $false) {
            $prime = 2 * $i + 1
            write-debug "Found: $prime"
            for ($x = 2 * $i * ( $i + 1 ); $x -lt $sieveBound; $x += 2 * $i + 1) {
                $sieve[$x] = $true
            }
        }
    }
    $primes = @(2)
    for ($i = 1; $i -le $sieveBound; $i ++) {
        if($count -gt 0 -and $primes.length -ge $count) {
            break;
        }
        if($sieve[$i] -eq $false) {
            $prime = 2 * $i + 1
            write-debug "Output: $prime"
            $primes += $prime
        }
    }
    return $primes
}

A: This isn't strictly against the hardcoding restriction, but it comes terribly close. Why not programmatically download this list and print it out instead? http://primes.utm.edu/lists/small/10000.txt

A: Adapting and following on from GateKiller, here's the final version that I've used.

public IEnumerable<long> PrimeNumbers(long number)
{
    List<long> primes = new List<long>();
    for (int i = 2; primes.Count < number; i++)
    {
        bool divisible = false;
        foreach (int num in primes)
        {
            if (i % num == 0)
                divisible = true;
            if (num > Math.Sqrt(i))
                break;
        }
        if (divisible == false)
            primes.Add(i);
    }
    return primes;
}

It's basically the same, but I've added the "break on Sqrt" suggestion and changed some of the variables around to make it fit better for me. (I was working on Euler and needed the 10001st prime.)

A: The Sieve seems to be the wrong answer. The sieve gives you the primes up to a number N, not the first N primes. Run @Imran's or @Andrew Szeto's, and you get the primes up to N. The sieve might still be usable if you keep trying sieves for increasingly larger numbers until you hit a certain size for your result set, and use some caching of numbers already obtained, but I believe it would still be no faster than a solution like @Pat's.

A: The deque sieve algorithm mentioned by BenGoldberg deserves a closer look, not only because it is very elegant but also because it can occasionally be useful in practice (unlike the Sieve of Atkin, which is a purely academic exercise).

The basic idea behind the deque sieve algorithm is to use a small, sliding sieve that is only large enough to contain at least one separate multiple for each of the currently 'active' prime factors - i.e. those primes whose square does not exceed the lowest number currently represented by the moving sieve. Another difference from the SoE is that the deque sieve stores the actual factors in the slots of composites, not booleans.

The algorithm extends the size of the sieve window as needed, resulting in fairly even performance over a wide range until the sieve starts appreciably exceeding the capacity of the CPU's L1 cache.
The last prime that fits fully is 25,237,523 (the 1,579,791st prime), which gives a rough ballpark figure for the reasonable operating range of the algorithm.

The algorithm is fairly simple and robust, and it has even performance over a much wider range than an unsegmented Sieve of Eratosthenes. The latter is a lot faster as long as its sieve fits fully into the cache, i.e. up to 2^16 for an odds-only sieve with byte-sized bools. Then its performance drops more and more, although it always remains significantly faster than the deque despite the handicap (at least in compiled languages like C/C++, Pascal or Java/C#).

Here is a rendering of the deque sieve algorithm in C#, because I find that language - despite its many flaws - much more practical for prototyping algorithms and experimentation than the supremely cumbersome and pedantic C++. (Sidenote: I'm using the free LINQPad, which makes it possible to dive right in, without all the messiness of setting up projects, makefiles, directories or whatnot, and it gives me the same degree of interactivity as a Python prompt.)

C# doesn't have an explicit deque type, but the plain List<int> works well enough for demonstrating the algorithm.

Note: this version does not use a deque for the primes, because it simply doesn't make sense to pop off sqrt(n) out of n primes. What good would it be to remove 100 primes and leave 9900? At least this way all the primes are collected in a neat vector, ready for further processing.

static List<int> deque_sieve (int n = 10000)
{
    Trace.Assert(n >= 3);

    var primes = new List<int>() { 2, 3 };
    var sieve = new List<int>() { 0, 0, 0 };

    for (int sieve_base = 5, current_prime_index = 1, current_prime_squared = 9; ; )
    {
        int base_factor = sieve[0];

        if (base_factor != 0)
        {
            // the sieve base has a non-trivial factor - put that factor back into circulation
            mark_next_unmarked_multiple(sieve, base_factor);
        }
        else if (sieve_base < current_prime_squared)  // no non-trivial factor -> found a non-composite
        {
            primes.Add(sieve_base);

            if (primes.Count == n)
                return primes;
        }
        else  // sieve_base == current_prime_squared
        {
            // bring the current prime into circulation by injecting it into the sieve ...
            mark_next_unmarked_multiple(sieve, primes[current_prime_index]);

            // ... and elect a new current prime
            current_prime_squared = square(primes[++current_prime_index]);
        }

        // slide the sieve one step forward
        sieve.RemoveAt(0);
        sieve_base += 2;
    }
}

Here are the two helper functions:

static void mark_next_unmarked_multiple (List<int> sieve, int prime)
{
    int i = prime, e = sieve.Count;

    while (i < e && sieve[i] != 0)
        i += prime;

    for ( ; e <= i; ++e)  // no List<>.Resize()...
        sieve.Add(0);

    sieve[i] = prime;
}

static int square (int n)
{
    return n * n;
}

Probably the easiest way of understanding the algorithm is to imagine it as a special segmented Sieve of Eratosthenes with a segment size of 1, accompanied by an overflow area where the primes come to rest when they shoot over the end of the segment. Except that the single cell of the segment (a.k.a. sieve[0]) has already been sieved when we get to it, because it got run over while it was part of the overflow area.

The number that is represented by sieve[0] is held in sieve_base, although sieve_front or window_base would also be good names that allow drawing parallels to Ben's code or to implementations of segmented/windowed sieves.

If sieve[0] contains a non-zero value, then that value is a factor of sieve_base, which can thus be recognised as composite.
Since cell 0 is a multiple of that factor, it is easy to compute its next hop, which is simply 0 plus that factor. Should that cell be occupied already by another factor, then we simply add the factor again, and so on until we find a multiple of the factor where no other factor is currently parked (extending the sieve if needed). This also means that there is no need for storing the current working offsets of the various primes from one segment to the next, as in a normal segmented sieve. Whenever we find a factor in sieve[0], its current working offset is 0.

The current prime comes into play in the following way. A prime can only become current after its own occurrence in the stream (i.e. when it has been detected as a prime, because it is not marked with a factor), and it will remain current until the exact moment that sieve[0] reaches its square. All lower multiples of this prime must have been struck off due to the activities of smaller primes, just like in a normal SoE. But none of the smaller primes can strike off the square, since the only factor of the square is the prime itself and it is not yet in circulation at this point. That explains the actions taken by the algorithm in the case sieve_base == current_prime_squared (which implies sieve[0] == 0, by the way).

Now the case sieve[0] == 0 && sieve_base < current_prime_squared is easily explained: it means that sieve_base cannot be a multiple of any of the primes smaller than the current prime, or else it would have been marked as composite. It cannot be a higher multiple of the current prime either, since its value is less than the current prime's square. Hence it must be a new prime.

The algorithm is obviously inspired by the Sieve of Eratosthenes, but equally obviously it is very different. The Sieve of Eratosthenes derives its superior speed from the simplicity of its elementary operations: one single index addition and one store for each step of the operation is all that it does for long stretches of time.

Here is a simple, unsegmented Sieve of Eratosthenes that I normally use for sieving factor primes in the ushort range, i.e. up to 2^16. For this post I've modified it to work beyond 2^16 by substituting int for ushort:

static List<int> small_odd_primes_up_to (int n)
{
    var result = new List<int>();

    if (n < 3) return result;

    int sqrt_n_halved = (int)(Math.Sqrt(n) - 1) >> 1, max_bit = (n - 1) >> 1;
    var odd_composite = new bool[max_bit + 1];

    for (int i = 3 >> 1; i <= sqrt_n_halved; ++i)
        if (!odd_composite[i])
            for (int p = (i << 1) + 1, j = p * p >> 1; j <= max_bit; j += p)
                odd_composite[j] = true;

    result.Add(3);  // needs to be handled separately because of the mod 3 wheel

    // read out the sieved primes
    for (int i = 5 >> 1, d = 1; i <= max_bit; i += d, d ^= 3)
        if (!odd_composite[i])
            result.Add((i << 1) + 1);

    return result;
}

When sieving the first 10000 primes a typical L1 cache of 32 KiByte will be exceeded, but the function is still very fast (a fraction of a millisecond even in C#).

If you compare this code to the deque sieve, then it is easy to see that the operations of the deque sieve are a lot more complicated, and it cannot effectively amortise its overhead because it always does the shortest possible stretch of crossings-off in a row (exactly one single crossing-off, after skipping all multiples that have been crossed off already).

Note: the C# code uses int instead of uint because newer compilers have a habit of generating substandard code for uint, probably in order to push people towards signed integers...
In the C++ version of the code above I used unsigned throughout, naturally; the benchmark had to be in C++ because I wanted it to be based on a supposedly adequate deque type (std::deque<unsigned>; there was no performance gain from using unsigned short). Here are the numbers for my Haswell laptop (VC++ 2015/x64):

deque vs simple: 1.802 ms vs 0.182 ms
deque vs simple: 1.836 ms vs 0.170 ms
deque vs simple: 1.729 ms vs 0.173 ms

Note: the C# times are pretty much exactly double the C++ timings, which is pretty good for C#, and it shows that List<int> is no slouch even if abused as a deque.

The simple sieve code still blows the deque out of the water, even though it is already operating beyond its normal working range (L1 cache size exceeded by 50%, with attendant cache thrashing). The dominating part here is the reading out of the sieved primes, and this is not affected much by the cache problem. In any case the function was designed for sieving the factors of factors, i.e. level 0 in a 3-level sieve hierarchy, and typically it has to return only a few hundred factors or a low number of thousands. Hence its simplicity.

Performance could be improved by more than an order of magnitude by using a segmented sieve and optimising the code for extracting the sieved primes (stepped mod 3 and unrolled twice, or mod 15 and unrolled once), and yet more performance could be squeezed out of the code by using a mod 16 or mod 30 wheel with all the trimmings (i.e. full unrolling for all residues). Something like that is explained in my answer to Find prime positioned prime number over on Code Review, where a similar problem was discussed. But it's hard to see the point in improving sub-millisecond times for a one-off task...

To put things a bit into perspective, here are the C++ timings for sieving up to 100,000,000:

deque vs simple: 1895.521 ms vs 432.763 ms
deque vs simple: 1847.594 ms vs 429.766 ms
deque vs simple: 1859.462 ms vs 430.625 ms

By contrast, a segmented sieve in C# with a few bells and whistles does the same job in 95 ms (no C++ timings available, since I do code challenges only in C# at the moment).

Things may look decidedly different in an interpreted language like Python, where every operation has a heavy cost and the interpreter overhead dwarfs all differences due to predicted vs. mispredicted branches or sub-cycle ops (shift, addition) vs. multi-cycle ops (multiplication, and perhaps even division). That is bound to erode the simplicity advantage of the Sieve of Eratosthenes, and this could make the deque solution a bit more attractive.

Also, many of the timings reported by other respondents in this topic are probably dominated by output time.
That's an entirely different war, where my main weapon is a simple class like this: class CCWriter { const int SPACE_RESERVE = 11; // UInt31 + '\n' public static System.IO.Stream BaseStream; static byte[] m_buffer = new byte[1 << 16]; // need 55k..60k for a maximum-size range static int m_write_pos = 0; public static long BytesWritten = 0; // for statistics internal static ushort[] m_double_digit_lookup = create_double_digit_lookup(); internal static ushort[] create_double_digit_lookup () { var lookup = new ushort[100]; for (int lo = 0; lo < 10; ++lo) for (int hi = 0; hi < 10; ++hi) lookup[hi * 10 + lo] = (ushort)(0x3030 + (hi << 8) + lo); return lookup; } public static void Flush () { if (BaseStream != null && m_write_pos > 0) BaseStream.Write(m_buffer, 0, m_write_pos); BytesWritten += m_write_pos; m_write_pos = 0; } public static void WriteLine () { if (m_buffer.Length - m_write_pos < 1) Flush(); m_buffer[m_write_pos++] = (byte)'\n'; } public static void WriteLinesSorted (int[] values, int count) { int digits = 1, max_value = 9; for (int i = 0; i < count; ++i) { int x = values[i]; if (m_buffer.Length - m_write_pos < SPACE_RESERVE) Flush(); while (x > max_value) if (++digits < 10) max_value = max_value * 10 + 9; else max_value = int.MaxValue; int n = x, p = m_write_pos + digits, e = p + 1; m_buffer[p] = (byte)'\n'; while (n >= 10) { int q = n / 100, w = m_double_digit_lookup[n - q * 100]; n = q; m_buffer[--p] = (byte)w; m_buffer[--p] = (byte)(w >> 8); } if (n != 0 || x == 0) m_buffer[--p] = (byte)((byte)'0' + n); m_write_pos = e; } } } That takes less than 1 ms for writing 10000 (sorted) numbers. It's a static class because it is intended for textual inclusion in coding challenge submissions, with a minimum of fuss and zero overhead. In general I found it to be much faster if focussed work is done on entire batches, meaning sieve a certain range, then extract all primes into a vector/array, then blast out the whole array, then sieve the next range and so on, instead of mingling everything together. Having separate functions focussed on specific tasks also makes it easier to mix and match, it enables reuse, and it eases development/testing. A: In Haskell, we can write down almost word for word the mathematical definition of the sieve of Eratosthenes, "primes are natural numbers above 1 without any composite numbers, where composites are found by enumeration of each prime's multiples": import Data.List.Ordered (minus, union) primes = 2 : minus [3..] (foldr (\p r -> p*p : union [p*p+p, p*p+2*p..] r) [] primes) primes !! 10000 is near-instantaneous. References: * *Sieve of Eratosthenes *Richard Bird's sieve (see pp. 10,11) *minus, union The above code is easily tweaked into working on odds only, primes = 2 : 3 : minus [5,7..] (foldr (\p r -> p*p : union [p*p+2*p, p*p+4*p..] r) [] (tail primes)). Time complexity is much improved (to just about a log factor above optimal) by folding in a tree-like structure, and space complexity is drastically improved by multistage primes production, in primes = 2 : _Y ( (3:) . sieve 5 . _U . map (\p -> [p*p, p*p+2*p..]) ) where _Y g = g (_Y g) -- non-sharing fixpoint combinator _U ((x:xs):t) = x : (union xs . _U . pairs) t -- ~= nub.sort.concat pairs (xs:ys:t) = union xs ys : pairs t sieve k s@(x:xs) | k < x = k : sieve (k+2) s -- ~= [k,k+2..]\\s, | otherwise = sieve (k+2) xs -- when s⊂[k,k+2..] (In Haskell the parentheses are used for grouping, a function call is signified just by juxtaposition, (:) is a cons operator for lists, and (.) 
is a functional composition operator: (f . g) x = (\y -> f (g y)) x = f (g x)).

A: GateKiller, how about adding a break to that if in the foreach loop? That would speed things up a lot, because if, say, 6 is divisible by 2, you don't need to check 3 and 5. (I'd vote your solution up anyway if I had enough reputation :-) ...)

ArrayList primeNumbers = new ArrayList();
for (int i = 2; primeNumbers.Count < 10000; i++)
{
    bool divisible = false;
    foreach (int number in primeNumbers)
    {
        if (i % number == 0)
        {
            divisible = true;
            break;
        }
    }
    if (divisible == false)
    {
        primeNumbers.Add(i);
        Console.Write(i + " ");
    }
}

A: @Matt: log(log(10000)) is ~2

From the Wikipedia article on the Sieve of Atkin (which you cited):

This sieve computes primes up to N using O(N/log log N) operations with only N1/2+o(1) bits of memory. That is a little better than the sieve of Eratosthenes which uses O(N) operations and O(N1/2(log log N)/log N) bits of memory (A.O.L. Atkin, D.J. Bernstein, 2004). These asymptotic computational complexities include simple optimizations, such as wheel factorization, and splitting the computation to smaller blocks.

Given asymptotic computational complexities of O(N) (for Eratosthenes) and O(N/log(log(N))) (for Atkin), we can't say (for small N = 10_000) which algorithm, if implemented, will be faster.

Achim Flammenkamp wrote in The Sieve of Eratosthenes (cited by @num1):

For intervals larger about 10^9, surely for those > 10^10, the Sieve of Eratosthenes is outperformed by the Sieve of Atkins and Bernstein which uses irreducible binary quadratic forms. See their paper for background informations as well as paragraph 5 of W. Galway's Ph.D. thesis.

Therefore, for 10_000 the Sieve of Eratosthenes can be faster than the Sieve of Atkin.

To answer the OP, the code is prime_sieve.c (cited by num1).

A: In Python:

import gmpy
p = 1
for i in range(10000):
    p = gmpy.next_prime(p)
    print p

A: Here is my VB 2008 code, which finds all primes < 10,000,000 in 1 min 27 secs on my work laptop. It skips even numbers and only tests against primes that are < the sqrt of the test number. It is only designed to find primes from 0 to a sentinel value.

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Dim TestNum As Integer
    Dim X As Integer
    Dim Z As Integer
    Dim TM As Single
    Dim TS As Single
    Dim TMS As Single
    Dim UnPrime As Boolean
    Dim Sentinal As Integer
    Button1.Text = "Thinking"
    Button1.Refresh()
    Sentinal = Val(SentinalTxt.Text)
    UnPrime = True
    Primes(0) = 2
    Primes(1) = 3
    Z = 1
    TM = TimeOfDay.Minute
    TS = TimeOfDay.Second
    TMS = TimeOfDay.Millisecond
    For TestNum = 5 To Sentinal Step 2
        Do While Primes(X) <> 0 And UnPrime And Primes(X) ^ 2 <= TestNum
            If Int(TestNum / Primes(X)) - (TestNum / Primes(X)) = 0 Then
                UnPrime = False
            End If
            X = X + 1
        Loop
        If UnPrime = True Then
            X = X + 1
            Z = Z + 1
            Primes(Z) = TestNum
        End If
        UnPrime = True
        X = 0
    Next
    Button1.Text = "Finished with " & Z
    TM = TimeOfDay.Minute - TM
    TS = TimeOfDay.Second - TS
    TMS = TimeOfDay.Millisecond - TMS
    ShowTime.Text = TM & ":" & TS & ":" & TMS
End Sub

A: The following Mathcad code calculated the first million primes in under 3 minutes. Bear in mind that this would be using floating point doubles for all of the numbers and is basically interpreted. I hope the syntax is clear.
A: Here is a C++ solution, using a form of SoE:

#include <iostream>
#include <deque>

typedef std::deque<int> mydeque;

void my_insert( mydeque & factors, int factor )
{
    int where = factor, count = factors.size();
    while( where < count && factors[where] ) where += factor;
    if( where >= count ) factors.resize( where + 1 );
    factors[ where ] = factor;
}

int main()
{
    mydeque primes;
    mydeque factors;
    int a_prime = 3, a_square_prime = 9, maybe_prime = 3;
    int cnt = 2;
    factors.resize(3);
    std::cout << "2 3 ";

    while( cnt < 10000 )
    {
        int factor = factors.front();
        maybe_prime += 2;
        if( factor ) {
            my_insert( factors, factor );
        } else if( maybe_prime < a_square_prime ) {
            std::cout << maybe_prime << " ";
            primes.push_back( maybe_prime );
            ++cnt;
        } else {
            my_insert( factors, a_prime );
            a_prime = primes.front();
            primes.pop_front();
            a_square_prime = a_prime * a_prime;
        }
        factors.pop_front();
    }

    std::cout << std::endl;
    return 0;
}

Note that this version of the sieve can compute primes indefinitely.

Also note, the STL deque takes O(1) time to perform push_back, pop_front, and random access through subscripting. The resize operation takes O(n) time, where n is the number of elements being added. Due to how we are using this function, we can treat this as a small constant.

The body of the while loop in my_insert is executed O(log log n) times, where n equals the variable maybe_prime. This is because the condition expression of the while will evaluate to true once for each prime factor of maybe_prime. See "Divisor function" on Wikipedia.

Multiplying by the number of times my_insert is called shows that it should take O(n log log n) time to list n primes... which is, unsurprisingly, the time complexity which the Sieve of Eratosthenes is supposed to have.

However, while this code is efficient, it's not the most efficient... I would strongly suggest using a specialized library for prime generation, such as primesieve. Any truly efficient, well-optimized solution will take more code than anyone wants to type into Stack Overflow.

A: Using the Sieve of Eratosthenes, computation is much faster than with the widely known trial-division style algorithms. By using the pseudocode from its wiki page (https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes), I was able to implement the solution in C#.

/// Get non-negative prime numbers until n using Sieve of Eratosthenes.
public int[] GetPrimes(int n)
{
    if (n <= 1)
    {
        return new int[] { };
    }

    var mark = new bool[n];
    for (var i = 2; i < n; i++)
    {
        mark[i] = true;
    }

    for (var i = 2; i < Math.Sqrt(n); i++)
    {
        if (mark[i])
        {
            for (var j = (i * i); j < n; j += i)
            {
                mark[j] = false;
            }
        }
    }

    var primes = new List<int>();
    for (var i = 2; i < n; i++) // start at 2 so the first prime is included
    {
        if (mark[i])
        {
            primes.Add(i);
        }
    }

    return primes.ToArray();
}

GetPrimes(100000000) takes 2 s and 330 ms. NOTE: values might vary depending on hardware specifications.

A: Here is my code, which finds the first 10,000 primes in 0.049655 sec on my laptop, the first 1,000,000 primes in under 6 seconds, and the first 2,000,000 in 15 seconds. A little explanation:
This method uses two techniques to find prime numbers:

* First of all, any non-prime number is a composite of prime factors, so this code tests by dividing the test number by smaller prime numbers instead of by every number. This decreases the calculation by at least 10 times for a 4-digit number, and by even more for bigger numbers.
* Secondly, besides dividing by primes, it only divides by primes that are smaller than or equal to the root of the number being tested, further reducing the calculations greatly. This works because any factor that is greater than the root of the number must have a counterpart factor that is smaller than the root; since we have already tested all numbers smaller than the root, we don't need to bother with numbers greater than the root of the number being tested.

Sample output for the first 10,000 prime numbers:
https://drive.google.com/open?id=0B2QYXBiLI-lZMUpCNFhZeUphck0
https://drive.google.com/open?id=0B2QYXBiLI-lZbmRtTkZETnp6Ykk

Here is the code, in C. Enter 1 and then 10,000 to print out the first 10,000 primes.

Edit: I forgot this uses the math library. If you are on Windows or Visual Studio that should be fine, but on Linux you must compile the code with the -lm argument or the code may not work. Example: gcc -Wall -o "%e" "%f" -lm

#include <stdio.h>
#include <math.h>
#include <time.h>
#include <limits.h>

/* Finding prime numbers */

int main()
{
    //pre-phase
    char d,w;
    int l,o;
    printf(" 1. Find first n number of prime numbers or Find all prime numbers smaller than n ?\n"); // this question helps in setting the limits on m or n value i.e l or o
    printf(" Enter 1 or 2 to get answer of first or second question\n");
    // decision making
    do
    {
        printf(" -->");
        scanf("%c",&d);
        while ((w=getchar()) != '\n' && w != EOF);
        if ( d == '1')
        {
            printf("\n 2. Enter the target no.
of primes you will like to find from 3 to 2,000,000 range\n -->"); scanf("%10d",&l); o=INT_MAX; printf(" Here we go!\n\n"); break; } else if ( d == '2' ) { printf("\n 2.Enter the limit under which to find prime numbers from 5 to 2,000,000 range\n -->"); scanf("%10d",&o); l=o/log(o)*1.25; printf(" Here we go!\n\n"); break; } else printf("\n Try again\n"); }while ( d != '1' || d != '2' ); clock_t start, end; double cpu_time_used; start = clock(); /* starting the clock for time keeping */ // main program starts here int i,j,c,m,n; /* i ,j , c and m are all prime array 'p' variables and n is the number that is being tested */ int s,x; int p[ l ]; /* p is the array for storing prime numbers and l sets the array size, l was initialized in pre-phase */ p[1]=2; p[2]=3; p[3]=5; printf("%10dst:%10d\n%10dnd:%10d\n%10drd:%10d\n",1,p[1],2,p[2],3,p[3]); // first three prime are set for ( i=4;i<=l;++i ) /* this loop sets all the prime numbers greater than 5 in the p array to 0 */ p[i]=0; n=6; /* prime number testing begins with number 6 but this can lowered if you wish but you must remember to update other variables too */ s=sqrt(n); /* 's' does two things it stores the root value so that program does not have to calaculate it again and again and also it stores it in integer form instead of float*/ x=2; /* 'x' is the biggest prime number that is smaller or equal to root of the number 'n' being tested */ /* j ,x and c are related in this way, p[j] <= prime number x <= p[c] */ // the main loop begins here for ( m=4,j=1,c=2; m<=l && n <= o;) /* this condition checks if all the first 'l' numbers of primes are found or n does not exceed the set limit o */ { // this will divide n by prime number in p[j] and tries to rule out non-primes if ( n%p[j]==0 ) { /* these steps execute if the number n is found to be non-prime */ ++n; /* this increases n by 1 and therefore sets the next number 'n' to be tested */ s=sqrt(n); /* this calaulates and stores in 's' the new root of number 'n' */ if ( p[c] <= s && p[c] != x ) /* 'The Magic Setting' tests the next prime number candidate p[c] and if passed it updates the prime number x */ { x=p[c]; ++c; } j=1; /* these steps sets the next number n to be tested and finds the next prime number x if possible for the new number 'n' and also resets j to 1 for the new cycle */ continue; /* and this restarts the loop for the new cycle */ } // confirmation test for the prime number candidate n else if ( n%p[j]!=0 && p[j]==x ) { /* these steps execute if the number is found to be prime */ p[m]=n; printf("%10dth:%10d\n",m,p[m]); ++n; s = sqrt(n); ++m; j=1; /* these steps stores and prints the new prime number and moves the 'm' counter up and also sets the next number n to be tested and also resets j to 1 for the new cycle */ continue; /* and this restarts the loop */ /* the next number which will be a even and non-prime will trigger the magic setting in the next cycle and therfore we do not have to add another magic setting here*/ } ++j; /* increases p[j] to next prime number in the array for the next cycle testing of the number 'n' */ // if the cycle reaches this point that means the number 'n' was neither divisible by p[j] nor was it a prime number // and therfore it will test the same number 'n' again in the next cycle with a bigger prime number } // the loops ends printf(" All done !!\n"); end = clock(); cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC; printf(" Time taken : %lf sec\n",cpu_time_used); } A: I have written this using python, as I just started learning it, and 
it works perfectly fine. The 10,000th prime generated by this code is the same as the one listed at http://primes.utm.edu/lists/small/10000.txt. To check whether n is prime or not, divide n by the numbers from 2 to sqrt(n). If any number in this range divides n evenly, then n is not prime.

import math
print ("You want prime till which number??")
a = input()
a = int(a)
x = 0
x = int(x)
count = 1
print("2 is prime number")
for c in range(3,a+1):
    b = math.sqrt(c)
    b = int(b)
    x = 0
    for b in range(2,b+1):
        e = c % b
        e = int(e)
        if (e == 0):
            x = x+1
    if (x == 0):
        print("%d is prime number" % c)
        count = count + 1
print("Total number of prime till %d is %d" % (a,count))

A: I spent some time writing a program calculating a lot of primes, and this is the code I used to produce a text file containing the first 1,000,000,000 primes. It's in German, but the interesting part is the method calcPrims(). The primes are stored in an array called Primzahlen. I recommend a 64-bit CPU because the calculations use 64-bit integers.

import java.io.*;

class Primzahlengenerator {
    long[] Primzahlen;
    int LastUnknown = 2;

    public static void main(String[] args)  {
        Primzahlengenerator Generator = new Primzahlengenerator();
        switch(args.length) {
            case 0:  //Wenn keine Argumente übergeben worden:
                Generator.printHelp(); //Hilfe ausgeben
                return; //Durchfallen verhindern
            case 1:
                try {
                    Generator.Primzahlen = new long[Integer.decode(args[0]).intValue()];
                }
                catch (NumberFormatException e) {
                    System.out.println("Das erste Argument muss eine Zahl sein, und nicht als Wort z.B. \"Tausend\", sondern in Ziffern z.B. \"1000\" ausgedrückt werden."); //Hinweis, wie man die Argumente angeben muss ausgeben
                    Generator.printHelp();                  //Generelle Hilfe ausgeben
                    return;
                }
                break; //durchfallen verhindern
            case 2:
                switch (args[1]) {
                    case "-l":
                        System.out.println("Sie müsen auch eine Datei angeben!"); //Hilfemitteilung ausgeben
                        Generator.printHelp();                  //Generelle Hilfe ausgeben
                        return;
                }
                break; //durchfallen verhindern
            case 3:
                try {
                    Generator.Primzahlen = new long[Integer.decode(args[0]).intValue()];
                }
                catch (NumberFormatException e) {
                    System.out.println("Das erste Argument muss eine Zahl sein, und nicht als Wort z.B. \"Tausend\", sondern in Ziffern z.B. \"1000\" ausgedrückt werden."); //Hinweis, wie man die Argumente angeben muss ausgeben
                    Generator.printHelp();                  //Generelle Hilfe ausgeben
                    return;
                }
                switch(args[1]) {
                    case "-l":
                        Generator.loadFromFile(args[2]); //Datei namens des Inhalts von Argument 3 lesen, falls Argument 2 = "-l" ist
                        break;
                    default:
                        Generator.printHelp();
                        break;
                }
                break;
            default:
                Generator.printHelp();
                return;
        }
        Generator.calcPrims();
    }

    void printHelp() {
        System.out.println("Sie müssen als erstes Argument angeben, die wieviel ersten Primzahlen sie berechnen wollen."); //Anleitung wie man das Programm mit Argumenten füttern muss
        System.out.println("Als zweites Argument können sie \"-l\" wählen, worauf die Datei, aus der die Primzahlen geladen werden sollen,");
        System.out.println("folgen muss. Sie muss genauso aufgebaut sein, wie eine Datei Primzahlen.txt, die durch den Aufruf \"java Primzahlengenerator 1000 > Primzahlen.txt\" entsteht.");
    }

    void loadFromFile(String File) {
        // System.out.println("Lese Datei namens: \"" + File + "\"");
        try {
            int x = 0;
            BufferedReader in = new BufferedReader(new FileReader(File));
            String line;
            while((line = in.readLine()) != null) {
                Primzahlen[x] = new Long(line).longValue();
                x++;
            }
            LastUnknown = x;
        } catch(FileNotFoundException ex) {
            System.out.println("Die angegebene Datei existiert nicht.
        } catch(IOException ex) {
            System.err.println(ex);
        } catch(ArrayIndexOutOfBoundsException ex) {
            // the file holds more primes than the reserved array can store; please pass a
            // bigger number as the first argument so all primes in the file can be loaded
            System.out.println("Die Datei enthält mehr Primzahlen als der reservierte Speicherbereich aufnehmen kann. Bitte geben sie als erstes Argument eine größere Zahl an,");
            System.out.println("damit alle in der Datei enthaltenen Primzahlen aufgenommen werden können.");
        }
        /* for(long prim : Primzahlen) {
               System.out.println("" + prim);
           } */
        // Developer notes, translated: this reads a file of the form produced by
        // "java Primzahlengenerator 1000 > 1000Primzahlen.txt" - a text file containing the primes.
        // Long.decode(String digits).longValue() yields the value for the matching array slot:
        // the first line goes into [0], the second into [1], and so on. If the array runs out
        // of space you get the same exception as in: int[] foo = { 1, 2, 3 }; int bar = foo[4];
    }

    void calcPrims() {
        int PrimzahlNummer = LastUnknown;
        // System.out.println("LastUnknown ist: " + LastUnknown);
        Primzahlen[0] = 2;
        Primzahlen[1] = 3;
        long AktuelleZahl = Primzahlen[PrimzahlNummer - 1];
        boolean IstPrimzahl;
        // System.out.println("2");
        // System.out.println("3");
        int Limit = Primzahlen.length;
        while(PrimzahlNummer < Limit) {
            IstPrimzahl = true;
            double WurzelDerAktuellenZahl = java.lang.Math.sqrt(AktuelleZahl);
            for(int i = 1; i < PrimzahlNummer; i++) {
                if(AktuelleZahl % Primzahlen[i] == 0) {
                    IstPrimzahl = false;
                    break;
                }
                if(Primzahlen[i] > WurzelDerAktuellenZahl) break;
            }
            if(IstPrimzahl) {
                Primzahlen[PrimzahlNummer] = AktuelleZahl;
                PrimzahlNummer++;
                // System.out.println("" + AktuelleZahl);
            }
            AktuelleZahl = AktuelleZahl + 2;
        }
        for(long prim : Primzahlen) {
            System.out.println("" + prim);
        }
    }
}

A: I have been working on finding primes for about a year.
This is what I found to be the fastest:

import static java.lang.Math.sqrt;
import java.io.PrintWriter;
import java.io.File;

public class finder {
    public static void main(String[] args) {
        primelist primes = new primelist();
        primes.insert(3);
        primes.insert(5);
        File file = new File("C:/Users/Richard/Desktop/directory/file0024.txt");
        file.getParentFile().mkdirs();
        long time = System.nanoTime();
        try {
            PrintWriter printWriter = new PrintWriter("file0024.txt");
            int linenum = 0;
            printWriter.print("2");
            printWriter.print(" , ");
            printWriter.print("3");
            printWriter.print(" , ");
            int up;
            int down;
            for(int i = 1; i < 357913941; i++) {
                if(linenum % 10000 == 0) {
                    printWriter.println("");
                    linenum++;
                }
                down = i * 6 - 1;
                if(primes.check(down)) {
                    primes.insert(down);
                    //System.out.println(i*6-1);
                    printWriter.print(down);
                    printWriter.print(" , ");
                    linenum++;
                }
                up = i * 6 + 1;
                if(primes.check(up)) {
                    primes.insert(up);
                    //System.out.println(i*6+1);
                    printWriter.print(up);
                    printWriter.print(" , ");
                    linenum++;
                }
            }
            printWriter.println("Time to execute");
            printWriter.println(System.nanoTime() - time);
            //System.out.println(primes.length);
            printWriter.close();
        } catch(Exception e) {}
    }
}

class node {
    node next;
    int x;
    public node() {
        x = 3;
    }
    public node(int z) {
        x = z;
    }
}

class primelist {
    node first;
    int length = 0;
    node current;

    public void insert(int x) {
        node y = new node(x);
        if(current == null) {
            current = y;
            first = y;
        } else {
            current.next = y;
            current = y;
        }
        length++;
    }

    public boolean check(int x) {
        int p = (int)sqrt(x);
        node y = first;
        for(int i = 0; i < length; i++) {
            if(y.x > p) {
                return true;
            } else if(x % y.x == 0) {
                return false;
            }
            y = y.next;
        }
        return true;
    }
}

1902465190909 nanoseconds to get to 2147483629, starting at 2.

A: Here is the code that I made:

#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int prime(unsigned long int);

int main() {
    /* Read input from STDIN. Print output to STDOUT. */
    unsigned long int n;
    scanf("%lu", &n);
    unsigned long int val;
    for(unsigned long int i = 0; i < n; i++) {
        int flag = 0;
        scanf("%lu", &val);
        flag = prime(val);
        if(flag == 1)
            printf("yes\n");
        else
            printf("no\n");
    }
    return 0;
}

int prime(unsigned long int n) {
    if(n == 2)
        return 1;
    else if (n == 1 || n % 2 == 0)
        return 0;
    for (unsigned long int i = 3; i <= sqrt(n); i += 2)
        if (n % i == 0)
            return 0;
    return 1;
}

A: Using the Array.prototype.find() method in JavaScript. 2214.486 ms

function isPrime(number) {
  function prime(element) {
    let start = 2;
    while (start <= Math.sqrt(element)) {
      if (element % start++ < 1) {
        return false;
      }
    }
    return element > 1;
  }
  return [number].find(prime);
}

function logPrimes(n) {
  let count = 0;
  let nth = n;
  let i = 0;
  while (count < nth) {
    if (isPrime(i)) {
      count++;
      console.log('i', i); // NOTE: if this line is omitted, the time to find the 10,000th prime is 121.157 ms
      if (count === nth) {
        console.log('while i', i);
        console.log('count', count);
      }
    }
    i++;
  }
}

console.time(logPrimes);
logPrimes(10000);
console.timeEnd(logPrimes); // 2214.486 ms

A: I can give you some tips; you have to implement it yourself.
* For each number, check divisors only up to half of that number. E.g. for checking 21, only obtain the remainder by dividing it by the numbers in the range 2-10.
* If it's an odd number, only divide by odd numbers, and vice versa. For example, for 21, divide by 3, 5, 7, 9 only.
This is the most efficient method I have come up with so far; a sketch follows below.
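A rough Python sketch of those tips (my own illustration, not the answerer's code; the function name is made up):

import math

def is_prime_by_tips(n):
    """Trial division following the tips above: test divisors only up to
    n // 2, and for an odd n skip all even divisors."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # n is odd, so only odd divisors from 3 up to n // 2 can divide it
    for d in range(3, n // 2 + 1, 2):
        if n % d == 0:
            return False
    return True

print([x for x in range(2, 40) if is_prime_by_tips(x)])

The half-of-n bound comes straight from the tips; stopping at sqrt(n), as other answers here do, would be tighter still.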
A: Since you want the first 10000 primes only, rather than coding a complex algorithm I'll suggest the following:

boolean isPrime(int n){
    // 2 is even but prime
    if(n == 2) return true;
    // 0, 1 and the other even numbers are filtered out
    if(n == 0 || n == 1 || n % 2 == 0) return false;
    // loop checking only odd factors;
    // i*i <= n is the same as i <= sqrt(n), avoiding floating point calculations
    for(int i = 3; i * i <= n; i += 2){
        // if any odd factor divides n then it's not a prime!
        if(n % i == 0) return false;
    }
    // it's prime
    return true;
}

Now call isPrime as you need it:

for(int i = 1; i <= 1000; i++){
    if(isPrime(i)){
        //do something
    }
}

A: This is an old question, but there's something here everyone's missing... For primes this small, trial division isn't that slow... there are only 25 primes under 100. With so few primes to test, and such small primes, we can pull out a neat trick! If a is coprime to b, then gcd a b = 1. Coprime. Fun word. It means they don't share any prime factors. We can thus test for divisibility by several primes with one GCD call. How many? Well, the product of the first 15 primes is less than 2^64. And the product of the next 10 is also less than 2^64. That's all 25 that we need. But is it worth it? Let's see:

check x = null $ filter ((==0) . (x `mod`)) $ [<primes up to 101>]

Prelude> length $ filter check [101,103..85600]
>>> 9975
(0.30 secs, 125,865,152 bytes)

a = 16294579238595022365 :: Word64
b = 14290787196698157718
pre = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
primes = (pre ++) $ filter ((==1) . gcd a) $ filter ((==1) . gcd b) [99,101..85600]
main = print $ length primes

Prelude> main
>>> 10000
(0.05 secs, 36,387,520 bytes)

A 6-fold improvement there. (length is there to force the list to be computed. By default Haskell prints things one Unicode character at a time, so actually printing the list would either dominate the time or dominate the amount of actual code used.) Of course, this is running in GHCi - a REPL running interpreted code - on an old laptop, and it is not interpreting any of these numbers as Int64s or even BigInts, nor will it even if you ask it to (well, you can force it, but it's ugly and doesn't really help). It is interpreting every single number there as a generalized Integer-like thing that can be specialized to some particular type via dictionary lookup, and it is traversing a linked list (which is not fused away here, as it's not compiled) 3 times. Interestingly, hand-fusing the two filters actually slows it down in the REPL. Let's compile it:

...\Haskell\8.6\Testbed>Primes.exe +RTS -s
10000
     606,280 bytes allocated in the heap
  Total time    0.000s (  0.004s elapsed)

Using the RTS report because Windows. Some lines were trimmed because they aren't relevant - they were other GC data, or measurements of only part of the execution - and together they add up to 0.004s (or less). It's also not constant folding, because Haskell doesn't actually do much of that. If we constant fold ourselves (main = print 10000), we get dramatically lower allocation:

...Haskell\8.6\Testbed>Primes.exe +RTS -s
10000
      47,688 bytes allocated in the heap
  Total time    0.000s (  0.001s elapsed)

Literally just enough to load the runtime, then discover there's nothing to do but print a number and exit. Let's add wheel factorization:

wheel = scanl (+) 7 $ cycle [4, 2, 4, 2, 4, 6, 2, 6]
primes = (pre ++) $ filter ((==1) . gcd a) $ filter ((==1) .
gcd b) $ takeWhile (<85600) wheel

Total time    0.000s (  0.003s elapsed)

Cut down approximately 1/3rd relative to our reference of main = print 10000, but there's definitely room for more optimization. It actually stopped to perform a GC in there, for example, while with tweaking there shouldn't be any heap use. For some reason, compiling for profiling here actually cuts the runtime down to 2 milliseconds:

Tue Nov 12 21:13 2019 Time and Allocation Profiling Report (Final)

    Primes.exe +RTS -p -RTS

total time  = 0.00 secs (2 ticks @ 1000 us, 1 processor)
total alloc = 967,120 bytes (excludes profiling overheads)

I'm going to leave this as is for now; I'm pretty sure random jitter is starting to dominate.

A:

def compute_primes(bound):
    """
    Return a list of the prime numbers in range(2, bound).

    Implements the Sieve of Eratosthenes:
    https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
    """
    primeNumber = [True for i in range(bound + 1)]
    start_prime_number = 2
    primes = []
    while start_prime_number * start_prime_number <= bound:
        # If primeNumber[start_prime_number] is not changed, then it is a prime
        if primeNumber[start_prime_number]:
            # Update all multiples of start_prime_number
            for i in range(start_prime_number * start_prime_number, bound + 1, start_prime_number):
                primeNumber[i] = False
        start_prime_number += 1
    # Collect all prime numbers
    for start_prime_number in range(2, bound + 1):
        if primeNumber[start_prime_number]:
            primes.append(start_prime_number)
    return primes

print(len(compute_primes(200)))
print(len(compute_primes(2000)))

A: This is Python code that prints the prime numbers between 1 and 1000000.

import math

k = 0
factor = 0
pl = []
for i in range(1, 1000000):
    k = int(math.sqrt(i))
    if i == 2 or i == 3:
        pl.append(i)
    for j in range(2, k + 1):
        if i % j == 0:
            factor = factor + 1
        elif factor == 0 and j == k:
            pl.append(i)
    factor = 0
print(pl)
print(len(pl))

A:

using System;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            int n, i = 3, j, c;
            Console.WriteLine("Please enter your integer: ");
            n = Convert.ToInt32(Console.ReadLine());
            if (n >= 1)
            {
                Console.WriteLine("First " + n + " Prime Numbers are");
                Console.WriteLine("2");
            }
            for(j = 2; j <= n;)
            {
                for(c = 2; c <= i - 1; c++)
                {
                    if(i % c == 0) break;
                }
                if(c == i)
                {
                    Console.WriteLine(i);
                    j++;
                }
                i++;
            }
            Console.Read();
        }
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: When to use lambda, when to use Proc.new? In Ruby 1.8, there are subtle differences between proc/lambda on the one hand, and Proc.new on the other.
* What are those differences?
* Can you give guidelines on how to decide which one to choose?
* In Ruby 1.9, proc and lambda are different. What's the deal?

A: To provide further clarification: Joey says that the return behavior of Proc.new is surprising. However, when you consider that Proc.new behaves like a block, this is not surprising, as that is exactly how blocks behave. Lambdas, on the other hand, behave more like methods. This actually explains why Procs are flexible when it comes to arity (number of arguments) whereas lambdas are not. Blocks don't require all their arguments to be provided, but methods do (unless a default is provided). While providing a lambda argument default is not an option in Ruby 1.8, it is now supported in Ruby 1.9 with the alternative lambda syntax (as noted by webmat):

concat = ->(a, b=2){ "#{a}#{b}" }
concat.call(4,5) # => "45"
concat.call(1)   # => "12"

And Michiel de Mare (the OP) is incorrect about Procs and lambdas behaving the same with arity in Ruby 1.9. I have verified that they still maintain the behavior from 1.8 as specified above. break statements don't actually make much sense in either Procs or lambdas. In Procs, the break would return you from Proc.new, which has already completed. And it doesn't make any sense to break from a lambda, since it's essentially a method, and you would never break from the top level of a method. next, redo, and raise behave the same in both Procs and lambdas, whereas retry is not allowed in either and will raise an exception. And finally, the proc method should never be used, as it is inconsistent and has unexpected behavior. In Ruby 1.8 it actually returns a lambda! In Ruby 1.9 this has been fixed and it returns a Proc. If you want to create a Proc, stick with Proc.new. For more information, I highly recommend O'Reilly's The Ruby Programming Language, which is my source for most of this information.

A: Closures in Ruby is a good overview of how blocks, lambda and proc work in Ruby, with Ruby.

A: I didn't notice any comments on the third method in the question, "proc", which is deprecated but handled differently in 1.8 and 1.9. Here's a fairly verbose example that makes it easy to see the differences between the three similar calls:

def meth1
  puts "method start"
  pr = lambda { return }
  pr.call
  puts "method end"
end

def meth2
  puts "method start"
  pr = Proc.new { return }
  pr.call
  puts "method end"
end

def meth3
  puts "method start"
  pr = proc { return }
  pr.call
  puts "method end"
end

puts "Using lambda"
meth1
puts "--------"
puts "using Proc.new"
meth2
puts "--------"
puts "using proc"
meth3

A: lambda works as expected, like in other languages. The weird Proc.new is surprising and confusing. The return statement in a proc created by Proc.new will not only return control from itself, but also from the method enclosing it.

def some_method
  myproc = Proc.new { return "End." }
  myproc.call
  # Any code below will not get executed!
  # ...
end

You can argue that Proc.new inserts code into the enclosing method, just like a block. But Proc.new creates an object, while blocks are part of an object. And there is another difference between lambda and Proc.new: their handling of (wrong) arguments. lambda complains about it, while Proc.new ignores extra arguments or considers the absence of arguments as nil.
irb(main):021:0> l = ->(x) { x.to_s }
=> #<Proc:0x8b63750@(irb):21 (lambda)>
irb(main):022:0> p = Proc.new { |x| x.to_s }
=> #<Proc:0x8b59494@(irb):22>
irb(main):025:0> l.call
ArgumentError: wrong number of arguments (0 for 1)
    from (irb):21:in `block in irb_binding'
    from (irb):25:in `call'
    from (irb):25
    from /usr/bin/irb:11:in `<main>'
irb(main):026:0> p.call
=> ""
irb(main):049:0> l.call 1, 2
ArgumentError: wrong number of arguments (2 for 1)
    from (irb):47:in `block in irb_binding'
    from (irb):49:in `call'
    from (irb):49
    from /usr/bin/irb:11:in `<main>'
irb(main):050:0> p.call 1, 2
=> "1"

BTW, proc in Ruby 1.8 creates a lambda, while in Ruby 1.9+ it behaves like Proc.new, which is really confusing.

A: I found this page, which shows what the difference between Proc.new and lambda is. According to the page, the only difference is that a lambda is strict about the number of arguments it accepts, whereas Proc.new converts missing arguments to nil. Here is an example IRB session illustrating the difference:

irb(main):001:0> l = lambda { |x, y| x + y }
=> #<Proc:0x00007fc605ec0748@(irb):1>
irb(main):002:0> p = Proc.new { |x, y| x + y }
=> #<Proc:0x00007fc605ea8698@(irb):2>
irb(main):003:0> l.call "hello", "world"
=> "helloworld"
irb(main):004:0> p.call "hello", "world"
=> "helloworld"
irb(main):005:0> l.call "hello"
ArgumentError: wrong number of arguments (1 for 2)
    from (irb):1
    from (irb):5:in `call'
    from (irb):5
    from :0
irb(main):006:0> p.call "hello"
TypeError: can't convert nil into String
    from (irb):2:in `+'
    from (irb):2
    from (irb):6:in `call'
    from (irb):6
    from :0

The page also recommends using lambda unless you specifically want the error-tolerant behavior. I agree with this sentiment. Using a lambda seems a tad more concise, and with such an insignificant difference, it seems the better choice in the average situation. As for Ruby 1.9, sorry, I haven't looked into 1.9 yet, but I don't imagine they would change it all that much (don't take my word for it though; it seems you have heard of some changes, so I am probably wrong there).

A: Another important but subtle difference between procs created with lambda and procs created with Proc.new is how they handle the return statement:
* In a lambda-created proc, the return statement returns only from the proc itself.
* In a Proc.new-created proc, the return statement is a little more surprising: it returns control not just from the proc, but also from the method enclosing the proc!
Here's a lambda-created proc's return in action. It behaves in a way that you probably expect:

def whowouldwin
  mylambda = lambda { return "Freddy" }
  mylambda.call
  # mylambda gets called and returns "Freddy", and execution
  # continues on the next line
  return "Jason"
end

whowouldwin
#=> "Jason"

Now here's a Proc.new-created proc's return doing the same thing. You're about to see one of those cases where Ruby breaks the much-vaunted Principle of Least Surprise:

def whowouldwin2
  myproc = Proc.new { return "Freddy" }
  myproc.call
  # myproc gets called and returns "Freddy",
  # but also returns control from whowouldwin2!
  # The line below *never* gets executed.
  return "Jason"
end

whowouldwin2
#=> "Freddy"

Thanks to this surprising behavior (as well as less typing), I tend to favor using lambda over Proc.new when making procs.

A: To elaborate on Accordion Guy's response: notice that Proc.new creates a proc by being passed a block. I believe that lambda {...} is parsed as a sort of literal, rather than a method call which passes a block.
returning from inside a block attached to a method call will return from the method, not the block, and the Proc.new case is an example of this at play. (This is 1.8. I don't know how this translates to 1.9.)

A: I am a bit late on this, but there is one great but little-known thing about Proc.new not mentioned in the comments at all. As per the documentation: Proc::new may be called without a block only within a method with an attached block, in which case that block is converted to the Proc object. That said, Proc.new lets you chain yielding methods:

def m1
  yield 'Finally!' if block_given?
end

def m2
  m1 &Proc.new
end

m2 { |e| puts e }
#⇒ Finally!

A: It's worth emphasizing that return in a proc returns from the lexically enclosing method, i.e. the method where the proc was created, not the method that called the proc. This is a consequence of the closure property of procs. So the following code outputs nothing:

def foo
  proc = Proc.new { return }
  foobar(proc)
  puts 'foo'
end

def foobar(proc)
  proc.call
  puts 'foobar'
end

foo

Although the proc executes in foobar, it was created in foo, and so the return exits foo, not just foobar. As Charles Caldwell wrote above, it has a GOTO feel to it. In my opinion, return is fine in a block that is executed in its lexical context, but is much less intuitive when used in a proc that is executed in a different context.

A: Proc is older, but the semantics of return are highly counterintuitive to me (at least when I was learning the language) because:
* If you are using proc, you are most likely using some kind of functional paradigm.
* Proc can return out of the enclosing scope (see previous responses), which is basically a goto, and highly non-functional in nature.
Lambda is functionally safer and easier to reason about - I always use it instead of proc.

A: I can't say much about the subtle differences. However, I can point out that Ruby 1.9 now allows optional parameters for lambdas and blocks. Here's the new syntax for the stabby lambdas under 1.9:

stabby = ->(msg='inside the stabby lambda') { puts msg }

Ruby 1.8 didn't have that syntax. Neither did the conventional way of declaring blocks/lambdas support optional args:

# under 1.8
l = lambda { |msg = 'inside the stabby lambda'| puts msg }
SyntaxError: compile error
(irb):1: syntax error, unexpected '=', expecting tCOLON2 or '[' or '.'
l = lambda { |msg = 'inside the stabby lambda'| puts msg }

Ruby 1.9, however, supports optional arguments even with the old syntax:

l = lambda { |msg = 'inside the regular lambda'| puts msg }
#=> #<Proc:0x0e5dbc@(irb):1 (lambda)>
l.call
#=> inside the regular lambda
l.call('jeez')
#=> jeez

If you want to build Ruby 1.9 for Leopard or Linux, check out this article (shameless self-promotion).

A: A good way to see it is that lambdas are executed in their own scope (as if they were method calls), while Procs may be viewed as executed inline with the calling method; at least that's a good way of deciding which one to use in each case.

A: Short answer: What matters is what return does: lambda returns out of itself, and proc returns out of itself AND the function that called it. What is less clear is why you would want to use each. lambda is what we expect things should do in a functional programming sense. It is basically an anonymous method with the current scope automatically bound. Of the two, lambda is the one you should probably be using. Proc, on the other hand, is really useful for implementing the language itself. For example, you can implement "if" statements or "for" loops with them.
Any return found in the proc will return out of the method that called it, not just the "if" statement. This is how languages work, how "if" statements work, so my guess is Ruby uses this under the covers and they just exposed it because it seemed powerful. You would only really need this if you are creating new language constructs like loops, if-else constructs, etc.

A: The difference in behaviour with return is, IMHO, the most important difference between the two. I also prefer lambda because it's less typing than Proc.new :-)
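To make that last point concrete, here is a minimal Ruby sketch (my own illustration, not from the answers above) of a block-based "if" construct. Because blocks behave like Proc.new, a return inside the block exits the calling method, exactly as a return inside a built-in if would:

# A toy "if" built from a block: a return inside the block returns
# from the *calling* method (check), not just from the block itself.
def my_if(condition)
  yield if condition
end

def check(n)
  my_if(n > 10) do
    return "big"   # exits check entirely, like a return inside a real if
  end
  "small"
end

puts check(42)  # => "big"
puts check(3)   # => "small"

Had my_if taken a lambda instead, the return would only have left the lambda, and check would always fall through to "small".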
{ "language": "en", "url": "https://stackoverflow.com/questions/626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "345" }
Q: Swap unique indexed column values in database. I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know are:
* Delete both rows and re-insert them.
* Update the rows with some other value, swap them, and then update to the actual values.
But I don't want to go for these, as they do not seem to be the appropriate solution to the problem. Could anyone help me out?

A: Further to Andy Irving's answer, this worked for me (on SQL Server 2005) in a similar situation where I have a composite key and I need to swap a field which is part of the unique constraint.

key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2

and I need to swap LNUM so that the result is:

key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0

the SQL needed:

UPDATE DOCDATA
SET LNUM = CASE LNUM
             WHEN 0 THEN 1
             WHEN 1 THEN 2
             WHEN 2 THEN 0
           END
WHERE (pID = 10)
  AND (LNUM IN (0, 1, 2))

A: There is another approach that works with SQL Server: use a temp table and join to it in your UPDATE statement. The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation. Pseudo-code:

-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')

-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table

-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')

-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)

Thanks to Rich H for this technique. - Mark

A: Assuming you know the PK of the two rows you want to update... This works in SQL Server; I can't speak for other products.
SQL is (supposed to be) atomic at the statement level:

CREATE TABLE testing
(
    cola int NOT NULL,
    colb CHAR(1) NOT NULL
);

CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);

INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');

SELECT * FROM testing;

UPDATE testing
SET colb = CASE cola
             WHEN 1 THEN 'a'
             WHEN 2 THEN 'b'
           END
WHERE cola IN (1, 2);

SELECT * FROM testing;

so you will go from:

cola colb
------------
1    b
2    a

to:

cola colb
------------
1    a
2    b

A: The magic word is DEFERRABLE here:

DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);

INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;

-- This works, because there is no constraint
UPDATE ztable t1
SET payload = t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
  AND t2.id IN (2,3)
  AND t1.id <> t2.id
;
SELECT * FROM ztable;

ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
    DEFERRABLE INITIALLY DEFERRED
;

-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload = t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
  AND t2.id IN (2,3)
  AND t1.id <> t2.id
;
SELECT * FROM ztable;

RESULT:

DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
 id | payload
----+---------
  1 | one
  2 | two
  3 | three
(3 rows)

UPDATE 2
 id | payload
----+---------
  1 | one
  2 | three
  3 | two
(3 rows)

NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
 id | payload
----+---------
  1 | one
  2 | two
  3 | three
(3 rows)

A: I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update. An alternative (since you asked) to updating the unique index values with different values would be to update all of the other values in the rows to those of the other row. Doing this means that you could leave the unique index values alone, and in the end, you end up with the data that you want. Be careful, though, in case some other table references this table in a foreign key relationship, that all of the relationships in the DB remain intact.

A: I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user order on my rows. The user will shuffle rows around in a web app, then submit the changes. I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, effectively stealing the value off them. I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger. I'll post back and let you know my mileage.

A: In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (I just tested this because I was curious.) However, you'd have to use a temp table/variable to supply MERGE with the necessary rows.

A: For Oracle there is an option, DEFERRED, but you have to add it to your constraint.

SET CONSTRAINT emp_no_fk_par DEFERRED;

To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement. Source

A: I usually think of a value that absolutely no row in my table could hold in the indexed column. Usually - for unique column values - it's really easy.
For example, for values of the column 'position' (information about the order of several elements) it's 0. Then you can copy value A to a variable, update it with value B, and then set value B from your variable. Two queries; I know no better solution, though.

A: I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of. If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful. But in short: there is no other solution than the ones you provided.

A: Oracle has deferred integrity checking, which solves exactly this, but it is not available in either SQL Server or MySQL.

A: 1) switch the ids for name:

id  student
1   Abbot
2   Doris
3   Emerson
4   Green
5   Jeames

For the sample input, the output is:

id  student
1   Doris
2   Abbot
3   Green
4   Emerson
5   Jeames

"in case n number of rows how will manage......"
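For completeness, here is a minimal T-SQL sketch of the placeholder approach described above, wrapped in a transaction; the items table, the position column, and the placeholder value 0 are all hypothetical:

-- Swap the unique 'position' values of rows 1 and 2 using a placeholder
-- value (0) that no real row is ever allowed to hold.
DECLARE @posA INT, @posB INT;

SELECT @posA = position FROM items WHERE id = 1;
SELECT @posB = position FROM items WHERE id = 2;

BEGIN TRANSACTION;
UPDATE items SET position = 0     WHERE id = 1;  -- park row 1 on the placeholder
UPDATE items SET position = @posA WHERE id = 2;  -- row 2 takes row 1's old value
UPDATE items SET position = @posB WHERE id = 1;  -- row 1 takes row 2's old value
COMMIT;

The transaction matters: if the middle step fails, a rollback keeps the placeholder from leaking into your data.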
{ "language": "en", "url": "https://stackoverflow.com/questions/644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: Automatically update version number. I would like the version property of my application to be incremented for each build, but I'm not sure how to enable this functionality in Visual Studio (2005/2008). I have tried to specify the AssemblyVersion as 1.0.*, but it doesn't get me exactly what I want. I'm also using a settings file, and in earlier attempts, when the assembly version changed, my settings got reset to the default, since the application looked for the settings file in another directory. I would like to be able to display a version number in the form of 1.1.38, so when a user finds a problem I can log the version they are using, as well as tell them to upgrade if they have an old release. A short explanation of how the versioning works would also be appreciated. When do the build and revision numbers get incremented?

A: With the "built in" stuff you can't, as using 1.0.* or 1.0.0.* will replace the revision and build numbers with a coded date/timestamp, which is usually also a good way. For more info, see the Assembly Linker documentation under the /v flag. As for automatically incrementing numbers, use the AssemblyInfo Task: AssemblyInfo Task. This can be configured to automatically increment the build number. There are 2 gotchas:
* Each of the 4 numbers in the version string is limited to 65535. This is a Windows limitation and unlikely to get fixed. (See: Why are build numbers limited to 65535?)
* Using it with Subversion requires a small change. (See: Using MSBuild to generate assembly version info at build time, including the Subversion fix.)

Retrieving the version number is then quite easy:

Version v = Assembly.GetExecutingAssembly().GetName().Version;
string About = string.Format(CultureInfo.InvariantCulture, @"YourApp Version {0}.{1}.{2} (r{3})", v.Major, v.Minor, v.Build, v.Revision);

And, to clarify: in .NET, or at least in C#, the build is actually the THIRD number, not the fourth one as some people (for example Delphi developers, who are used to Major.Minor.Release.Build) might expect. In .NET, it's Major.Minor.Build.Revision.

A: [Visual Studio 2017, .csproj properties] To automatically update your PackageVersion/Version/AssemblyVersion property (or any other property), first create a new Microsoft.Build.Utilities.Task class that will get your current build number and send back the updated number (I recommend creating a separate project just for that class). I manually update the major.minor numbers, but let MSBuild automatically update the build number (1.1.1, 1.1.2, 1.1.3, etc. :)

using Microsoft.Build.Framework;
using System;
using System.Collections.Generic;
using System.Text;

public class RefreshVersion : Microsoft.Build.Utilities.Task
{
    [Output]
    public string NewVersionString { get; set; }

    public string CurrentVersionString { get; set; }

    public override bool Execute()
    {
        Version currentVersion = new Version(CurrentVersionString ?? "1.0.0");
        DateTime d = DateTime.Now;
        NewVersionString = new Version(currentVersion.Major, currentVersion.Minor,
            currentVersion.Build + 1).ToString();
        return true;
    }
}

Then call your recently created task in the MSBuild process by adding the following code to your .csproj file:

<Project Sdk="Microsoft.NET.Sdk">
...
<UsingTask TaskName="RefreshVersion" AssemblyFile="$(MSBuildThisFileFullPath)\..\..\<dll path>\BuildTasks.dll" />
<Target Name="RefreshVersionBuildTask" BeforeTargets="Pack" Condition="'$(Configuration)|$(Platform)'=='Release|AnyCPU'">
    <RefreshVersion CurrentVersionString="$(PackageVersion)">
        <Output TaskParameter="NewVersionString" PropertyName="NewVersionString" />
    </RefreshVersion>
    <Message Text="Updating package version number to $(NewVersionString)..." Importance="high" />
    <XmlPoke XmlInputPath="$(MSBuildProjectDirectory)\mustache.website.sdk.dotNET.csproj" Query="/Project/PropertyGroup/PackageVersion" Value="$(NewVersionString)" />
</Target>
...
<PropertyGroup>
  ..
  <PackageVersion>1.1.4</PackageVersion>
  ..

When picking the Visual Studio Pack project option (just change to BeforeTargets="Build" to execute the task before Build), the RefreshVersion code will be triggered to calculate the new version number, and the XmlPoke task will update your .csproj property accordingly (yes, it will modify the file). When working with NuGet libraries, I also send the package to the NuGet repository by adding the following build task to the previous example:

<Message Text="Uploading package to NuGet..." Importance="high" />
<Exec WorkingDirectory="$(MSBuildProjectDirectory)\bin\release" Command="c:\nuget\nuget push *.nupkg -Source https://www.nuget.org/api/v2/package" IgnoreExitCode="true" />

c:\nuget\nuget is where I have the NuGet client (remember to save your NuGet API key by calling nuget SetApiKey <my-api-key> or to include the key in the NuGet push call). Just in case it helps someone ^_^.

A: What source control system are you using? Almost all of them have some form of $Id$ tag that gets expanded when the file is checked in. I usually use some form of hackery to display this as the version number. The other alternative is to use the date as the build number: 080803-1448

A: VS.NET defaults the assembly version to 1.0.* and uses the following logic when auto-incrementing: it sets the build part to the number of days since January 1st, 2000, and sets the revision part to the number of seconds since midnight, local time, divided by two. See this MSDN article. The assembly version is located in an assemblyinfo.vb or assemblyinfo.cs file.
From the file:

' Version information for an assembly consists of the following four values:
'
'      Major Version
'      Minor Version
'      Build Number
'      Revision
'
' You can specify all the values or you can default the Build and Revision Numbers
' by using the '*' as shown below:
' <Assembly: AssemblyVersion("1.0.*")>

<Assembly: AssemblyVersion("1.0.0.0")>
<Assembly: AssemblyFileVersion("1.0.0.0")>

A: I have found that it works well to simply display the date of the last build, using the following wherever a product version is needed:

System.IO.File.GetLastWriteTime(System.Reflection.Assembly.GetExecutingAssembly().Location).ToString("yyyy.MM.dd.HH.mm.ss")

Rather than attempting to get the version from something like the following:

System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly();
object[] attributes = assembly.GetCustomAttributes(typeof(System.Reflection.AssemblyFileVersionAttribute), false);
object attribute = null;

if (attributes.Length > 0)
{
    attribute = attributes[0] as System.Reflection.AssemblyFileVersionAttribute;
}

A: Some time ago I wrote a quick and dirty exe that would update the version numbers in an assemblyinfo.{cs/vb} - I have also used rxfind.exe (a simple and powerful regex-based search-and-replace tool) to do the update from a command line as part of the build process. A couple of other helpful hints:
* Separate the assemblyinfo into product parts (company name, version, etc.) and assembly-specific parts (assembly name, etc.). See here.
* Also - I use Subversion, so I found it helpful to set the build number to the Subversion revision number, thereby making it really easy to always get back to the codebase that generated the assembly (e.g. 1.4.100.1502 was built from revision 1502).

A: If you want an auto-incrementing number that updates each time a compilation is done, you can use VersionUpdater from a pre-build event. Your pre-build event can check the build configuration if you prefer, so that the version number will only increment for a Release build (for example).

A: Here is a hand-cranked alternative option: this is a quick-and-dirty PowerShell snippet I wrote that gets called from a pre-build step on our Jenkins build system. It sets the last digit of the AssemblyVersion and AssemblyFileVersion to the value of the BUILD_NUMBER environment variable, which is automatically set by the build system.

if (Test-Path env:BUILD_NUMBER) {
    Write-Host "Updating AssemblyVersion to $env:BUILD_NUMBER"

    # Get the AssemblyInfo.cs
    $assemblyInfo = Get-Content -Path .\MyShinyApplication\Properties\AssemblyInfo.cs

    # Replace last digit of AssemblyVersion
    $assemblyInfo = $assemblyInfo -replace "^\[assembly: AssemblyVersion\(`"([0-9]+)\.([0-9]+)\.([0-9]+)\.[0-9]+`"\)]", ('[assembly: AssemblyVersion("$1.$2.$3.' + $env:BUILD_NUMBER + '")]')
    Write-Host ($assemblyInfo -match '^\[assembly: AssemblyVersion')

    # Replace last digit of AssemblyFileVersion
    $assemblyInfo = $assemblyInfo -replace "^\[assembly: AssemblyFileVersion\(`"([0-9]+)\.([0-9]+)\.([0-9]+)\.[0-9]+`"\)]", ('[assembly: AssemblyFileVersion("$1.$2.$3.' + $env:BUILD_NUMBER + '")]')
    Write-Host ($assemblyInfo -match '^\[assembly: AssemblyFileVersion')

    $assemblyInfo | Set-Content -Path .\MyShinyApplication\Properties\AssemblyInfo.cs -Encoding UTF8
} else {
    Write-Warning "BUILD_NUMBER is not set."
}
{ "language": "en", "url": "https://stackoverflow.com/questions/650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Checklist for IIS 6/ASP.NET Windows Authentication? I've been having trouble getting my ASP.NET application to automatically log users into the intranet site I'm building. No matter the googling or the experimentation I applied, there is always a login box displayed by IE7. I've got Windows authentication mode set in the Web.config, disabled anonymous access, and configured the correct default domain in IIS, but it's still asking the user to log in and, more annoyingly, the user is required to provide the domain too (DOMAIN\auser), which is causing problems with non-technical visitors. Thank Zeus for password-remembering functionality. I'm not the network administrator, so it's possible that something about Active Directory is set up incorrectly, or it could just be me missing something very simple. Please note that I don't want to impersonate the user; I just need to know that the IPrincipal.Name property matches that of a valid record in my user database, hence authenticating the user to my application. To this end, it would be very useful to have a checklist of all configuration requirements for AD, ASP.NET and IIS to work together in this manner, as a reference for debugging and hopefully reducing some user friction.

A: It sounds like you've covered all the server-side bases; maybe it's a client issue? I assume your users have integrated authentication enabled in IE7? (Tools -> Internet Options -> Advanced -> Security). This is enabled by default. Also, is your site correctly recognized by IE7 as being in the Local Intranet zone? The IE7 default is to allow automatic logon only in that zone, so users would be prompted if IE thinks your site is on the internet. I believe using a hostname with a dot in it causes IE to place the site into the Internet zone.

A:
* Open the Active Directory Users and Computers MMC snap-in.
* Expand the Computers section in the tree view (left side).
* Check that the computer is registered in your domain. Also, you have to log in with a domain account on that computer; otherwise that authentication box will be shown.

A: In IIS, enable anonymous access and allow the web.config to handle user authentication.
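For reference, a minimal web.config sketch for this setup (my own illustration of the standard settings, not taken from the answers above): Windows authentication with anonymous users denied, which is what makes ASP.NET populate IPrincipal.Name with DOMAIN\user:

<!-- Minimal web.config sketch: Windows authentication, anonymous users denied.
     With this in place, HttpContext.Current.User.Identity.Name carries the
     DOMAIN\user value the question wants to match against its user database. -->
<configuration>
  <system.web>
    <authentication mode="Windows" />
    <authorization>
      <deny users="?" />  <!-- "?" denotes anonymous (unauthenticated) users -->
    </authorization>
  </system.web>
</configuration>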
{ "language": "en", "url": "https://stackoverflow.com/questions/651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Encrypting Passwords. What is the fastest, yet secure, way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable? In other words, if I later migrate my website to a different server, will my passwords continue to work? The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server.

A: It should be pointed out that you don't want to encrypt the password, you want to hash it. Encrypted passwords can be decrypted, letting someone see the password. Hashing is a one-way operation, so the user's original password is (cryptographically) gone. As for which algorithm you should choose - use the currently accepted standard one:
* SHA-256
And when you hash the user's password, be sure to also hash in some other junk with it. e.g.:
* password: password1
* salt: PasswordSaltDesignedForThisQuestion
Append the salt to the user's password:

String s = HashStringSHA256("password1PasswordSaltDesignedForThisQuestion");

A: Whatever you do, don't write your own encryption algorithm. Doing this will almost guarantee (unless you're a cryptographer) that there will be a flaw in the algorithm that will make it trivial to crack.

A: If you are choosing an encryption method for your login system, then speed is not your friend. Jeff had a to-and-fro with Thomas Ptacek about passwords, and the conclusion was that you should use the slowest, most secure encryption method you can afford to. From Thomas Ptacek's blog: Speed is exactly what you don't want in a password hash function. Modern password schemes are attacked with incremental password crackers. Incremental crackers don't precalculate all possible cracked passwords. They consider each password hash individually, and they feed their dictionary through the password hash function the same way your PHP login page would. Rainbow table crackers like Ophcrack use space to attack passwords; incremental crackers like John the Ripper, Crack, and LC5 work with time: statistics and compute. The password attack game is scored in time taken to crack password X. With rainbow tables, that time depends on how big your table needs to be and how fast you can search it. With incremental crackers, the time depends on how fast you can make the password hash function run. The better you can optimize your password hash function, the faster your password hash function gets, the weaker your scheme is. MD5 and SHA1, even conventional block ciphers like DES, are designed to be fast. MD5, SHA1, and DES are weak password hashes. On modern CPUs, raw crypto building blocks like DES and MD5 can be bitsliced, vectorized, and parallelized to make password searches lightning fast. Game-over FPGA implementations cost only hundreds of dollars.

A: I'm with Peter. Developers don't seem to understand passwords. We all pick (and I'm guilty of this too) MD5 or SHA1 because they are fast. Thinking about it ('cuz someone recently pointed it out to me), that doesn't make any sense. We should be picking a hashing algorithm that's stupidly slow. I mean, on the scale of things, a busy site will hash passwords, what, every half minute? Who cares if that takes 0.8 seconds vs 0.03 seconds server-wise? But that extra slowness is huge in preventing all types of common brute-forcish attacks. From my reading, bcrypt is specifically designed for secure password hashing. It's based on Blowfish, and there are many implementations.
For PHP, check out PHP Pass. For anyone doing .NET, check out BCrypt.NET.

A: I'm not necessarily looking for the fastest, but a nice balance. Some of the servers that this code is being developed for are fairly slow; the script that hashes and stores the password is taking 5-6 seconds to run, and I've narrowed it down to the hashing (if I comment the hashing out, it runs in 1-2 seconds). It doesn't have to be the MOST secure. I'm not coding for a bank (right now), but I certainly WILL NOT store the passwords as plain text.

A: Consider using bcrypt; it is used in many modern frameworks like Laravel.

A: password_hash ( string $password , int $algo [, array $options ] ) (PHP 5 >= 5.5.0, PHP 7). password_hash() creates a new password hash using a strong one-way hashing algorithm. password_hash() is compatible with crypt(); therefore, password hashes created by crypt() can be used with password_hash().

A: Use this function when inserting into the database:

password_hash($password, PASSWORD_DEFAULT);

And when selecting from the database, you compare the password being checked with the stored one using:

if (password_verify($password, $databasePassword)) {

} else {
    echo "password not correct";
}

This will hash the password in a secure format.
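Pulling those PHP pieces together, a minimal end-to-end sketch using the built-in password_hash()/password_verify() (the $pdo connection and the users table are hypothetical). Because the produced hash string embeds the algorithm, cost, and salt, it also stays valid when you migrate servers, which answers the portability part of the question:

<?php
// Registration: hash the password before storing it (PHP >= 5.5).
// PASSWORD_DEFAULT is currently bcrypt and may change to a stronger
// algorithm in future PHP versions; stored hashes keep verifying
// because each hash string describes how it was produced.
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
$stmt = $pdo->prepare('INSERT INTO users (email, password_hash) VALUES (?, ?)');
$stmt->execute([$_POST['email'], $hash]);

// Login: fetch the stored hash and let password_verify() compare.
$stmt = $pdo->prepare('SELECT password_hash FROM users WHERE email = ?');
$stmt->execute([$_POST['email']]);
$row = $stmt->fetch();

if ($row && password_verify($_POST['password'], $row['password_hash'])) {
    echo 'logged in';
} else {
    echo 'password not correct';
}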
{ "language": "en", "url": "https://stackoverflow.com/questions/657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Using 'in' to match an attribute of Python objects in an array. I don't remember whether I was dreaming or not, but I seem to recall there being a function which allowed something like:

foo in iter_attr(array of python objects, attribute name)

I've looked over the docs, but this kind of thing doesn't fall under any obvious listed headers.

A: No, you were not dreaming. Python has a pretty excellent list comprehension system that lets you manipulate lists pretty elegantly, and depending on exactly what you want to accomplish, this can be done in a couple of ways. In essence, what you're doing is saying "for item in list if criteria.matches", and from that you can just iterate through the results or dump the results into a new list. I'm going to crib an example from Dive Into Python here, because it's pretty elegant and they're smarter than I am. Here they're getting a list of files in a directory, then filtering the list for all files that match a regular expression criterion:

files = os.listdir(path)
test = re.compile("test\.py$", re.IGNORECASE)
files = [f for f in files if test.search(f)]

You could do this without regular expressions, for your example, for anything where your expression at the end returns true for a match. There are other options, like using the filter() function, but if I were going to choose, I'd go with this. Eric Sipple

A: The function you are thinking of is probably operator.attrgetter. For example, to get a list that contains the value of each object's "id" attribute:

import operator
ids = map(operator.attrgetter("id"), bar)

If you want to check whether the list contains an object with id == 12, then a neat and efficient (i.e. it doesn't iterate the whole list unnecessarily) way to do it is:

any(obj.id == 12 for obj in bar)

If you want to use 'in' with attrgetter, while still retaining lazy iteration of the list:

import operator, itertools
foo = 12
foo in itertools.imap(operator.attrgetter("id"), bar)

A: What I was thinking of can be achieved using list comprehensions, but I thought that there was a function that did this in a slightly neater way. I.e. 'bar' is a list of objects, all of which have the attribute 'id'. The mythical functional way:

foo = 12
foo in iter_attr(bar, 'id')

The list comprehension way:

foo = 12
foo in [obj.id for obj in bar]

In retrospect, the list comprehension way is pretty neat anyway.

A: Using a list comprehension would build a temporary list, which could eat all your memory if the sequence being searched is large. Even if the sequence is not large, building the list means iterating over the whole of the sequence before in can start its search. The temporary list can be avoided by using a generator expression:

foo = 12
foo in (obj.id for obj in bar)

Now, as long as obj.id == 12 near the start of bar, the search will be fast, even if bar is infinitely long. As @Matt suggested, it's a good idea to use hasattr if any of the objects in bar can be missing an id attribute:

foo = 12
foo in (obj.id for obj in bar if hasattr(obj, 'id'))

A: If you plan on searching anything of remotely decent size, your best bet is going to be to use a dictionary or a set. Otherwise, you basically have to iterate through every element of the iterator until you get to the one you want. If this isn't necessarily performance-sensitive code, then the list comprehension way should work. But note that it is fairly inefficient, because it goes over every element of the iterator and then goes BACK over it again until it finds what it wants.
Remember, Python has one of the most efficient hashing algorithms around. Use it to your advantage.

A: Are you looking to get a list of objects that have a certain attribute? If so, a list comprehension is the right way to do this:

result = [obj for obj in listOfObjs if hasattr(obj, 'attributeName')]

A: You could always write one yourself:

def iterattr(iterator, attributename):
    for obj in iterator:
        yield getattr(obj, attributename)

It will work with anything that iterates, be it a tuple, list, or whatever. I love Python; it makes stuff like this very simple and no more of a hassle than necessary, and in use stuff like this is hugely elegant.

A: I think:

#!/bin/python
bar in dict(Foo)

is what you are thinking of. When trying to see if a certain key exists within a dictionary in Python (Python's version of a hash table), there are two ways to check. The first is the has_key() method attached to the dictionary, and the second is the example given above. It will return a boolean value. That should answer your question. And now a little off-topic, to tie this in to the list comprehension answer previously given (for a bit more clarity). List comprehensions construct a list from a basic for loop with modifiers. As an example (to clarify slightly), a way to use the in dict language construct in a list comprehension: say you have a two-dimensional dictionary foo and you only want the second-dimension dictionaries which contain the key bar. A relatively straightforward way to do so would be to use a list comprehension with a conditional, as follows:

#!/bin/python
baz = dict([(key, value) for key, value in foo.items() if bar in value])

Note the if bar in value at the end of the statement; this is a modifying clause which tells the list comprehension to only keep those key-value pairs which meet the conditional. In this case baz is a new dictionary which contains only the dictionaries from foo which contain bar. (Hopefully I didn't miss anything in that code example... you may have to take a look at the list comprehension documentation found in the docs.python.org tutorials and at secnetix.de; both sites are good references if you have questions in the future.)
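To make the dictionary/set suggestion concrete, a minimal sketch (my own illustration): build a set of the attribute values once, after which every membership test is O(1) on average:

# Build the set of ids once (O(n)); each later 'in' test is O(1) on average.
ids = set(obj.id for obj in bar)

if 12 in ids:
    print("found it")

This only pays off when you test membership repeatedly against the same collection; for a one-off check, the generator expression shown earlier avoids building anything.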
{ "language": "en", "url": "https://stackoverflow.com/questions/683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: Connect PHP to IBM i (AS/400). I've got an upcoming project wherein I will need to connect our website (PHP5/Apache 1.3/OpenBSD 4.1) to our back-end system running on an iSeries with OS400 V5R3, so that I can access some tables stored there. I've done some checking around but am running into some roadblocks. From what I've seen, the DB2 extensions and DB2 software from IBM only run under Linux. I've tried compiling the extensions with all the software from IBM, and even tried their precompiled ibm_db2 extension, with no luck. IBM only supports Linux, so I turned on the Linux emulation in the kernel, but that didn't seem to help anything. If anyone has managed to get everything to run natively under OpenBSD, that would be great, but what I think I may have to do is set up a second server running CentOS with DB2 installed (most likely via ZendCore for IBM, since it seems to do all this for me) and the driver, so that I can set up a small transaction server that I can post against and get a JSON representation of the DB2 data that I need. Does the second option seem like overkill, or does anyone else have any better ideas?

A: Rather than set up a 2nd box, why don't you look into the PHP Connector for iSeries? My mainframe guys said it was very easy to set up on our iSeries here. We wrote a simple server in PHP that loads data models from DB2 data, serializes them, and returns them to the caller. This approach means that only another PHP app can consume the service, but it's just so much quicker on both ends to just serialize the object and send it down the pipe. Here is a PDF from IBM on the subject: http://i-seriesusergroup.org/wp-content/uploads/2006/09/PHP%20for%20i5OS%20NESDND.pdf

A: Looks like a web service is going to be the answer for me. On a production box I'd rather not have to go through compiling and maintaining my own special installation of PHP, since ODBC support needs to be compiled in, according to the PHP documentation.

A: To second @John Downey, I've gotten connectivity to work with PHP on an AS/400 with unixODBC. Check your phpinfo() to see if unixODBC is available in it. I didn't have to compile it in on SLES 10.

A: Have you looked at connecting to the server using unixODBC? If I remember correctly, it has support for IBM DB2 and compiles on OpenBSD. Check out http://www.php.net/odbc for more information regarding the PHP side. If you can't get that to work, setting up a web service on a Linux server may be all you can do.

A: A web service is almost certainly the way to go. I'm sure you've already thought of this, but since you're doing PHP on both sides, you can shortcut things a little bit by using serialize() to build your response data instead of building a proper XML document. It's less flexible over the long run, but it will probably get you up and running more quickly.

A: Indeed, a web service seems like a great way to solve the problem. One way to avoid having a completely separate OS for it would be to write the web service in Java on top of the AS400 tools for Java (which are quite nice, btw). That should at least let you run your service layer on the OpenBSD box as well.

A: You can connect directly using a standard ODBC driver as well. The IBM version usually gives you more features, like being able to call programs and things like that. If you only need SQL and stored procedures, ODBC should work.

A: Why not use PDO from PHP?
I have to guess here, since I could not find a public list of all ports available for OpenBSD, but since there is a port for FreeBSD, NetBSD etc., maybe you get lucky as well. (I guess that even though OpenBSD links to FreeBSD's porter's handbook, FreshPorts is not applicable for your system?) If PDO is not available, then, since I am hoping you use ports, note that according to the following link there is php5-ODBC available:
* http://www.openbsd.org/pkg-stable40.html
So assuming you manage your system through ports, there are your pointers. Hope that helps!
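If the unixODBC route works out, a minimal PHP sketch of the transaction-server idea might look like this (the DSN, credentials, library and table names are all hypothetical):

<?php
// Minimal ODBC sketch; "AS400DSN", the credentials and MYLIB.MYTABLE are
// placeholders for whatever your unixODBC configuration actually defines.
$conn = odbc_connect('AS400DSN', 'USER', 'PASSWORD');
if (!$conn) {
    die('Unable to connect to the iSeries via ODBC');
}

$result = odbc_exec($conn, 'SELECT * FROM MYLIB.MYTABLE');
$rows = array();
while ($row = odbc_fetch_array($result)) {
    $rows[] = $row;
}
odbc_close($conn);

// Hand the rows back to the PHP caller, per the serialize() suggestion above;
// json_encode() would serve non-PHP consumers instead.
echo serialize($rows);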
{ "language": "en", "url": "https://stackoverflow.com/questions/696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Embedded Database for .net that can run off a network. I was (and still am) looking for an embedded database to be used in a .NET (C#) application. The caveat: the application (or at least the database) is stored on a network drive, but only used by 1 user at a time. Now, my first idea was SQL Server Compact Edition. That is really nicely integrated, but it cannot run off a network. Firebird seems to have the same issue, but the .NET integration seems to be not really first-class and is largely undocumented. Blackfish SQL looks interesting, but there is no trial of the .NET version. Pricing is also OK. Any other suggestions for something that works well with .NET and runs off a network without the need to actually install server software?

A: It sounds like ADO/Access is perfect for your needs. It's baked into the MS stack, well seasoned, and multi-user. You can programmatically create a DB like so:

Dim catalog As New ADOX.Catalog
catalog.Create("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\server\path\to\db.mdb")

You can then use standard ADO.NET methods to interact with the database.

A: You can use Firebird embedded; it's just a DLL that you will need to ship with your app. About things being undocumented, that's not really true: the Firebird .NET driver implements the ADO.NET interfaces, so if you know ADO.NET you can work with Firebird. Basically, instead of SqlConnection you will use FbConnection, and so on, but my advice is to write a data access layer and use just interfaces in your code, something like this:

using FirebirdSql.Data.FirebirdClient;

public static IDbConnection MyConnection()
{
    FbConnection cn = new FbConnection("...");
    return cn;
}

This example is very simple, but you will not need much more than that. We use Firebird for our whole application without any problems; you should at least try it out.

A: Check out VistaDB. They have a very good product; the server version (3.4) is in beta and is very close to release.

A: A little late to the post here... VistaDB is already mentioned, but I wanted to point out that VistaDB is 100% managed (since your post was tagged .net). It can run from a shared network drive and is 1 MB, xcopy-deployed. Since you mention SQL CE, we also support T-SQL syntax and datatypes (in fact more than SQL CE) and have updateable views, T-SQL procs and other things missing in SQL CE.

A: Why not use SQL Server 2005 Express Edition? It really depends on what you mean by "embedded", but you can redistribute SQL Server 2005 Express with your applications and the user never has to know it's there. Embedding SQL Server Express in Applications. Embedding SQL Server Express into Custom Applications.

A: I'm puzzled. You're asking for an embedded database, where the database itself is stored on the server. That translates to storing the data file on a network share. You then say that SQL Compact Edition won't work... except that if one looks at this document: Word Document: Choosing Between SQL Server 2005 Compact Edition and SQL Server 2005 Express Edition, on page 8 you have a nice big green tick next to "Data file storage on a network share". So it seems to me that your first thought was the right one.

A: SQLite came to my mind while reading your question, and I'm quite sure that it's possible to access it from a network drive if you keep yourself to the constraint of 1 user at a time. SQLite on .NET - Get up and running in 3 minutes

A: There's also Valentina. I came across this product when I was working on some REALbasic project. The RB version is very good.
A: Have you considered an OODB? Of the various open source alternatives I recommend db4o (sorry for the self-promotion :)) which can run either embedded or in client/server mode.
Best, Adriano

A: I'd recommend Advantage Database Server (www.advantagedatabase.com). It's a mature embedded DB with great support and accessible from many development languages in addition to .NET. The "local" version is free, runs within your application in the form of a DLL, requires no installation on the server/network share, and supports all major DB features. You can store the DB and/or application files all on the network; it doesn't care where the data is.
Disclaimer: I am an engineer in the ADS R&D group. I promise, it rocks :)

A: This question is now ancient, and a lot has changed. For my specific purposes, LiteDB is the option of choice. It's open source and has a GitHub repository. Apart from that, SQLite is basically the industry standard for embedded databases. There are attempts to port the code to .NET, but the prime use case involves a native library (e.g., the sqlite NuGet package) and/or a .NET P/Invoke wrapper like Microsoft.Data.SQLite.
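To make that SQLite route concrete, here is a minimal sketch using the Microsoft.Data.Sqlite wrapper mentioned above. The UNC path and table are hypothetical, and this is only safe under the question's single-user-at-a-time constraint:

using Microsoft.Data.Sqlite;

class Program
{
    static void Main()
    {
        // Hypothetical database file on a network share; acceptable here only
        // because a single user opens the file at a time.
        var cs = @"Data Source=\\server\share\app.db";
        using (var connection = new SqliteConnection(cs))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "CREATE TABLE IF NOT EXISTS Customers (Id INTEGER PRIMARY KEY, Name TEXT)";
                command.ExecuteNonQuery();
            }
        }
    }
}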
{ "language": "en", "url": "https://stackoverflow.com/questions/705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: .NET testing framework advice
I'm looking to introduce a unit testing framework into the mix at my job. We're using Visual Studio 2005 (though we may be moving to 2008 within the next six months) and work primarily in C#. If the framework has some kind of IDE integration that would be best, but I'm open to frameworks that don't have integration but are still relatively simple to get set up. I'm going to get resistance to it one way or another, so if I can make sure what I'm pushing isn't a pain in the neck, that would help my case.
The obvious choice from the research I've done so far points to NUnit, but I'd like to get the impressions of someone who's actually used it before recommending it to my team.
Has anyone out there used NUnit? If so, are there any pitfalls or limitations of which I should be aware? Are there other good options out there? If so, and if you've used both NUnit and that, I'd greatly appreciate an idea of the strengths and weaknesses of them.

A: Visual Studio 2008 has a built-in test project type that works in a similar way to NUnit, but obviously has much tighter integration with Visual Studio (it can run on every build and shows the results in a similar way to the conversion results page when upgrading solution files). However, it is not as mature as NUnit since it's pretty new, and I'm not sure how it handles mocking. It would be worth looking into when your team moves to Visual Studio 2008.

A: The built-in unit testing in Visual Studio 2008 is all right, but it's difficult to integrate with CruiseControl.NET, certainly a lot harder than normal NUnit. So go with NUnit if you plan to have nice automated tests.

A: We've been using xUnit.net. It seems to combine all the best of NUnit, MbUnit, and MSTest.

A: When I started unit testing, I started with NUnit as it is simple to set up and use. Currently I am using the built-in test runner that comes with ReSharper. That way, I can easily flip between code and test results.
Incidentally, NUnit detects when you have compiled your code, so you do not need to do any refresh in NUnit. ReSharper automatically does a build when you choose to run a specific test.

A: Try also the Pex tool. It's Microsoft's own, probably soon to be integrated into VSTS. It supports NUnit, MbUnit and xUnit.net.
I also use a small console application for testing one class or a small library. You could copy-paste the code from here.

A: VSTT 2010 (Visual Studio Team System Test) should be a good bet if you are looking for functional test automation. It supports web services testing, UI testing, BizTalk testing and data-driven testing. Please look at VSTT.

A: I think NUnit is your best bet. With TestDriven.NET, you get great integration within Visual Studio. (ReSharper also has a unit test runner if you're using it.) NUnit is simple to use and follows an established paradigm. You'll also find plenty of projects, tutorials, and guides using it, which always helps.
Your other main choice is probably MbUnit, which is more and more positioning itself as the BDD framework of choice (in conjunction with Gallio).

A: MbUnit is worth a look. It has a set of features comparable to NUnit. It has its own GUI, or can be integrated into Visual Studio if you have ReSharper. I would also recommend Rhino Mocks if you are doing any sort of TDD.

A: I would say MbUnit also. I like being able to run a single test many times just by specifying inputs, with the result shown right above the test function.
That is a poor description of what I mean, so here is a link that shows an example of it.

A: Scott Hanselman had a good podcast about this, entitled "The Past, Present and Future of .NET Unit Testing Frameworks": Hanselminutes #112
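For anyone who has never seen NUnit, a minimal fixture is only a few lines. This sketch uses the classic NUnit 2.x attribute syntax; the Calculator class is a hypothetical stand-in for your code under test:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        // Calculator is hypothetical; replace with the class you are testing.
        var calculator = new Calculator();
        Assert.AreEqual(4, calculator.Add(2, 2));
    }
}

The runner (NUnit GUI, TestDriven.NET, or ReSharper) discovers the [TestFixture] class by reflection, so no registration code is needed.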
{ "language": "en", "url": "https://stackoverflow.com/questions/709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Why doesn't VFP .NET OLEdb provider work in 64 bit Windows?
I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted into SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP .NET OLEdb provider is not found. I researched and everything seems to point out that there is no solution. Any help, please...

A: You'll need to compile with the target CPU set to x86 to force your code to use the 32-bit version of the VFP OLE DB provider.
Microsoft has stated that there are no plans on releasing a 64-bit edition of the Visual FoxPro OLE DB provider. For what it's worth, Microsoft has also stated that VFP 9 is the final version of Visual FoxPro and support will end in 2015. If you need the OLE DB provider for VFP 9, you can get it here.

A: Have you tried changing the target CPU to x86 instead of "Any CPU" in the advanced compiler options? I know that this solves some problems with other OLEDB providers by forcing the use of the 32-bit version.

A: Sybase Anywhere has an OLEDB provider for VFP tables. The page states that the server supports 64-bit Windows; I don't know about the OLEDB provider:
Support 64-bit Windows and Linux Servers In order to further enhance scalability, support for the x86_64 architecture was added to the Advantage Database Servers for Windows and Linux. On computers with an x86_64 processor and a 64-bit Operating System the Advantage Database Server will now be able to use memory in excess of 4GB. The extra memory will allow more users to access the server concurrently and increase the amount of information the server can cache when processing queries.
I didn't try it myself, but some people on the VFP newsgroups report that it works OK. Link to the Advantage Server / VFP Page
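To make the x86 fix from the first two answers concrete: in Visual Studio 2005 it lives under Project Properties, Compile, Advanced Compile Options, Target CPU. In the project file it corresponds to the standard MSBuild property shown below (the surrounding project file is yours):

<!-- In the service's .vbproj, inside the main PropertyGroup -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>

If you cannot rebuild, the corflags SDK utility of that era could flip an existing AnyCPU binary to run as 32-bit (MyService.exe stands in for your service binary):

corflags MyService.exe /32BIT+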
{ "language": "en", "url": "https://stackoverflow.com/questions/717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Class views in Django
A Django view points to a function, which can be a problem if you want to change only a bit of functionality. Yes, I could have a million keyword arguments and even more if statements in the function, but I was thinking more of an object-oriented approach.
For example, I have a page that displays a user. This page is very similar to the page that displays a group, but it's still not so similar that I can just use another data model. A group also has members, etc...
One way would be to point views to class methods and then extend that class. Has anyone tried this approach or has any other idea?

A: If you're simply displaying data from models, why not use the Django generic views? They're designed to let you easily show data from a model without having to write your own view and stuff about mapping URL parameters to views, fetching data, handling edge cases, rendering output, etc.

A: I've created and used my own generic view classes, defining __call__ so an instance of the class is callable. I really like it; while Django's generic views allow some customization through keyword arguments, OO generic views (if their behavior is split into a number of separate methods) can have much more fine-grained customization via subclassing, which lets me repeat myself a lot less. (I get tired of rewriting the same create/update view logic any time I need to tweak something Django's generic views don't quite allow.)
I've posted some code at djangosnippets.org.
The only real downside I see is the proliferation of internal method calls, which may impact performance somewhat. I don't think this is much of a concern; it's rare that Python code execution would be your performance bottleneck in a web app.
UPDATE: Django's own generic views are now class-based.
UPDATE: FWIW, I've changed my opinion on class-based views since this answer was written. After having used them extensively on a couple of projects, I feel they tend to lead to code that is satisfyingly DRY to write, but very hard to read and maintain later, because functionality is spread across so many different places, and subclasses are so dependent on every implementation detail of the superclasses and mixins. I now feel that TemplateResponse and view decorators are a better answer for decomposing view code.

A: You can always create a class, override the __call__ function and then point the URL file to an instance of the class. You can take a look at the FormWizard class to see how this is done.

A: Unless you want to do something a little complex, using the generic views is the way to go. They are far more powerful than their name implies, and if you are just displaying model data, generic views will do the job.

A: I needed to use class-based views, but I wanted to be able to use the full name of the class in my URLconf without always having to instantiate the view class before using it.
What helped me was a surprisingly simple metaclass:

class CallableViewClass(type):
    def __call__(cls, *args, **kwargs):
        if args and isinstance(args[0], HttpRequest):
            instance = super(CallableViewClass, cls).__call__()
            return instance.__call__(*args, **kwargs)
        else:
            instance = super(CallableViewClass, cls).__call__(*args, **kwargs)
            return instance


class View(object):
    __metaclass__ = CallableViewClass

    def __call__(self, request, *args, **kwargs):
        if hasattr(self, request.method):
            handler = getattr(self, request.method)
            if hasattr(handler, '__call__'):
                return handler(request, *args, **kwargs)
        return HttpResponseBadRequest('Method Not Allowed', status=405)

I can now both instantiate view classes and use the instances as view functions, OR I can simply point my URLconf to my class and have the metaclass instantiate (and call) the view class for me. This works by checking the first argument to __call__: if it's an HttpRequest, it must be an actual HTTP request, because it would be nonsense to attempt to instantiate a view class with an HttpRequest instance.

class MyView(View):
    def __init__(self, arg=None):
        self.arg = arg
    def GET(self, request):
        return HttpResponse(self.arg or 'no args provided')

@login_required
class MyOtherView(View):
    def POST(self, request):
        pass

# And all the following work as expected.
urlpatterns = patterns('',
    url(r'^myview1$', 'myapp.views.MyView', name='myview1'),
    url(r'^myview2$', myapp.views.MyView, name='myview2'),
    url(r'^myview3$', myapp.views.MyView('foobar'), name='myview3'),
    url(r'^myotherview$', 'myapp.views.MyOtherView', name='otherview'),
)

(I posted a snippet for this at http://djangosnippets.org/snippets/2041/)

A: Sounds to me like you're trying to combine things that shouldn't be combined. If you need to do different processing in your view depending on whether it's a User or a Group object you're trying to look at, then you should use two different view functions.
On the other hand, there can be common idioms you'd want to extract out of your object_detail type views... perhaps you could use a decorator or just helper functions?
-Dan

A: If you want to share common functionality between pages, I suggest you look at custom tags. They're quite easy to create, and are very powerful.
Also, templates can extend from other templates. This allows you to have a base template to set up the layout of the page and to share this between other templates which fill in the blanks. You can nest templates to any depth, allowing you to specify the layout on separate groups of related pages in one place.

A: Generic views will usually be the way to go, but ultimately you're free to handle URLs however you want. FormWizard does things in a class-based way, as do some apps for RESTful APIs. Basically, with a URL you are given a bunch of variables and a place to provide a callable; what callable you provide is completely up to you. The standard way is to provide a function, but ultimately Django puts no restrictions on what you do. I do agree that a few more examples of how to do this would be good; FormWizard is probably the place to start though.

A: You can use the Django generic views. You can easily achieve the desired functionality through Django generic views.
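Since one answer above notes that Django's own generic views are now class-based, here is a minimal sketch of the user page from the question using the modern built-in DetailView (the template path and URL name are hypothetical):

from django.views.generic import DetailView
from django.contrib.auth.models import User

class UserDetail(DetailView):
    model = User                          # queryset defaults to User.objects.all()
    template_name = 'users/detail.html'  # hypothetical template

# urls.py:
# url(r'^users/(?P<pk>\d+)/$', UserDetail.as_view(), name='user-detail')

A similar GroupDetail subclass could override just the bits that differ, which is exactly the kind of reuse the question is after.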
{ "language": "en", "url": "https://stackoverflow.com/questions/742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Format string to title case
How do I format a string to title case?

A: To capitalise it in, say, C - use the ASCII codes (http://www.asciitable.com/) to find the integer value of the char and subtract 32 from it.
This is a poor solution if you ever plan to accept characters beyond a-z and A-Z.
For instance: ASCII 134: å, ASCII 143: Å. Using arithmetic gets you: ASCII 102: f
Use library calls; don't assume you can use integer arithmetic on your characters to get back something useful. Unicode is tricky.

A: In Silverlight there is no ToTitleCase in the TextInfo class. Here's a simple regex-based way.
Note: Silverlight doesn't have precompiled regexes, but for me this performance loss is not an issue.

public string TitleCase(string str)
{
    return Regex.Replace(str, @"\w+", (m) =>
    {
        string tmp = m.Value;
        return char.ToUpper(tmp[0]) + tmp.Substring(1, tmp.Length - 1).ToLower();
    });
}

A: In what language?
In PHP it is: ucwords()
Example:
$HelloWorld = ucwords('hello world');

A: If the language you are using has a supported method/function then just use that (as in the C# ToTitleCase method).
If it does not, then you will want to do something like the following:
1. Read in the string
2. Take the first word
3. Capitalize the first letter of that word 1
4. Go forward and find the next word
5. Go to 3 if not at the end of the string, otherwise exit
1 To capitalize it in, say, C - use the ASCII codes to find the integer value of the char and subtract 32 from it.
There would need to be much more error checking in the code (ensuring valid letters etc.), and the "Capitalize" function will need to impose some sort of "title-case scheme" on the letters to check for words that do not need to be capitalised ('and', 'but' etc. Here is a good scheme).

A: In Perl:
$string =~ s/(\w+)/\u\L$1/g;
That's even in the FAQ.

A: In Java, you can use the following code.

public String titleCase(String str) {
    char[] chars = str.toCharArray();
    for (int i = 0; i < chars.length; i++) {
        if (i == 0) {
            chars[i] = Character.toUpperCase(chars[i]);
        } else if ((i + 1) < chars.length && chars[i] == ' ') {
            chars[i + 1] = Character.toUpperCase(chars[i + 1]);
        }
    }
    return new String(chars);
}

A: Excel-like PROPER:

public static string ExcelProper(string s) {
    bool upper_needed = true;
    string result = "";
    foreach (char c in s) {
        bool is_letter = Char.IsLetter(c);
        if (is_letter)
            if (upper_needed)
                result += Char.ToUpper(c);
            else
                result += Char.ToLower(c);
        else
            result += c;
        upper_needed = !is_letter;
    }
    return result;
}

A: http://titlecase.com/ has an API

A: Here is a simple static method to do this in C#:

public static string ToTitleCaseInvariant(string targetString)
{
    return System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo.ToTitleCase(targetString);
}

A: I would be wary of automatically upcasing all whitespace-preceded words in scenarios where I would run the risk of attracting the fury of nitpickers.
I would at least consider implementing a dictionary for exception cases like articles and conjunctions. Behold:
"Beauty and the Beast"
And when it comes to proper nouns, the thing gets much uglier.

A: Here's a Perl solution http://daringfireball.net/2008/05/title_case
Here's a Ruby solution http://frankschmitt.org/projects/title-case
Here's a Ruby one-liner solution: http://snippets.dzone.com/posts/show/4702
'some string here'.gsub(/\b\w/){$&.upcase}
What the one-liner is doing is using a regular expression substitution to replace the first character of each word with its uppercase version.
A: I think using the CultureInfo is not always reliable. Here is a simple and handy way to manipulate the string manually:

string sourceName = txtTextBox.Text.ToLower();
string destinationName = char.ToUpper(sourceName[0]).ToString();
for (int i = 0; i < (sourceName.Length - 1); i++)
{
    if (sourceName[i] == ' ')
    {
        destinationName += char.ToUpper(sourceName[i + 1]);
    }
    else
    {
        destinationName += sourceName[i + 1];
    }
}
txtTextBox.Text = destinationName;

A: There is a built-in formula PROPER(n) in Excel. I was quite pleased to see I didn't have to write it myself!

A: Here's an implementation in Python: https://launchpad.net/titlecase.py
And a port of this implementation that I've just done in C++: http://codepad.org/RrfcsZzO

A: Here is a simple example of how to do it:

public static string ToTitleCaseInvariant(string str)
{
    return System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo.ToTitleCase(str);
}

A: In C#:

using System.Globalization;
using System.Threading;

protected void Page_Load(object sender, EventArgs e)
{
    CultureInfo cultureInfo = Thread.CurrentThread.CurrentCulture;
    TextInfo textInfo = cultureInfo.TextInfo;
    Response.Write(textInfo.ToTitleCase("WelcometoHome<br />"));
    Response.Write(textInfo.ToTitleCase("Welcome to Home"));
    Response.Write(textInfo.ToTitleCase("Welcome@to$home<br/>").Replace("@", "").Replace("$", ""));
}

A: In C# you can simply use
CultureInfo.InvariantCulture.TextInfo.ToTitleCase(str.ToLowerInvariant())
* Invariant
* Works with uppercase strings

A: Without using a ready-made function, a super-simple low-level algorithm to convert a string to title case:
convert first character to uppercase.
for each character in string, if the previous character is whitespace, convert character to uppercase.
This assumes the "convert character to uppercase" will do that correctly regardless of whether or not the character is case-sensitive (e.g., '+').

A: With Perl you could do this:
my $tc_string = join ' ', map { ucfirst($_) } split /\s+/, $string;

A: Here you have a C++ version. It's got a set of non-uppercaseable words like pronouns and prepositions. However, I would not recommend automating this process if you are to deal with important texts.

#include <iostream>
#include <string>
#include <vector>
#include <cctype>
#include <set>

using namespace std;

typedef vector<pair<string, int> > subDivision;
set<string> nonUpperCaseAble;

subDivision split(string & cadena, string delim = " "){
    subDivision retorno;
    int pos, inic = 0;
    while((pos = cadena.find_first_of(delim, inic)) != cadena.npos){
        if(pos-inic > 0){
            retorno.push_back(make_pair(cadena.substr(inic, pos-inic), inic));
        }
        inic = pos+1;
    }
    if(inic != cadena.length()){
        retorno.push_back(make_pair(cadena.substr(inic, cadena.length() - inic), inic));
    }
    return retorno;
}

string firstUpper (string & pal){
    pal[0] = toupper(pal[0]);
    return pal;
}

int main()
{
    nonUpperCaseAble.insert("the");
    nonUpperCaseAble.insert("of");
    nonUpperCaseAble.insert("in");
    // ...

    string linea, resultado;
    cout << "Type the line you want to convert: " << endl;
    getline(cin, linea);

    subDivision trozos = split(linea);
    for(int i = 0; i < trozos.size(); i++){
        if(trozos[i].second == 0)
        {
            resultado += firstUpper(trozos[i].first);
        }
        else if (linea[trozos[i].second-1] == ' ')
        {
            if(nonUpperCaseAble.find(trozos[i].first) == nonUpperCaseAble.end())
            {
                resultado += " " + firstUpper(trozos[i].first);
            }else{
                resultado += " " + trozos[i].first;
            }
        }
        else
        {
            resultado += trozos[i].first;
        }
    }
    cout << resultado << endl;
    getchar();
    return 0;
}
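One more data point for the Python crowd: the standard library already ships the naive, whitespace-splitting version of this as string.capwords(). This is the real stdlib function, but note it does not handle the "small words" exceptions discussed above:

import string

# string.capwords() capitalises the first letter of each
# whitespace-separated word and lowercases the rest.
print(string.capwords("fORMAT me aS titLe cAsE"))  # -> "Format Me As Title Case"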
{ "language": "en", "url": "https://stackoverflow.com/questions/746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: How to create a new object instance from a Type
One may not always know the Type of an object at compile time, but may need to create an instance of the Type. How do you get a new object instance from a Type?

A: public AbstractType New
{
    get { return (AbstractType) Activator.CreateInstance(GetType()); }
}

A: I came across this question because I was looking to implement a simple CloneObject method for an arbitrary class (with a default constructor).
With a generic method you can require that the type implements New():

Public Function CloneObject(Of T As New)(ByVal src As T) As T
    Dim result As T = Nothing
    Dim cloneable = TryCast(src, ICloneable)
    If cloneable IsNot Nothing Then
        result = cloneable.Clone()
    Else
        result = New T
        CopySimpleProperties(src, result, Nothing, "clone")
    End If
    Return result
End Function

With the non-generic version, assume the type has a default constructor and catch an exception if it doesn't:

Public Function CloneObject(ByVal src As Object) As Object
    Dim result As Object = Nothing
    Dim cloneable As ICloneable
    Try
        cloneable = TryCast(src, ICloneable)
        If cloneable IsNot Nothing Then
            result = cloneable.Clone()
        Else
            result = Activator.CreateInstance(src.GetType())
            CopySimpleProperties(src, result, Nothing, "clone")
        End If
    Catch ex As Exception
        Trace.WriteLine("!!! CloneObject(): " & ex.Message)
    End Try
    Return result
End Function

A: It's pretty simple. Assume that your class name is Car and the namespace is Vehicles; then pass the parameter as Vehicles.Car, which returns an object of type Car. Like this you can create an instance of any class dynamically.

public object GetInstance(string strNamespace)
{
    Type t = Type.GetType(strNamespace);
    return Activator.CreateInstance(t);
}

If your fully qualified name (i.e., Vehicles.Car in this case) is in another assembly, Type.GetType will return null. In such cases, you have to loop through all assemblies and find the Type. For that you can use the code below:

public object GetInstance(string strFullyQualifiedName)
{
    Type type = Type.GetType(strFullyQualifiedName);
    if (type != null)
        return Activator.CreateInstance(type);
    foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
    {
        type = asm.GetType(strFullyQualifiedName);
        if (type != null)
            return Activator.CreateInstance(type);
    }
    return null;
}

And you can get the instance by calling the above method:
object objClassInstance = GetInstance("Vehicles.Car");

A: Without use of reflection:

private T Create<T>() where T : class, new()
{
    return new T();
}

A: The answer was already given:
ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);
However, the Activator class has a generic variant for the parameterless constructor that makes this slightly more readable, by making the cast unnecessary and not needing to pass the runtime type of the object:
ObjectType instance = Activator.CreateInstance<ObjectType>();

A: Compiled expressions are the best way (for performance when repeatedly creating instances at runtime)!
static readonly Func<X> YCreator = Expression.Lambda<Func<X>>(
    Expression.New(typeof(Y).GetConstructor(Type.EmptyTypes))
).Compile();

X x = YCreator();

Statistics (2012):
Iterations: 5000000
00:00:00.8481762, Activator.CreateInstance(string, string)
00:00:00.8416930, Activator.CreateInstance(type)
00:00:06.6236752, ConstructorInfo.Invoke
00:00:00.1776255, Compiled expression
00:00:00.0462197, new

Statistics (2015, .net 4.5, x64):
Iterations: 5000000
00:00:00.2659981, Activator.CreateInstance(string, string)
00:00:00.2603770, Activator.CreateInstance(type)
00:00:00.7478936, ConstructorInfo.Invoke
00:00:00.0700757, Compiled expression
00:00:00.0286710, new

Statistics (2015, .net 4.5, x86):
Iterations: 5000000
00:00:00.3541501, Activator.CreateInstance(string, string)
00:00:00.3686861, Activator.CreateInstance(type)
00:00:00.9492354, ConstructorInfo.Invoke
00:00:00.0719072, Compiled expression
00:00:00.0229387, new

Statistics (2017, LINQPad 5.22.02/x64/.NET 4.6):
Iterations: 5000000
No args
00:00:00.3897563, Activator.CreateInstance(string assemblyName, string typeName)
00:00:00.3500748, Activator.CreateInstance(Type type)
00:00:01.0100714, ConstructorInfo.Invoke
00:00:00.1375767, Compiled expression
00:00:00.1337920, Compiled expression (type)
00:00:00.0593664, new
Single arg
00:00:03.9300630, Activator.CreateInstance(Type type)
00:00:01.3881770, ConstructorInfo.Invoke
00:00:00.1425534, Compiled expression
00:00:00.0717409, new

Statistics (2019, x64/.NET 4.8):
Iterations: 5000000
No args
00:00:00.3287835, Activator.CreateInstance(string assemblyName, string typeName)
00:00:00.3122015, Activator.CreateInstance(Type type)
00:00:00.8035712, ConstructorInfo.Invoke
00:00:00.0692854, Compiled expression
00:00:00.0662223, Compiled expression (type)
00:00:00.0337862, new
Single arg
00:00:03.8081959, Activator.CreateInstance(Type type)
00:00:01.2507642, ConstructorInfo.Invoke
00:00:00.0671756, Compiled expression
00:00:00.0301489, new

Statistics (2019, x64/.NET Core 3.0):
Iterations: 5000000
No args
00:00:00.3226895, Activator.CreateInstance(string assemblyName, string typeName)
00:00:00.2786803, Activator.CreateInstance(Type type)
00:00:00.6183554, ConstructorInfo.Invoke
00:00:00.0483217, Compiled expression
00:00:00.0485119, Compiled expression (type)
00:00:00.0434534, new
Single arg
00:00:03.4389401, Activator.CreateInstance(Type type)
00:00:01.0803609, ConstructorInfo.Invoke
00:00:00.0554756, Compiled expression
00:00:00.0462232, new

Full code:

static X CreateY_New()
{
    return new Y();
}

static X CreateY_New_Arg(int z)
{
    return new Y(z);
}

static X CreateY_CreateInstance()
{
    return (X)Activator.CreateInstance(typeof(Y));
}

static X CreateY_CreateInstance_String()
{
    return (X)Activator.CreateInstance("Program", "Y").Unwrap();
}

static X CreateY_CreateInstance_Arg(int z)
{
    return (X)Activator.CreateInstance(typeof(Y), new object[] { z, });
}

private static readonly System.Reflection.ConstructorInfo YConstructor =
    typeof(Y).GetConstructor(Type.EmptyTypes);
private static readonly object[] Empty = new object[] { };
static X CreateY_Invoke()
{
    return (X)YConstructor.Invoke(Empty);
}

private static readonly System.Reflection.ConstructorInfo YConstructor_Arg =
    typeof(Y).GetConstructor(new[] { typeof(int), });
static X CreateY_Invoke_Arg(int z)
{
    return (X)YConstructor_Arg.Invoke(new object[] { z, });
}

private static readonly Func<X> YCreator = Expression.Lambda<Func<X>>(
    Expression.New(typeof(Y).GetConstructor(Type.EmptyTypes))
).Compile();
static X CreateY_CompiledExpression()
{
    return YCreator();
}
private static readonly Func<X> YCreator_Type = Expression.Lambda<Func<X>>(
    Expression.New(typeof(Y))
).Compile();
static X CreateY_CompiledExpression_Type()
{
    return YCreator_Type();
}

private static readonly ParameterExpression YCreator_Arg_Param = Expression.Parameter(typeof(int), "z");
private static readonly Func<int, X> YCreator_Arg = Expression.Lambda<Func<int, X>>(
    Expression.New(typeof(Y).GetConstructor(new[] { typeof(int), }), new[] { YCreator_Arg_Param, }),
    YCreator_Arg_Param
).Compile();
static X CreateY_CompiledExpression_Arg(int z)
{
    return YCreator_Arg(z);
}

static void Main(string[] args)
{
    const int iterations = 5000000;

    Console.WriteLine("Iterations: {0}", iterations);

    Console.WriteLine("No args");
    foreach (var creatorInfo in new[]
    {
        new {Name = "Activator.CreateInstance(string assemblyName, string typeName)", Creator = (Func<X>)CreateY_CreateInstance_String},
        new {Name = "Activator.CreateInstance(Type type)", Creator = (Func<X>)CreateY_CreateInstance},
        new {Name = "ConstructorInfo.Invoke", Creator = (Func<X>)CreateY_Invoke},
        new {Name = "Compiled expression", Creator = (Func<X>)CreateY_CompiledExpression},
        new {Name = "Compiled expression (type)", Creator = (Func<X>)CreateY_CompiledExpression_Type},
        new {Name = "new", Creator = (Func<X>)CreateY_New},
    })
    {
        var creator = creatorInfo.Creator;

        var sum = 0;
        for (var i = 0; i < 1000; i++)
            sum += creator().Z;

        var stopwatch = new Stopwatch();
        stopwatch.Start();
        for (var i = 0; i < iterations; ++i)
        {
            var x = creator();
            sum += x.Z;
        }
        stopwatch.Stop();
        Console.WriteLine("{0}, {1}", stopwatch.Elapsed, creatorInfo.Name);
    }

    Console.WriteLine("Single arg");
    foreach (var creatorInfo in new[]
    {
        new {Name = "Activator.CreateInstance(Type type)", Creator = (Func<int, X>)CreateY_CreateInstance_Arg},
        new {Name = "ConstructorInfo.Invoke", Creator = (Func<int, X>)CreateY_Invoke_Arg},
        new {Name = "Compiled expression", Creator = (Func<int, X>)CreateY_CompiledExpression_Arg},
        new {Name = "new", Creator = (Func<int, X>)CreateY_New_Arg},
    })
    {
        var creator = creatorInfo.Creator;

        var sum = 0;
        for (var i = 0; i < 1000; i++)
            sum += creator(i).Z;

        var stopwatch = new Stopwatch();
        stopwatch.Start();
        for (var i = 0; i < iterations; ++i)
        {
            var x = creator(i);
            sum += x.Z;
        }
        stopwatch.Stop();
        Console.WriteLine("{0}, {1}", stopwatch.Elapsed, creatorInfo.Name);
    }
}

public class X
{
    public X() { }
    public X(int z) { this.Z = z; }
    public int Z;
}

public class Y : X
{
    public Y() {}
    public Y(int z) : base(z) {}
}

A: If this is for something that will be called a lot in an application instance, it's a lot faster to compile and cache dynamic code instead of using the Activator or ConstructorInfo.Invoke(). Two easy options for dynamic compilation are compiled Linq Expressions or some simple IL opcodes and DynamicMethod. Either way, the difference is huge when you start getting into tight loops or multiple calls.

A: If you want to use the default constructor then the solution using System.Activator presented earlier is probably the most convenient. However, if the type lacks a default constructor or you have to use a non-default one, then an option is to use reflection or System.ComponentModel.TypeDescriptor. In the case of reflection, it is enough to know just the type name (with its namespace).
Example using reflection:

ObjectType instance = (ObjectType)System.Reflection.Assembly.GetExecutingAssembly().CreateInstance(
    typeName: objectType.FullName, // string including namespace of the type
    ignoreCase: false,
    bindingAttr: BindingFlags.Default,
    binder: null,  // use default binder
    args: new object[] { args, to, constructor },
    culture: null, // use CultureInfo from current thread
    activationAttributes: null
);

Example using TypeDescriptor:

ObjectType instance = (ObjectType)System.ComponentModel.TypeDescriptor.CreateInstance(
    provider: null, // use standard type description provider, which uses reflection
    objectType: objectType,
    argTypes: new Type[] { types, of, args },
    args: new object[] { args, to, constructor }
);

A: Given this problem, the Activator will work when there is a parameterless ctor. If this is a constraint, consider using
System.Runtime.Serialization.FormatterServices.GetSafeUninitializedObject()

A: Wouldn't the generic T t = new T(); work?

A: The Activator class within the root System namespace is pretty powerful. There are a lot of overloads for passing parameters to the constructor and such. Check out the documentation at:
http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx
or (new path)
https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance
Here are some simple examples:

ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);
ObjectType instance = (ObjectType)Activator.CreateInstance("MyAssembly", "MyNamespace.ObjectType").Unwrap();

(Note that the string-based overload returns a System.Runtime.Remoting.ObjectHandle, so it must be unwrapped before the cast.)
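One overload worth spelling out, since the thread mostly shows the parameterless case, is passing constructor arguments through Activator. This is a sketch; the Widget class is hypothetical, but the Activator.CreateInstance(Type, object[]) overload is the standard one:

using System;

public class Widget
{
    public Widget(string name, int size) { /* ... */ }
}

class Demo
{
    static void Main()
    {
        Type type = typeof(Widget);
        // Activator picks the constructor whose parameters match the argument types.
        object instance = Activator.CreateInstance(type, new object[] { "gadget", 42 });
    }
}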
{ "language": "en", "url": "https://stackoverflow.com/questions/752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "896" }
Q: Localising date format descriptors
What is the best way to localise a date format descriptor? As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The .NET framework provides some very good localisation support, so it's trivial to parse dates according to the user's culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy, which are interchangeable in most cultures).
What is the best way to do this in a way that makes sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters)?

A: Just use ISO-8601. It's an international standard.
Date and time (current at page generation) expressed according to ISO 8601:
Date: 2014-07-05
Combined date and time in UTC: 2014-07-05T04:00:25+00:00 2014-07-05T04:00:25Z
Week: 2014-W27
Date with week number: 2014-W27-6
Ordinal date: 2014-186

A: I have to agree with the OP: 'wrong' dates really jar with my DD/MM/YYYY upbringing, and I find ISO 8601 dates and times extremely easy to work with. For once the standard got it right, and engtech has the obvious answer that doesn't require localisation.
I was going to report the birthday input form on Stack Overflow as a bug because of how much of a sore thumb it is to the majority of the world.

A: Here is my current method. Any suggestions?

Regex singleMToDoubleRegex = new Regex("(?<!m)m(?!m)");
Regex singleDToDoubleRegex = new Regex("(?<!d)d(?!d)");
CultureInfo currentCulture = CultureInfo.CurrentUICulture;
// If the culture is neutral there is no date pattern to use, so use the default.
if (currentCulture.IsNeutralCulture)
{
    currentCulture = CultureInfo.InvariantCulture;
}
// Massage the format into a more general user-friendly form.
string shortDatePattern = currentCulture.DateTimeFormat.ShortDatePattern.ToLower();
shortDatePattern = singleMToDoubleRegex.Replace(shortDatePattern, "mm");
shortDatePattern = singleDToDoubleRegex.Replace(shortDatePattern, "dd");

A: The trouble with international standards is that pretty much no one uses them. I try where I can, but I am forced to use dd/mm/yyyy almost everywhere in real life, which means I am so used to it that it's always a conscious process to use ISO 8601.
For the majority of people who don't even try to use ISO 8601 it's even worse. If you can internationalize where you can, I think it's a great advantage.

A: How about giving the format (mm/dd/yyyy or dd/mm/yyyy) followed by a printout of today's date in the user's culture?
MSDN has an article on formatting a DateTime for the person's culture, using the CultureInfo object, that might be helpful in doing this.
A combination of the format (which most people are familiar with) with the current date represented in that format should be enough of a clue to the person on how they should enter the date. (Also include a calendar control for those who still can't figure it out.)

A: A short form is convenient and helps avoid spelling mistakes. Localize as applicable, but be sure to display the expected format (do not leave the user blind). Provide a date picker control as an optional aide to filling in the field. As an extra, on-the-fly parsing and display of the date in long form might help too.

A: Best option: I would instead recommend using a standard date picker.
Alternative: every time the content of the edit control changes, parse it and display
the long format of the date in a separate control (i.e. input "03/04/09", display "Your input: March 4, 2009").
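A rough sketch of that parse-and-echo alternative in C# (the control names are hypothetical; DateTime.TryParse and the "D" long-date specifier are standard framework pieces):

using System;
using System.Globalization;

// Called whenever the text in the (hypothetical) date box changes.
void DateTextBox_TextChanged(object sender, EventArgs e)
{
    CultureInfo culture = CultureInfo.CurrentUICulture;
    DateTime parsed;
    if (DateTime.TryParse(dateTextBox.Text, culture, DateTimeStyles.None, out parsed))
    {
        // "D" is the culture-aware long date pattern, e.g. "March 4, 2009".
        previewLabel.Text = "Your input: " + parsed.ToString("D", culture);
    }
    else
    {
        // Remind the user of the expected short-date pattern.
        previewLabel.Text = "Expected format: " + culture.DateTimeFormat.ShortDatePattern;
    }
}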
{ "language": "en", "url": "https://stackoverflow.com/questions/761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Python and MySQL
I can get Python to work with PostgreSQL, but I cannot get it to work with MySQL. The main problem is that on the shared hosting account I have, I do not have the ability to install things such as Django or PySQL; I generally fail when installing them on my computer, so maybe it's good I can't install on the host.
I found bpgsql really good because it does not require an install - it's a single file that I can look at, read and then call the functions of. Does anybody know of something like this for MySQL?

A: I don't have any experience with http://www.SiteGround.com as a web host personally. This is just a guess, but it's common for a shared host to support Python and MySQL with the MySQLdb module (e.g., GoDaddy does this). Try the following CGI script to see if MySQLdb is installed.

#!/usr/bin/python
module_name = 'MySQLdb'
head = '''Content-Type: text/html

%s is ''' % module_name
try:
    __import__(module_name)
    print head + 'installed'
except ImportError:
    print head + 'not installed'

A: I uploaded it and got an internal error: Premature end of script headers
After much playing around, I found that if I had

import cgi
import cgitb; cgitb.enable()
import MySQLdb

it would give me a much more useful answer and say that it was not installed - you can see it yourself -> http://woarl.com/db.py
Oddly enough, this would produce an error:

import MySQLdb
import cgi
import cgitb; cgitb.enable()

I looked at some of the other files I had up there and it seems that library was one of the ones I had already tried.

A: You could try setting up your own Python installation using Virtual Python. Check out how to set up Django using it here. That was written a long time ago, but it shows how I got MySQLdb set up without having root access or anything like it. Once you've got the basics going, you can install any Python library you want.

A: You really want MySQLdb for any MySQL + Python code. However, you shouldn't need root access or anything to use it. You can build/install it in a user directory (~/lib/python2.x/site-packages), and just add that to your PYTHONPATH env variable. This should work for just about any Python library. Give it a shot; there really isn't a good alternative.

A: MySQLdb is what I have used before.
If your host is using Python version 2.5 or higher, support for sqlite3 databases is built in (sqlite allows you to have a relational database that is simply a file in your filesystem). But buyer beware: sqlite is not suited for production, so it may depend on what you are trying to do with it.
Another option may be to call your host and complain, or change hosts. Honestly, these days any self-respecting web host that supports Python and MySQL ought to have MySQLdb preinstalled.

A: Take a look at https://docs.djangoproject.com/en/1.8/ref/databases/
MySQLdb is the most-used driver, but if you are using Python 3 and Django 1.8.x that will not work; then you should use mysqlclient, which is a fork of MySQLdb, at the following link: https://pypi.python.org/pypi/mysqlclient
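Once the CGI check above confirms MySQLdb is installed, basic usage looks like the sketch below (Python 2 era, to match the script above; the host, credentials and query are placeholders):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='dbuser',
                       passwd='secret', db='mydb')  # placeholder credentials
try:
    cursor = conn.cursor()
    cursor.execute("SELECT VERSION()")
    print cursor.fetchone()
finally:
    conn.close()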
{ "language": "en", "url": "https://stackoverflow.com/questions/766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Solving a linear equation
I need to programmatically solve a system of linear equations in C, Objective-C, or (if needed) C++. Here's an example of the equations:

-44.3940 = a * 50.0 + b * 37.0 + tx
-45.3049 = a * 43.0 + b * 39.0 + tx
-44.9594 = a * 52.0 + b * 41.0 + tx

From this, I'd like to get the best approximation for a, b, and tx.

A: For a 3x3 system of linear equations, I guess it would be okay to roll your own algorithms. However, you might have to worry about accuracy, division by zero or really small numbers, and what to do about infinitely many solutions. My suggestion is to go with a standard numerical linear algebra package such as LAPACK.

A: Take a look at the Microsoft Solver Foundation. With it you could write code like this:

SolverContext context = SolverContext.GetContext();
Model model = context.CreateModel();

Decision a = new Decision(Domain.Real, "a");
Decision b = new Decision(Domain.Real, "b");
Decision c = new Decision(Domain.Real, "c");
model.AddDecisions(a, b, c);
model.AddConstraint("eqA", -44.3940 == 50 * a + 37 * b + c);
model.AddConstraint("eqB", -45.3049 == 43 * a + 39 * b + c);
model.AddConstraint("eqC", -44.9594 == 52 * a + 41 * b + c);
Solution solution = context.Solve();
string results = solution.GetReport().ToString();
Console.WriteLine(results);

Here is the output:

===Solver Foundation Service Report===
Datetime: 04/20/2009 23:29:55
Model Name: Default
Capabilities requested: LP
Solve Time (ms): 1027
Total Time (ms): 1414
Solve Completion Status: Optimal
Solver Selected: Microsoft.SolverFoundation.Solvers.SimplexSolver
Directives: Microsoft.SolverFoundation.Services.Directive
Algorithm: Primal
Arithmetic: Hybrid
Pricing (exact): Default
Pricing (double): SteepestEdge
Basis: Slack
Pivot Count: 3
===Solution Details===
Goals:
Decisions:
a: 0.0785250000000004
b: -0.180612500000001
c: -41.6375875

A: Are you looking for a software package that'll do the work, or for actually doing the matrix operations and such and doing each step?
For the first, a coworker of mine just used OCaml GLPK. It is just a wrapper for the GLPK, but it removes a lot of the steps of setting things up. It looks like you're going to have to stick with the GLPK, in C, though. For the latter, thanks to delicious for saving an old article I used to learn LP a while back, PDF. If you need specific help setting up further, let us know, and I'm sure me or someone will wander back in and help, but I think it's fairly straightforward from here. Good luck!

A: Template Numerical Toolkit from NIST has tools for doing that. One of the more reliable ways is to use a QR decomposition. Here's an example of a wrapper so that I can call "GetInverse(A, InvA)" in my code and it will put the inverse into InvA.

void GetInverse(const Array2D<double>& A, Array2D<double>& invA)
{
    QR<double> qr(A);
    invA = qr.solve(I);
}

Array2D is defined in the library.

A: In terms of run-time efficiency, others have answered better than I. If you will always have the same number of equations as variables, I like Cramer's rule as it's easy to implement. Just write a function to calculate the determinant of a matrix (or use one that's already written, I'm sure you can find one out there), and divide the determinants of two matrices.

A: Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course).
EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of Python). Also, it's under the new BSD license, while most of the free math packages are GPL.

A: Personally, I'm partial to the algorithms of Numerical Recipes. (I'm fond of the C++ edition.) This book will teach you why the algorithms work, plus show you some pretty well debugged implementations of those algorithms.
Of course, you could just blindly use CLAPACK (I've used it with great success), but I would first hand-type a Gaussian Elimination algorithm to at least have a faint idea of the kind of work that has gone into making these algorithms stable.
Later, if you're doing more interesting linear algebra, looking around the source code of Octave will answer a lot of questions.

A: From the wording of your question, it seems like you have more equations than unknowns and you want to minimize the inconsistencies. This is typically done with linear regression, which minimizes the sum of the squares of the inconsistencies. Depending on the size of the data, you can do this in a spreadsheet or in a statistical package. R is a high-quality, free package that does linear regression, among a lot of other things. There is a lot to linear regression (and a lot of gotchas), but it's straightforward to do for simple cases. Here's an R example using your data. Note that the "tx" is the intercept to your model.

> y <- c(-44.394, -45.3049, -44.9594)
> a <- c(50.0, 43.0, 52.0)
> b <- c(37.0, 39.0, 41.0)
> regression = lm(y ~ a + b)
> regression

Call:
lm(formula = y ~ a + b)

Coefficients:
(Intercept)            a            b
  -41.63759      0.07852     -0.18061

A: You can solve this with a program exactly the same way you solve it by hand (with multiplication and subtraction, then feeding results back into the equations). This is pretty standard secondary-school-level mathematics.

-44.3940 = 50a + 37b + c (A)
-45.3049 = 43a + 39b + c (B)
-44.9594 = 52a + 41b + c (C)

(A-B): 0.9109 = 7a - 2b (D)
(B-C): -0.3455 = -9a - 2b (E)
(D-E): 1.2564 = 16a (F)
(F/16): a = 0.078525 (G)

Feed G into D:
0.9109 = 7a - 2b
=> 0.9109 = 0.549675 - 2b (substitute a)
=> 0.361225 = -2b (subtract 0.549675 from both sides)
=> -0.1806125 = b (divide both sides by -2) (H)

Feed H/G into A:
-44.3940 = 50a + 37b + c
=> -44.3940 = 3.92625 - 6.6826625 + c (substitute a/b)
=> -41.6375875 = c (subtract 3.92625 - 6.6826625 from both sides)

So you end up with:
a = 0.0785250
b = -0.1806125
c = -41.6375875

If you plug these values back into A, B and C, you'll find they're correct.
The trick is to use a simple 4x3 matrix which reduces in turn to a 3x2 matrix, then a 2x1 which is "a = n", n being an actual number. Once you have that, you feed it into the next matrix up to get another value, then those two values into the next matrix up until you've solved all variables.
Provided you have N distinct equations, you can always solve for N variables. I say distinct because these two are not:

7a + 2b = 50
14a + 4b = 100

They are the same equation multiplied by two, so you cannot get a solution from them - multiplying the first by two then subtracting leaves you with the true but useless statement:

0 = 0 + 0

By way of example, here's some C code that works out the simultaneous equations that you placed in your question.
First some necessary types, variables, a support function for printing out an equation, and the start of main:

#include <stdio.h>

typedef struct {
    double r, a, b, c;
} tEquation;

tEquation equ1[] = {
    { -44.3940, 50, 37, 1 },      // -44.3940 = 50a + 37b + c (A)
    { -45.3049, 43, 39, 1 },      // -45.3049 = 43a + 39b + c (B)
    { -44.9594, 52, 41, 1 },      // -44.9594 = 52a + 41b + c (C)
};
tEquation equ2[2], equ3[1];

static void dumpEqu (char *desc, tEquation *e, char *post) {
    printf ("%10s: %12.8lf = %12.8lfa + %12.8lfb + %12.8lfc (%s)\n",
        desc, e->r, e->a, e->b, e->c, post);
}

int main (void) {
    double a, b, c;

Next, the reduction of the three equations with three unknowns to two equations with two unknowns:

    // First step, populate equ2 based on removing c from equ.
    dumpEqu (">", &(equ1[0]), "A");
    dumpEqu (">", &(equ1[1]), "B");
    dumpEqu (">", &(equ1[2]), "C");
    puts ("");

    // A - B
    equ2[0].r = equ1[0].r * equ1[1].c - equ1[1].r * equ1[0].c;
    equ2[0].a = equ1[0].a * equ1[1].c - equ1[1].a * equ1[0].c;
    equ2[0].b = equ1[0].b * equ1[1].c - equ1[1].b * equ1[0].c;
    equ2[0].c = 0;

    // B - C
    equ2[1].r = equ1[1].r * equ1[2].c - equ1[2].r * equ1[1].c;
    equ2[1].a = equ1[1].a * equ1[2].c - equ1[2].a * equ1[1].c;
    equ2[1].b = equ1[1].b * equ1[2].c - equ1[2].b * equ1[1].c;
    equ2[1].c = 0;

    dumpEqu ("A-B", &(equ2[0]), "D");
    dumpEqu ("B-C", &(equ2[1]), "E");
    puts ("");

Next, the reduction of the two equations with two unknowns to one equation with one unknown:

    // Next step, populate equ3 based on removing b from equ2.
    // D - E
    equ3[0].r = equ2[0].r * equ2[1].b - equ2[1].r * equ2[0].b;
    equ3[0].a = equ2[0].a * equ2[1].b - equ2[1].a * equ2[0].b;
    equ3[0].b = 0;
    equ3[0].c = 0;

    dumpEqu ("D-E", &(equ3[0]), "F");
    puts ("");

Now that we have a formula of the type number1 = unknown * number2, we can simply work out the unknown value with unknown <- number1 / number2. Then, once you've figured that value out, substitute it into one of the equations with two unknowns and work out the second value. Then substitute both those (now-known) unknowns into one of the original equations and you now have the values for all three unknowns:

    // Finally, substitute values back into equations.
    a = equ3[0].r / equ3[0].a;
    printf ("From (F ), a = %12.8lf (G)\n", a);

    b = (equ2[0].r - equ2[0].a * a) / equ2[0].b;
    printf ("From (D,G ), b = %12.8lf (H)\n", b);

    c = (equ1[0].r - equ1[0].a * a - equ1[0].b * b) / equ1[0].c;
    printf ("From (A,G,H), c = %12.8lf (I)\n", c);

    return 0;
}

The output of that code matches the earlier calculations in this answer:

>: -44.39400000 = 50.00000000a + 37.00000000b + 1.00000000c (A)
>: -45.30490000 = 43.00000000a + 39.00000000b + 1.00000000c (B)
>: -44.95940000 = 52.00000000a + 41.00000000b + 1.00000000c (C)

A-B: 0.91090000 = 7.00000000a + -2.00000000b + 0.00000000c (D)
B-C: -0.34550000 = -9.00000000a + -2.00000000b + 0.00000000c (E)

D-E: -2.51280000 = -32.00000000a + 0.00000000b + 0.00000000c (F)

From (F ), a = 0.07852500 (G)
From (D,G ), b = -0.18061250 (H)
From (A,G,H), c = -41.63758750 (I)

A: function x = LinSolve(A,y)
%
% Recursive Solution of Linear System Ax=y
% matlab equivalent: x = A\y
% x = n x 1
% A = n x n
% y = n x 1
% Uses stack space extensively. Not efficient.
% C allows recursion, so convert it into C.
% ----------------------------------------------
n=length(y);
x=zeros(n,1);
if(n>1)
    x(1:n-1,1) = LinSolve( A(1:n-1,1:n-1) - (A(1:n-1,n)*A(n,1:n-1))./A(n,n) , ...
        y(1:n-1,1) - A(1:n-1,n).*(y(n,1)/A(n,n)));
    x(n,1) = (y(n,1) - A(n,1:n-1)*x(1:n-1,1))./A(n,n);
else
    x = y(1,1) / A(1,1);
end

A: For general cases, you could use Python along with NumPy for Gaussian elimination, and then substitute the solved values back in to recover the remaining unknowns.
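A minimal sketch of that NumPy route, using the system from the question. numpy.linalg.solve handles the square 3x3 case exactly, and lstsq is the least-squares fallback if you ever have more equations than unknowns:

import numpy as np

# Coefficients of a, b, tx from the question's three equations.
A = np.array([[50.0, 37.0, 1.0],
              [43.0, 39.0, 1.0],
              [52.0, 41.0, 1.0]])
y = np.array([-44.3940, -45.3049, -44.9594])

a, b, tx = np.linalg.solve(A, y)
print(a, b, tx)  # approximately 0.078525, -0.1806125, -41.6375875

# For an overdetermined system, minimise the squared error instead:
# coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)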
{ "language": "en", "url": "https://stackoverflow.com/questions/769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: How do I use itertools.groupby()?
I haven't been able to find an understandable explanation of how to actually use Python's itertools.groupby() function. What I'm trying to do is this:
* Take a list - in this case, the children of an objectified lxml element
* Divide it into groups based on some criteria
* Then later iterate over each of these groups separately.
I've reviewed the documentation, but I've had trouble trying to apply it beyond a simple list of numbers.
So, how do I make use of itertools.groupby()? Is there another technique I should be using? Pointers to good "prerequisite" reading would also be appreciated.

A: @CaptSolo, I tried your example, but it didn't work.

from itertools import groupby
[(c,len(list(cs))) for c,cs in groupby('Pedro Manoel')]

Output:

[('P', 1), ('e', 1), ('d', 1), ('r', 1), ('o', 1), (' ', 1), ('M', 1), ('a', 1), ('n', 1), ('o', 1), ('e', 1), ('l', 1)]

As you can see, there are two o's and two e's, but they got into separate groups. That's when I realized you need to sort the list passed to the groupby function. So, the correct usage would be:

name = list('Pedro Manoel')
name.sort()
[(c,len(list(cs))) for c,cs in groupby(name)]

Output:

[(' ', 1), ('M', 1), ('P', 1), ('a', 1), ('d', 1), ('e', 2), ('l', 1), ('n', 1), ('o', 2), ('r', 1)]

Just remember: if the list is not sorted, the groupby function will not work!

A: Sorting and groupby:

from itertools import groupby
val = [{'name': 'satyajit', 'address': 'btm', 'pin': 560076},
       {'name': 'Mukul', 'address': 'Silk board', 'pin': 560078},
       {'name': 'Preetam', 'address': 'btm', 'pin': 560076}]

for pin, list_data in groupby(sorted(val, key=lambda k: k['pin']), lambda x: x['pin']):
...     print pin
...     for rec in list_data:
...         print rec
...

o/p:

560076
{'name': 'satyajit', 'pin': 560076, 'address': 'btm'}
{'name': 'Preetam', 'pin': 560076, 'address': 'btm'}
560078
{'name': 'Mukul', 'pin': 560078, 'address': 'Silk board'}

A: IMPORTANT NOTE: You have to sort your data first.
The part I didn't get is that in the example construction

groups = []
uniquekeys = []
for k, g in groupby(data, keyfunc):
    groups.append(list(g))  # Store group iterator as a list
    uniquekeys.append(k)

k is the current grouping key, and g is an iterator that you can use to iterate over the group defined by that grouping key. In other words, the groupby iterator itself returns iterators.
Here's an example of that, using clearer variable names:

from itertools import groupby

things = [("animal", "bear"), ("animal", "duck"), ("plant", "cactus"),
          ("vehicle", "speed boat"), ("vehicle", "school bus")]

for key, group in groupby(things, lambda x: x[0]):
    for thing in group:
        print("A %s is a %s." % (thing[1], key))
    print("")

This will give you the output:

A bear is a animal.
A duck is a animal.
A cactus is a plant.
A speed boat is a vehicle.
A school bus is a vehicle.

In this example, things is a list of tuples where the first item in each tuple is the group the second item belongs to.
The groupby() function takes two arguments: (1) the data to group and (2) the function to group it with.
Here, lambda x: x[0] tells groupby() to use the first item in each tuple as the grouping key.
In the above for statement, groupby returns three (key, group iterator) pairs - one for each unique key. You can use the returned iterator to iterate over each individual item in that group.
Here's a slightly different example with the same data, using a list comprehension:

for key, group in groupby(things, lambda x: x[0]):
    listOfThings = " and ".join([thing[1] for thing in group])
    print(key + "s: " + listOfThings + ".")

This will give you the output:

animals: bear and duck.
plants: cactus.
vehicles: speed boat and school bus.

A: Sadly I don't think it's advisable to use itertools.groupby(). It's just too hard to use safely, and it's only a handful of lines to write something that works as expected.

from collections import defaultdict

def my_group_by(iterable, keyfunc):
    """Because itertools.groupby is tricky to use

    The stdlib method requires sorting in advance, and returns iterators not
    lists, and those iterators get consumed as you try to use them, throwing
    everything off if you try to look at something more than once.
    """
    ret = defaultdict(list)
    for k in iterable:
        ret[keyfunc(k)].append(k)
    return dict(ret)

Use it like this:

def first_letter(x):
    return x[0]

my_group_by('four score and seven years ago'.split(), first_letter)

to get

{'f': ['four'], 's': ['score', 'seven'], 'a': ['and', 'ago'], 'y': ['years']}

A: The example on the Python docs is quite straightforward:

groups = []
uniquekeys = []
for k, g in groupby(data, keyfunc):
    groups.append(list(g))  # Store group iterator as a list
    uniquekeys.append(k)

So in your case, data is a list of nodes, keyfunc is where the logic of your criteria function goes, and then groupby() groups the data.
You must be careful to sort the data by the criteria before you call groupby, or it won't work. The groupby method actually just iterates through a list and whenever the key changes it creates a new group.

A: How do I use Python's itertools.groupby()?
You can use groupby to group things to iterate over. You give groupby an iterable, and an optional key function/callable by which to check the items as they come out of the iterable, and it returns an iterator that gives a two-tuple of the result of the key callable and the actual items in another iterable. From the help:

groupby(iterable[, keyfunc]) -> create an iterator which returns (key, sub-iterator) grouped by each value of key(value).

Here's an example of groupby using a coroutine to group by a count; it uses a key callable (in this case, coroutine.send) to just spit out the count for however many iterations, plus a grouped sub-iterator of elements:

import itertools

def grouper(iterable, n):
    def coroutine(n):
        yield  # queue up coroutine
        for i in itertools.count():
            for j in range(n):
                yield i
    groups = coroutine(n)
    next(groups)  # queue up coroutine

    for c, objs in itertools.groupby(iterable, groups.send):
        yield c, list(objs)
    # or instead of materializing a list of objs, just:
    # return itertools.groupby(iterable, groups.send)

list(grouper(range(10), 3))

prints

[(0, [0, 1, 2]), (1, [3, 4, 5]), (2, [6, 7, 8]), (3, [9])]

A: This basic implementation helped me understand this function. Hope it helps others as well:

arr = [(1, "A"), (1, "B"), (1, "C"), (2, "D"), (2, "E"), (3, "F")]

for k, g in groupby(arr, lambda x: x[0]):
    print("--", k, "--")
    for tup in g:
        print(tup[1])  # tup[0] == k

-- 1 --
A
B
C
-- 2 --
D
E
-- 3 --
F

A: A neat trick with groupby is to do run-length encoding in one line:

[(c,len(list(cgen))) for c,cgen in groupby(some_string)]

will give you a list of 2-tuples where the first element is the char and the 2nd is the number of repetitions.
Edit: Note that this is what separates itertools.groupby from the SQL GROUP BY semantics: itertools doesn't (and in general can't) sort the iterator in advance, so groups with the same "key" aren't merged.

A: One useful example that I came across may be helpful:

from itertools import groupby

# user input
myinput = input()

# creating empty list to store output
myoutput = []
for k, g in groupby(myinput):
    myoutput.append((len(list(g)), int(k)))
print(*myoutput)

Sample input: 14445221
Sample output: (1,1) (3,4) (1,5) (2,2) (1,1)

A: from random import randint
from itertools import groupby

l = [randint(1, 3) for _ in range(20)]

d = {}
for k, g in groupby(l, lambda x: x):
    if not d.get(k, None):
        d[k] = list(g)
    else:
        d[k] = d[k] + list(g)

The code above shows how groupby can be used to group a list based on the lambda function/key supplied. The only problem is that the output is not merged; this can be easily resolved using a dictionary.
Example:

l = [2, 1, 2, 3, 1, 3, 2, 1, 3, 3, 1, 3, 2, 3, 1, 2, 1, 3, 2, 3]

After applying groupby the result will be:

for k, g in groupby(l, lambda x: x):
    print(k, list(g))

2 [2]
1 [1]
2 [2]
3 [3]
1 [1]
3 [3]
2 [2]
1 [1]
3 [3, 3]
1 [1]
3 [3]
2 [2]
3 [3]
1 [1]
2 [2]
1 [1]
3 [3]
2 [2]
3 [3]

Once a dictionary is used as shown above, the following result is derived, which can be easily iterated over:

{2: [2, 2, 2, 2, 2, 2], 1: [1, 1, 1, 1, 1, 1], 3: [3, 3, 3, 3, 3, 3, 3, 3]}

A: Another example:

for key, igroup in itertools.groupby(xrange(12), lambda x: x // 5):
    print key, list(igroup)

results in

0 [0, 1, 2, 3, 4]
1 [5, 6, 7, 8, 9]
2 [10, 11]

Note that igroup is an iterator (a sub-iterator as the documentation calls it).
This is useful for chunking a generator:

def chunker(items, chunk_size):
    '''Group items in chunks of chunk_size'''
    for _key, group in itertools.groupby(enumerate(items), lambda x: x[0] // chunk_size):
        yield (g[1] for g in group)

with open('file.txt') as fobj:
    for chunk in chunker(fobj, 100):  # e.g. 100-line chunks
        process(chunk)

Another example of groupby - when the keys are not sorted. In the following example, items in xx are grouped by values in yy. In this case, one set of zeros is output first, followed by a set of ones, followed again by a set of zeros.

xx = range(10)
yy = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
for group in itertools.groupby(iter(xx), lambda x: yy[x]):
    print group[0], list(group[1])

Produces:

0 [0, 1, 2]
1 [3, 4, 5]
0 [6, 7, 8, 9]

A: WARNING: The syntax list(groupby(...)) won't work the way that you intend. It seems to destroy the internal iterator objects, so using

for x in list(groupby(range(10))):
    print(list(x[1]))

will produce:

[]
[]
[]
[]
[]
[]
[]
[]
[]
[9]

Instead of list(groupby(...)), try [(k, list(g)) for k,g in groupby(...)], or if you use that syntax often,

def groupbylist(*args, **kwargs):
    return [(k, list(g)) for k, g in groupby(*args, **kwargs)]

and get access to the groupby functionality while avoiding those pesky (for small data) iterators altogether.

A: itertools.groupby is a tool for grouping items.
From the docs, we glean further what it might do:

# [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
# [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D

groupby objects yield key-group pairs where the group is a generator.
Features
* A. Group consecutive items together
* B. Group all occurrences of an item, given a sorted iterable
* C. Specify how to group items with a key function *
Comparisons

# Define a printer for comparing outputs
>>> import itertools as it
>>> def print_groupby(iterable, keyfunc=None):
...     for k, g in it.groupby(iterable, keyfunc):
print("key: '{}'--> group: {}".format(k, list(g))) # Feature A: group consecutive occurrences >>> print_groupby("BCAACACAADBBB") key: 'B'--> group: ['B'] key: 'C'--> group: ['C'] key: 'A'--> group: ['A', 'A'] key: 'C'--> group: ['C'] key: 'A'--> group: ['A'] key: 'C'--> group: ['C'] key: 'A'--> group: ['A', 'A'] key: 'D'--> group: ['D'] key: 'B'--> group: ['B', 'B', 'B'] # Feature B: group all occurrences >>> print_groupby(sorted("BCAACACAADBBB")) key: 'A'--> group: ['A', 'A', 'A', 'A', 'A'] key: 'B'--> group: ['B', 'B', 'B', 'B'] key: 'C'--> group: ['C', 'C', 'C'] key: 'D'--> group: ['D'] # Feature C: group by a key function >>> # islower = lambda s: s.islower() # equivalent >>> def islower(s): ... """Return True if a string is lowercase, else False.""" ... return s.islower() >>> print_groupby(sorted("bCAaCacAADBbB"), keyfunc=islower) key: 'False'--> group: ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'D'] key: 'True'--> group: ['a', 'a', 'b', 'b', 'c'] Uses * *Anagrams (see notebook) *Binning *Group odd and even numbers *Group a list by values *Remove duplicate elements *Find indices of repeated elements in an array *Split an array into n-sized chunks *Find corresponding elements between two lists *Compression algorithm (see notebook)/Run Length Encoding *Grouping letters by length, key function (see notebook) *Consecutive values over a threshold (see notebook) *Find ranges of numbers in a list or continuous items (see docs) *Find all related longest sequences *Take consecutive sequences that meet a condition (see related post) Note: Several of the latter examples derive from Víctor Terrón's PyCon (talk) (Spanish), "Kung Fu at Dawn with Itertools". See also the groupby source code written in C. * A function where all items are passed through and compared, influencing the result. Other objects with key functions include sorted(), max() and min(). Response # OP: Yes, you can use `groupby`, e.g. [do_something(list(g)) for _, g in groupby(lxml_elements, criteria_func)] A: I would like to give another example where groupby without sort is not working. Adapted from example by James Sulak from itertools import groupby things = [("vehicle", "bear"), ("animal", "duck"), ("animal", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")] for key, group in groupby(things, lambda x: x[0]): for thing in group: print "A %s is a %s." % (thing[1], key) print " " output is A bear is a vehicle. A duck is a animal. A cactus is a animal. A speed boat is a vehicle. A school bus is a vehicle. there are two groups with vehicule, whereas one could expect only one group A: The key thing to recognize with itertools.groupby is that items are only grouped together as long as they're sequential in the iterable. This is why sorting works, because basically you're rearranging the collection so that all of the items which satisfy callback(item) now appear in the sorted collection sequentially. That being said, you don't need to sort the list, you just need a collection of key-value pairs, where the value can grow in accordance to each group iterable yielded by groupby. i.e. a dict of lists. >>> things = [("vehicle", "bear"), ("animal", "duck"), ("animal", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")] >>> coll = {} >>> for k, g in itertools.groupby(things, lambda x: x[0]): ... coll.setdefault(k, []).extend(i for _, i in g) ... {'vehicle': ['bear', 'speed boat', 'school bus'], 'animal': ['duck', 'cactus']}
{ "language": "en", "url": "https://stackoverflow.com/questions/773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "664" }
Q: ASP, need to use SFTP This is ASP classic, not .NET. We have to get a way to SFTP into a server to upload and download a couple of files, kicked off by a user. What have other people used to do SFTP in ASP classic? Not necessarily opposed to purchasing a control. A: If you have the ability to use WScript.Shell, then you can just execute pscp.exe from the PuTTY package. Obviously this is less than ideal, but it will get the job done and let you use SCP/SFTP in classic ASP. A: The way I have done this is to create a command script file and pass it on the command line via the -b option to psftp.exe. I have also tried this in Perl and have yet to find a neater way of doing it. There is an issue with this method, in that you already have to have accepted the RSA fingerprint. If not, the script will either wait for user input to accept it or will skip over it if you are running in full batch mode, with a failure. Also, if the server changes so that its RSA fingerprint changes (e.g. a cluster), then you need to re-accept the fingerprint again. Not an ideal method, but the only one I know. I shall be watching this question in case anyone knows another way. A: I used to do that with FTP on Windows (create a file of commands and shell out to FTP.exe). A: I've previously used a component from here: www.weonlydo.com. I didn't find it the easiest piece of kit to develop against, but it got the job done in a hurry. A: December 2020: * *ASP is dead; it was superseded by ASP.NET 18 years ago. *At this time, the most common way to use SFTP in .NET is to use the SSH.NET NuGet package. Maybe this question should be closed?
{ "language": "en", "url": "https://stackoverflow.com/questions/805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }