Why does the divide and average method for computing square roots work?
The divide and average method for computing square roots starts with a guess of the square root. For example, to compute the square root of 8, we might start with 3, since we know 3 squared is 9. Given a guess, the method is a formula for generating the next guess: the next guess is the average of the current guess and the quotient of the number (the one whose square root you want) divided by the current guess. Returning to our example, the next guess is (3 + 8/3)/2 = 17/6 ≈ 2.833. Applying this formula repeatedly yields guesses that get closer and closer to the actual square root.
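The iteration described above is short enough to sketch directly. The function name and tolerance below are my own choices, not part of the original description:

```python
def divide_and_average(number, guess, tolerance=1e-10):
    # Repeat the update rule: average the current guess with
    # number / guess, until the guess squares to (nearly) number.
    while abs(guess * guess - number) > tolerance:
        guess = (guess + number / guess) / 2
    return guess

# Example from the text: the square root of 8, starting from 3.
# The first update gives (3 + 8/3) / 2 = 17/6 ≈ 2.8333.
print(divide_and_average(8, 3))
```

Each iteration roughly doubles the number of correct digits, so only a handful of updates are needed even for a tight tolerance.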

The reason this works is that the limit of the sequence of guesses generated by this formula is the square root. To see this, first recognize that in the limit the next guess must equal the current guess: the guesses change by less and less, so the limit is a fixed point of the formula. Using a variable X to represent the limit and applying the formula to it yields an equation that can be solved for X with algebra. The solution is that X must be the square root.
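The algebra mentioned above can be written out explicitly. Writing X for the limit and S for the number whose square root we want, the fixed-point condition is that applying the formula to X returns X:

```latex
X = \frac{1}{2}\left(X + \frac{S}{X}\right)
\;\Longrightarrow\;
2X = X + \frac{S}{X}
\;\Longrightarrow\;
X = \frac{S}{X}
\;\Longrightarrow\;
X^2 = S
\;\Longrightarrow\;
X = \sqrt{S}.
```

(The last step takes the positive root, since the guesses are positive whenever the starting guess is.)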