Question:

# How do you calculate a range of numbers in Excel?

## Select a cell below or to the right of the numbers for which you want to find the smallest or largest number. Click the arrow next to the AutoSum button, then click Min (calculates the smallest) or Max (calculates the largest), and press ENTER.
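The steps above give the largest or smallest value on its own; the statistical range of a set of numbers is the difference between the two. Assuming the numbers are in cells A1:A10 (an illustrative range), a single formula can compute it:

```
=MAX(A1:A10)-MIN(A1:A10)
```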

In mathematics and computer science, the binary numeral system, or base-2 numeral system, represents numeric values using two symbols: typically 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2. Numbers represented in this system are commonly called binary numbers. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers and computer-based devices such as mobile phones. The Indian scholar Pingala (around 5th–2nd centuries BC) developed mathematical concepts for describing prosody, and in doing so presented the first known description of a binary numeral system. He used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), making it similar to Morse code. Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. An example of such a matrix is as follows (note that these binary representations are "backwards" compared to modern, Western positional notation): A set of eight trigrams (Bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to the three-bit and six-bit binary numerals, were in usage at least as early as the Zhou Dynasty of ancient China through the classic text Yijing. In the 11th century, scholar and philosopher Shao Yong developed a method for arranging the hexagrams which corresponds, albeit unintentionally, to the sequence 0 to 63, as represented in binary, with yin as 0, yang as 1 and the least significant bit on top. The ordering is also the lexicographical order on sextuples of elements chosen from a two-element set. Similar sets of binary combinations have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy. The base-2 system utilized in geomancy had long been widely applied in sub-Saharan Africa. 
In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature". (See Bacon's cipher.) The modern binary number system was discovered by Gottfried Leibniz in 1679. See his article Explication de l'Arithmétique Binaire (1703). Leibniz's system uses 0 and 1, like the modern binary numeral system. As a Sinophile, Leibniz was aware of the Yijing (or I-Ching) and noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry. In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition. Bell Labs thus authorized a full research programme in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers.
In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs. Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. The following sequences of symbols could all be interpreted as the binary numeric value of 667: The numeric value represented in each case is dependent upon the value assigned to each symbol. In a computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use. In keeping with customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed or suffixed in order to indicate their base, or radix. The following notations are equivalent: When spoken, binary numerals are usually read digit-by-digit, in order to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit, and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit.
Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference. Decimal counting uses the ten symbols 0 through 9. Counting primarily involves incremental manipulation of the "low-order" digit, or the rightmost digit, often called the "first digit". When the available symbols for the low-order digit are exhausted, the next-higher-order digit (located one position to the left) is incremented, and counting in the low-order digit starts over at 0. In decimal, counting proceeds like so: After a digit reaches 9, an increment resets it to 0 but also causes an increment of the next digit to the left. In binary, counting follows a similar procedure, except that only the two symbols 0 and 1 are used. Thus, after a digit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next digit to the left: Since binary is a base-2 system, each digit represents an increasing power of 2, with the rightmost digit representing 2^0, the next representing 2^1, then 2^2, and so on. To determine the decimal representation of a binary number, simply take the sum of the products of the binary digits and the powers of 2 which they represent. For example, the binary number 100101 is converted to decimal form as follows: 100101₂ = 1 × 2^5 + 0 × 2^4 + 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 32 + 4 + 1 = 37₁₀. To create higher numbers, additional digits are simply added to the left side of the binary representation. Fractions in binary only terminate if the denominator has 2 as the only prime factor. As a result, 1/10 does not have a finite binary representation, and this causes 10 × 0.1 not to be precisely equal to 1 in floating-point arithmetic. As an example, to interpret the binary expression for 1/3 = .010101..., this means: 1/3 = 0 × 2^-1 + 1 × 2^-2 + 0 × 2^-3 + 1 × 2^-4 + ... = 0.3125 + ...
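The digit-by-power sum just described can be sketched in Python (a minimal illustration; the function name is ours):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each binary digit times its power of 2, rightmost digit = 2^0."""
    total = 0
    for power, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** power
    return total

print(binary_to_decimal("100101"))  # 37, matching 32 + 4 + 1
```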
An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever. Arithmetic in binary is much like arithmetic in other numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals. The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying: Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36 decimal). When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well. A simplification for many binary addition problems is the Long Carry Method or Brookhouse Method of Binary Addition.
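The column-by-column carrying procedure above can be sketched in Python (an illustrative helper, not part of the original text); note the sum bit in each column is exactly (x + y + carry) mod 2, the xor rule mentioned above:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary numerals column by column, carrying to the left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    out = []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out.append(str(s % 2))   # sum bit of this column
        carry = s // 2           # carry into the next column
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("01101", "10111"))  # 100100  (13 + 23 = 36)
```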
This method is generally useful in any binary addition in which one of the numbers contains a long "string" of ones. It is based on the simple premise that under the binary system, when given a "string" of digits composed entirely of n ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s: Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 0₂ (958₁₀) and 1 0 1 0 1 1 0 0 1 1₂ (691₁₀), using the traditional carry method on the left, and the long carry method on the right: The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 1₂ (1649₁₀). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort. The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation $\lor$. The difference is that $1 \lor 1 = 1$, while $1 + 1 = 10$. Subtraction works in much the same way: Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as borrowing.
The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value. Subtracting a positive number is equivalent to adding a negative number of equal absolute value; computers typically use two's complement notation to represent negative values. This notation eliminates the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula: A − B = A + not B + 1
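The formula A − B = A + not B + 1 can be checked in Python; the 8-bit word size here is an illustrative assumption, and "not B" means bitwise inversion within that word:

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111 for an 8-bit word

def subtract(a: int, b: int) -> int:
    """Compute A - B as A + (not B) + 1, truncated to the word size."""
    return (a + (b ^ MASK) + 1) & MASK  # b ^ MASK inverts every bit of b

print(subtract(23, 13))  # 10
```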
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit and A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result. Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication: For example, the binary numbers 1011 and 1010 are multiplied as follows: Binary numbers can also be multiplied with bits after a binary point: See also Booth's multiplication algorithm. The binary multiplication table is the same as the truth table of the logical conjunction operation $\land$. Binary division is again similar to its decimal counterpart: Here, the divisor is 101₂, or 5 decimal, while the dividend is 11011₂, or 27 decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence: The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted: Thus, the quotient of 11011₂ divided by 101₂ is 101₂, as shown on the top line, while the remainder, shown on the bottom line, is 10₂. In decimal, 27 divided by 5 is 5, with a remainder of 2. Binary square root is similar to its decimal counterpart, though simpler. Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators.
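On a computer the partial-products method becomes shift-and-add: each 1 digit of B contributes A shifted left by that digit's position. A minimal Python sketch (the function name is ours):

```python
def multiply(a: int, b: int) -> int:
    """Shift-and-add multiplication: one partial product per 1-bit of b."""
    result = 0
    shift = 0
    while b:
        if b & 1:                  # this digit of B is 1: add A, shifted
            result += a << shift
        b >>= 1                    # move to the next digit of B
        shift += 1
    return result

print(bin(multiply(0b1011, 0b1010)))  # 0b1101110  (11 x 10 = 110)
```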
When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2. To convert from a base-10 integer numeral to its base-2 (binary) equivalent, the number is divided by two, and the remainder is the least-significant bit. The (integer) result is again divided by two, and its remainder is the next least significant bit. This process repeats until the quotient becomes zero. Conversion from base-2 to base-10 proceeds by applying the preceding algorithm, so to speak, in reverse. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, repeatedly double the prior value and add the next bit to produce the next value. This can be organized in a multi-column table. For example, to convert 10010101101₂ to decimal: The result is 1197₁₀. Note that the first Prior Value of 0 is simply an initial decimal value. This method is an application of the Horner scheme. The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving. In a fractional binary number such as 0.11010110101₂, the first digit is $\tfrac{1}{2}$, the second $(\tfrac{1}{2})^2 = \tfrac{1}{4}$, etc. So if there is a 1 in the first place after the radix point, then the number is at least $\tfrac{1}{2}$, and vice versa. Doubling such a number gives a result of at least 1.
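The two integer conversions described above, repeated division by two and the Horner doubling scheme, can be sketched in Python (illustrative helpers):

```python
def to_binary(n: int) -> str:
    """Decimal -> binary: each remainder is the next least-significant bit."""
    bits = []
    while n:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits)) or "0"

def from_binary(bits: str) -> int:
    """Binary -> decimal via the Horner scheme: double and add each bit."""
    value = 0
    for bit in bits:                  # most significant bit first
        value = value * 2 + int(bit)  # double prior value, add next bit
    return value

print(to_binary(1197))             # 10010101101
print(from_binary("10010101101"))  # 1197
```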
This suggests the algorithm: repeatedly double the number to be converted, record whether the result is at least 1, and then throw away the integer part. For example, $\tfrac{1}{3}$ (decimal), in binary, is: Thus the repeating decimal fraction 0.333… is equivalent to the repeating binary fraction 0.010101… . Or, for example, 0.1 (decimal), in binary, is: This is also a repeating binary fraction, 0.000110011… . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 0.1 + ... + 0.1 (10 additions) differs from 1 in floating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not. The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example: Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly—first converting ($x$ in binary) into ($x$ in hexadecimal) and then converting ($x$ in hexadecimal) into ($x$ in decimal). For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10^k, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated.
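The repeated-doubling algorithm can be sketched in Python; `fractions.Fraction` is used here to keep the arithmetic exact, which is an implementation choice, not part of the original text:

```python
from fractions import Fraction

def fraction_bits(x: Fraction, places: int) -> str:
    """Double, record whether the result reached 1, drop the integer part."""
    bits = []
    for _ in range(places):
        x *= 2
        bits.append("1" if x >= 1 else "0")
        if x >= 1:
            x -= 1  # throw away the integer part
    return "0." + "".join(bits)

print(fraction_bits(Fraction(1, 3), 8))   # 0.01010101
print(fraction_bits(Fraction(1, 10), 8))  # 0.00011001
```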
Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10^k and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion. Binary may be converted to and from hexadecimal somewhat more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2^4, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the table to the right. To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits: To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra 0 bits at the left (called padding). For example: To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values: Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2^3, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth. Converting from octal to binary proceeds in the same fashion as it does for hexadecimal: And from binary to octal: And from octal to decimal: Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ means: 11.01₂ = 1 × 2^1 + 1 × 2^0 + 0 × 2^-1 + 1 × 2^-2 = 2 + 1 + 0.25, for a total of 3.25 decimal.
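The four-bit grouping and left-padding described above can be sketched in Python (an illustrative helper, not a standard-library function):

```python
def bin_to_hex(bits: str) -> str:
    """Pad on the left to a multiple of 4 bits, then map each group."""
    pad = (-len(bits)) % 4
    bits = "0" * pad + bits
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "x") for g in groups)

print(bin_to_hex("1101011"))  # 6b  (pads to 0110 1011)
```

The same idea with three-bit groups gives the octal conversion.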
All dyadic rational numbers $\frac{p}{2^a}$ have a terminating binary numeral—the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representations, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely; for instance, 1/3 = 0.010101…₂. The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111… is the sum of the geometric series 2^-1 + 2^-2 + 2^-3 + ... which is 1. Binary numerals which neither terminate nor recur represent irrational numbers.
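The geometric-series fact invoked above can be written out explicitly:

```latex
0.111\ldots_2 \;=\; \sum_{k=1}^{\infty} 2^{-k}
            \;=\; \frac{2^{-1}}{1 - 2^{-1}} \;=\; 1
```

This is why, for example, 0.1₂ and 0.0111…₂ denote the same value, exactly as 1 and 0.999… do in decimal.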

An electronic calculator is a small, portable, often inexpensive electronic device used to perform both basic and complex operations of arithmetic. The first solid state electronic calculator was created in the 1960s, building on the extensive history of tools such as the abacus, developed around 2000 BC, and the mechanical calculator, developed in the 17th century. It was developed in parallel with the analog computers of the day. Pocket sized devices became available in the 1970s, especially after the invention of the microprocessor developed by Intel for the Japanese calculator company Busicom. Modern electronic calculators vary from cheap, give-away, credit-card sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as integrated circuits reduced their size and cost. By the end of that decade, calculator prices had fallen to a point where a basic calculator was affordable to most, and they became common in schools. Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and calculator functions are included in almost all PDA-type devices (save a few dedicated address book and dictionary devices). In addition to general purpose calculators, there are those designed for specific markets; for example, there are scientific calculators which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher dimensional Euclidean space. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. This diminished to less than 0.05% by 2007. Modern electronic calculators contain a keyboard with buttons for digits and arithmetical operations. Some even contain 00 and 000 buttons to make large numbers easier to enter.
Most basic calculators assign only one digit or operation to each button. However, in more specialized calculators, a button can perform multiple functions, depending on key combinations or the current reckoning mode. Calculators usually have liquid crystal displays as output in place of historical vacuum fluorescent displays. See more details in technical improvements. Fractions are displayed as decimal approximations, rounded to the precision of the display. Also, some fractions can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers. Calculators also have the ability to store numbers into memory. Basic types store only one number at a time. More specialized types are able to store many numbers represented in variables. The variables can also be used for constructing formulae. Some models have the ability to extend memory capacity to store more numbers; the extended address is referred to as an array index. Calculators are powered by batteries, solar cells or mains electricity (for old models), turning on with a switch or button. Some models even have no turn-off button, but they provide some way of switching off, for example, leaving no operation for a moment, covering solar cell exposure, or closing their lid. Crank-powered calculators were also common in the early computer era. In most countries, students use calculators for schoolwork. There was some initial resistance to the idea out of fear that basic arithmetic skills would suffer. There remains disagreement about the importance of the ability to perform calculations "in the head", with some curricula restricting calculator use until a certain level of proficiency has been obtained, while others concentrate more on teaching estimation techniques and problem-solving.
Research suggests that inadequate guidance in the use of calculating tools can restrict the kind of mathematical thinking that students engage in. Others have argued that calculator use can even cause core mathematical skills to atrophy, or that such use can prevent understanding of advanced algebraic concepts. In December 2011 the UK's Minister of State for Schools, Nick Gibb, voiced concern that children can become "too dependent" on the use of calculators. As a result, the use of calculators is to be included as part of a review of the National Curriculum. In general, a basic electronic calculator consists of the following components: A basic explanation as to how calculations are performed in a simple 4-function calculator: To perform the calculation 25 + 9, one presses keys in the following sequence on most calculators: 2 5 + 9 =. All other functions are usually carried out using repeated additions. Where calculators have additional functions, such as square root or trigonometric functions, software algorithms are required to produce high-precision results. Sometimes significant design effort is required to fit all the desired functions in the limited memory space available in the calculator chip, with acceptable calculation time. The fundamental difference between a calculator and computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions such as addition, multiplication, and logarithms built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functionality, sometimes with support for programming languages such as RPL or TI-BASIC.
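The key sequence 2 5 + 9 = can be modeled by a toy dispatch loop in Python; this is a hypothetical sketch of the accumulator-and-pending-operation idea, not any particular calculator's firmware:

```python
def press_keys(keys: str) -> int:
    """Process a space-separated key sequence like '2 5 + 9 ='."""
    accumulator = 0   # running result
    entry = 0         # number currently being keyed in
    pending = None    # deferred operation awaiting its second operand
    for key in keys.split():
        if key.isdigit():
            entry = entry * 10 + int(key)  # shift the new digit in
        else:
            accumulator = pending(accumulator, entry) if pending else entry
            entry = 0
            if key == "+":
                pending = lambda a, b: a + b
            elif key == "-":
                pending = lambda a, b: a - b
            elif key == "=":
                pending = None
    return accumulator

print(press_keys("2 5 + 9 ="))  # 34
```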
Typically the user buys the least expensive model having a specific feature set, but does not care much about speed (since speed is constrained by how fast the user can press the buttons). Thus designers of calculators strive to minimize the number of logic elements on the chip, not the number of clock cycles needed to do a computation. For instance, instead of a hardware multiplier, a calculator might implement floating point mathematics with code in ROM, and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require hardware floating-point. Bit serial logic designs are more common in calculators whereas bit parallel designs dominate general-purpose computers, because a bit serial design minimizes chip complexity, but takes many more clock cycles. (Again, the line blurs with high-end calculators, which use processor chips associated with computer and embedded systems design, particularly the Z80, MC68000, and ARM architectures, as well as some custom designs specifically made for the calculator market.) The first known tool used to aid arithmetic calculations was the abacus, devised by Sumerians and Egyptians before 2000 BC. Except for the Antikythera mechanism, an astronomical device seemingly out of its time, the development of computing tools began at the start of the 17th century: the geometric-military compass by Galileo, logarithms and Napier's bones by Napier, and the slide rule by Edmund Gunter. In 1642 Blaise Pascal invented the mechanical calculator, a device that would eventually perform all four arithmetic operations without relying on human intelligence. Pascal's calculator could add and subtract two numbers directly and multiply and divide by repetition. He was followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator, inventing in the process his Leibniz wheel, but who couldn't design a fully operational machine.
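As a sketch of why CORDIC suits calculator hardware: each iteration needs only shifts, adds, and one small angle table. The Python below uses floating point purely for readability, where a real chip would use fixed-point; all names are illustrative:

```python
import math

ITER = 24
ANGLES = [math.atan(2 ** -i) for i in range(ITER)]  # small ROM table

# Precompute the constant gain correction K = prod 1/sqrt(1 + 2^-2i).
K = 1.0
for i in range(ITER):
    K /= math.sqrt(1 + 2 ** (-2 * i))

def cordic(theta: float):
    """Rotation-mode CORDIC: drive the residual angle z toward zero."""
    x, y, z = K, 0.0, theta
    for i in range(ITER):
        d = 1 if z >= 0 else -1
        # Each step is a shift-and-add "pseudo-rotation" by atan(2^-i).
        x, y = x - d * y * 2 ** -i, y + d * x * 2 ** -i
        z -= d * ANGLES[i]
    return x, y  # approximately (cos theta, sin theta)

c, s = cordic(math.pi / 6)
print(round(c, 4), round(s, 4))  # 0.866 0.5
```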
There were also five unsuccessful attempts to design a calculating clock in the 17th century. The 18th century saw the arrival of some interesting improvements, first by Poleni with the first fully functional calculating clock and four-operation machine, but these machines were almost always one of a kind. It was not until the 19th century and the Industrial Revolution that real developments began to occur. Although machines capable of performing all four arithmetic functions existed prior to the 19th century, the refinement of manufacturing and fabrication processes on the eve of the Industrial Revolution made large-scale production of more compact and modern units possible. The Arithmometer, invented in 1820 as a four-operation mechanical calculator, was released to production in 1851 as an adding machine and became the first commercially successful unit; forty years later, by 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two arithmometer clone makers (Burkhardt, Germany, 1878 and Layton, UK, 1883), and Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers. It wasn't until 1902 that the familiar push-button user interface was developed, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States. The Curta calculator was developed in 1948 and, although costly, became popular for its portability. This purely mechanical hand-held device could do addition, subtraction, multiplication and division. By the early 1970s electronic pocket calculators ended the manufacture of mechanical calculators, although the Curta remains a popular collectable item. The first mainframe computers, using firstly vacuum tubes and later transistors in the logic circuits, appeared in the 1940s and 1950s. This technology was to provide a stepping stone to the development of electronic calculators.
The Casio Computer Company, in Japan, released the Model 14-A calculator in 1957, which was the world's first all-electric, relatively compact calculator. It did not use electronic logic but was based on relay technology, and was built into a desk. In October 1961 the world's first all-electronic desktop calculator, the British Bell Punch/Sumlock Comptometer ANITA (A New Inspiration To Arithmetic/Accounting) was announced. This machine used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. Two models were displayed, the Mk VII for continental Europe and the Mk VIII for Britain and the rest of the world, both for delivery from early 1962. The Mk VII was a slightly earlier design with a more complicated mode of multiplication, and was soon dropped in favour of the simpler Mark VIII. The ANITA had a full keyboard, similar to mechanical comptometers of the time, a feature that was unique to it and the later Sharp CS-10A among electronic calculators. Bell Punch had been producing key-driven mechanical calculators of the comptometer type under the names "Plus" and "Sumlock", and had realised in the mid-1950s that the future of calculators lay in electronics. They employed the young graduate Norbert Kitz, who had worked on the early British Pilot ACE computer project, to lead the development. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology of the ANITA was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a 5-inch (13 cm) CRT, and introduced reverse Polish notation (RPN) to the calculator market for a price of $2200, which was about three times the cost of an electromechanical calculator of the time. Like Bell Punch, Friden was a manufacturer of mechanical calculators that had decided that the future lay in electronics.
In 1964 more all-transistor electronic calculators were introduced: Sharp introduced the CS-10A, which weighed 25 kg (55 lb) and cost 500,000 yen (~US$2500), and Industria Macchine Elettroniche of Italy introduced the IME 84, to which several extra keyboard and display units could be connected so that several people could make use of it (but apparently not at the same time). There followed a series of electronic calculator models from these and other manufacturers, including Canon, Mathatronics, Olivetti, SCM (Smith-Corona-Marchant), Sony, Toshiba, and Wang. The early calculators used hundreds of germanium transistors, which were cheaper than silicon transistors, on multiple circuit boards. Display types used were CRT, cold-cathode Nixie tubes, and filament lamps. Memory technology was usually based on the delay line memory or the magnetic core memory, though the Toshiba "Toscal" BC-1411 appears to have used an early form of dynamic RAM built from discrete components. Already there was a desire for smaller and less power-hungry machines. The Olivetti Programma 101 was introduced in late 1965; it was a stored program machine which could read and write magnetic cards and displayed results on its built-in printer. Memory, implemented by an acoustic delay line, could be partitioned between program steps, constants, and data registers. Programming allowed conditional testing and programs could also be overlaid by reading from magnetic cards. It is regarded as the first personal computer produced by a company (that is, a desktop electronic calculating machine programmable by non-specialists for personal use). The Olivetti Programma 101 won many industrial design awards. The Monroe Epic programmable calculator came on the market in 1967. A large, printing, desk-top unit, with an attached floor-standing logic tower, it could be programmed to perform many computer-like functions. 
However, the only branch instruction was an implied unconditional branch (GOTO) at the end of the operation stack, returning the program to its starting instruction. Thus, it was not possible to include any conditional branch (IF-THEN-ELSE) logic. During this era, the absence of the conditional branch was sometimes used to distinguish a programmable calculator from a computer. The first handheld calculator, a prototype called "Cal Tech", was developed by Texas Instruments in 1967. It could add, multiply, subtract, and divide, and its output device was a paper tape. The electronic calculators of the mid-1960s were large and heavy desktop machines due to their use of hundreds of transistors on several circuit boards with a large power consumption that required an AC power supply. There were great efforts to put the logic required for a calculator into fewer and fewer integrated circuits (chips) and calculator electronics was one of the leading edges of semiconductor development. U.S. semiconductor manufacturers led the world in Large Scale Integration (LSI) semiconductor development, squeezing more and more functions into individual integrated circuits. This led to alliances between Japanese calculator manufacturers and U.S. semiconductor companies: Canon Inc. with Texas Instruments, Hayakawa Electric (later known as Sharp Corporation) with North-American Rockwell Microelectronics, Busicom with Mostek and Intel, and General Instrument with Sanyo. By 1970, a calculator could be made using just a few chips of low power consumption, allowing portable models powered from rechargeable batteries. The first portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 "Mini Calculator", the Canon Pocketronic, and the Sharp QT-8B "micro Compet". 
The Canon Pocketronic was a development of the "Cal-Tech" project which had been started at Texas Instruments in 1965 as a research project to produce a portable calculator. The Pocketronic has no traditional display; numerical output is on thermal paper tape. As a result of the "Cal-Tech" project, Texas Instruments was granted master patents on portable calculators. Sharp put great effort into size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed about 455 grams (one pound), had a vacuum fluorescent display, rechargeable NiCad batteries, and initially sold for $395. However, the efforts in integrated circuit development culminated in the introduction in early 1971 of the first "calculator on a chip", the MK6010 by Mostek, followed by Texas Instruments later in the year. Although these early hand-held calculators were very expensive, these advances in electronics, together with developments in display technology (such as the vacuum fluorescent display, LED, and LCD), led within a few years to the cheap pocket calculator available to all. In 1971 Pico Electronics and General Instrument also introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe Royal Digital III calculator. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. Pico and GI went on to have significant success in the burgeoning handheld calculator market. The first truly pocket-sized electronic calculator was the Busicom LE-120A "HANDY", which was marketed early in 1971. Made in Japan, this was also the first calculator to use an LED display, the first hand-held calculator to use a single integrated circuit (then proclaimed as a "calculator on a chip"), the Mostek MK6010, and the first electronic calculator to run off replaceable batteries.
Using four AA-size cells the LE-120A measures 4.9 × 2.8 × 0.9 in (124 × 72 × 24 mm). The first American-made pocket-sized calculator, the Bowmar 901B (popularly referred to as The Bowmar Brain), measuring 5.2 × 3.0 × 1.5 in (131 × 77 × 37 mm), came out in the Autumn of 1971, with four functions and an eight-digit red LED display, for $240, while in August 1972 the four-function Sinclair Executive became the first slimline pocket calculator, measuring 5.4 × 2.2 × 0.35 in (138 × 56 × 9 mm) and weighing 2.5 oz (70 g). It retailed for around $150 (£79). By the end of the decade, similar calculators were priced at less than $10 (£5). The first Soviet-made pocket-sized calculator, the "Elektronika B3-04", was developed by the end of 1973 and sold at the beginning of 1974. One of the first low-cost calculators was the Sinclair Cambridge, launched in August 1973. It retailed for £29.95, or £5 less in kit form. The Sinclair calculators were successful because they were far cheaper than the competition; however, their design was flawed and their accuracy in some functions was questionable. The scientific programmable models were particularly poor in this respect, with the programmability coming at a heavy price in transcendental-function accuracy. Meanwhile, Hewlett-Packard (HP) had been developing a pocket calculator. Launched in early 1972, it was unlike the other basic four-function pocket calculators then available in that it was the first pocket calculator with scientific functions that could replace a slide rule. The $395 HP-35, along with nearly all later HP engineering calculators, used reverse Polish notation (RPN), also called postfix notation. A calculation like "8 plus 5" is, using RPN, performed by pressing "8", "Enter↑", "5", and "+"; instead of the algebraic infix notation: "8", "+", "5", "=". The first Soviet scientific pocket-sized calculator, the "B3-18", was completed by the end of 1975.
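The RPN keystroke sequence described above can be modeled as a simple stack machine. The sketch below is illustrative only (it is not the HP-35's actual firmware, and the function name is hypothetical): each number is pushed onto a stack, and each operator pops two operands and pushes the result.

```python
def rpn(tokens):
    """Evaluate a sequence of RPN tokens ("8", "5", "+") with a stack."""
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand entered
            a = stack.pop()   # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(rpn(["8", "5", "+"]))  # "8", Enter, "5", "+" → 13.0
```

Note how the "Enter↑" key of the HP-35 corresponds to pushing the first operand onto the stack before the second is typed.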
In 1973, Texas Instruments (TI) introduced the SR-10 (SR signifying slide rule), an algebraic-entry pocket calculator using scientific notation, for $150. Shortly afterwards, the SR-11 featured an additional key for entering "π". It was followed the next year by the SR-50, which added log and trig functions to compete with the HP-35, and in 1977 by the mass-marketed TI-30 line, which is still produced. In 1978 a new company, Calculated Industries, came onto the scene, focusing on specific markets. Their first calculator, the Loan Arranger (1978), was a pocket calculator marketed to the real-estate industry with preprogrammed functions to simplify the process of calculating payments and future values. In 1985, CI launched a calculator for the construction industry called the Construction Master, which came preprogrammed with common construction calculations (such as angles, stairs, roofing math, pitch, rise, run, and feet-inch fraction conversions). This would be the first in a line of construction-related calculators. The first desktop programmable calculators were produced in the mid-1960s by Mathatronics and Casio (AL-1000). These machines were, however, very heavy and expensive. The first programmable pocket calculator was the HP-65, in 1974; it had a capacity of 100 instructions, and could store and retrieve programs with a built-in magnetic card reader. Two years later the HP-25C introduced continuous memory, i.e. programs and data were retained in CMOS memory during power-off. In 1979, HP released the first alphanumeric, programmable, expandable calculator, the HP-41C. It could be expanded with RAM (memory) and ROM (software) modules, as well as peripherals like bar code readers, microcassette and floppy disk drives, paper-roll thermal printers, and miscellaneous communication interfaces (RS-232, HP-IL, HP-IB). The first Soviet programmable desktop calculator, the ISKRA 123, powered by the power grid, was released at the beginning of the 1970s.
The first Soviet pocket battery-powered programmable calculator, the Elektronika "B3-21", was developed by the end of 1977 and released at the beginning of 1978. The successor of the B3-21, the Elektronika B3-34, wasn't backward compatible with the B3-21, even though it kept the reverse Polish notation (RPN). Thus the B3-34 defined a new command set, which was later used in a series of programmable Soviet calculators. Despite very limited capabilities (98 bytes of instruction memory and about 19 stack and addressable registers), people managed to write all kinds of programs for them, including adventure games and libraries of calculus-related functions for engineers. Hundreds, perhaps thousands, of programs were written for these machines, from practical scientific and business software, which were used in real-life offices and labs, to fun games for children. The Elektronika MK-52 calculator (using the extended B3-34 command set, and featuring internal EEPROM memory for storing programs and an external interface for EEPROM cards and other peripherals) was used in the Soviet spacecraft program (for the Soyuz TM-7 flight) as a backup for the onboard computer. This series of calculators was also noted for a large number of highly counter-intuitive, mysterious, undocumented features, somewhat similar to the "synthetic programming" of the American HP-41, which were exploited by applying normal arithmetic operations to error messages, jumping to non-existent addresses, and other techniques. A number of respected monthly publications, including the popular science magazine "Наука и жизнь" ("Science and Life"), featured special columns dedicated to optimization techniques for calculator programmers and updates on undocumented features for hackers, which grew into a whole esoteric science with many branches, known as "yeggogology" ("еггогология"). The error message on those calculators appears as the Russian word "ЕГГОГ" ("YEGGOG") which, unsurprisingly, is translated as "Error".
A similar hacker culture in the USA revolved around the HP-41, which was also noted for a large number of undocumented features and was much more powerful than B3-34. Through the 1970s the hand-held electronic calculator underwent rapid development. The red LED and blue/green vacuum fluorescent displays consumed a lot of power and the calculators either had a short battery life (often measured in hours, so rechargeable nickel-cadmium batteries were common) or were large so that they could take larger, higher capacity batteries. In the early 1970s liquid crystal displays (LCDs) were in their infancy and there was a great deal of concern that they only had a short operating lifetime. Busicom introduced the Busicom LE-120A "HANDY" calculator, the first pocket-sized calculator and the first with an LED display, and announced the Busicom LC with LCD display. However, there were problems with this display and the calculator never went on sale. The first successful calculators with LCDs were manufactured by Rockwell International and sold from 1972 by other companies under such names as: Dataking LC-800, Harden DT/12, Ibico 086, Lloyds 40, Lloyds 100, Prismatic 500 (aka P500), Rapid Data Rapidman 1208LC. The LCDs were an early form using the Dynamic Scattering Mode DSM with the numbers appearing as bright against a dark background. To present a high-contrast display these models illuminated the LCD using a filament lamp and solid plastic light guide, which negated the low power consumption of the display. These models appear to have been sold only for a year or two. A more successful series of calculators using a reflective DSM-LCD was launched in 1972 by Sharp Inc with the Sharp EL-805, which was a slim pocket calculator. This, and another few similar models, used Sharp's "COS" (Calculator On Substrate) technology. An extension of one glass plate needed for the Liquid Crystal Display was used as a substrate to mount the required chips based on a new hybrid technology. 
The "COS" technology may have been too expensive since it was only used in a few models before Sharp reverted to conventional circuit boards. In the mid-1970s the first calculators appeared with field-effect, Twisted Nematic TN LCDs with dark numerals against a grey background, though the early ones often had a yellow filter over them to cut out damaging ultraviolet rays. The advantage of LCDs is that they are passive light modulators reflecting light, which require much less power than light-emitting displays such as LEDs or VFDs. This led the way to the first credit-card-sized calculators, such as the Casio Mini Card LC-78 of 1978, which could run for months of normal use on button cells. There were also improvements to the electronics inside the calculators. All of the logic functions of a calculator had been squeezed into the first "Calculator on a chip" integrated circuits in 1971, but this was leading edge technology of the time and yields were low and costs were high. Many calculators continued to use two or more integrated circuits (ICs), especially the scientific and the programmable ones, into the late 1970s. The power consumption of the integrated circuits was also reduced, especially with the introduction of CMOS technology. Appearing in the Sharp "EL-801" in 1972, the transistors in the logic cells of CMOS ICs only used any appreciable power when they changed state. The LED and VFD displays often required additional driver transistors or ICs, whereas the LCD displays were more amenable to being driven directly by the calculator IC itself. With this low power consumption came the possibility of using solar cells as the power source, realised around 1978 by such calculators as the Royal Solar 1, Sharp EL-8026, and Teal Photon. At the beginning of the 1970s hand-held electronic calculators were very expensive, costing two or three weeks' wages, and so were a luxury item. 
The high price was due to their construction requiring many mechanical and electronic components which were expensive to produce, and production runs that were not very large. Many companies saw that there were good profits to be made in the calculator business with the margin on these high prices. However, the cost of calculators fell as components and their production techniques improved, and the effects of economies of scale were felt. By 1976 the cost of the cheapest four-function pocket calculator had dropped to a few dollars, about one-twentieth of the cost five years earlier. The consequences of this were that the pocket calculator was affordable, and that it was now difficult for the manufacturers to make a profit from calculators, leading to many companies dropping out of the business or closing down altogether. The companies that survived making calculators tended to be those with high outputs of higher quality calculators, or those producing high-specification scientific and programmable calculators. The first calculator capable of symbolic computation was the HP-28C, released in 1987. It was able, for example, to solve quadratic equations symbolically. The first graphing calculator was the Casio FX-7000G, released in 1985. The two leading manufacturers, HP and TI, released increasingly feature-laden calculators during the 1980s and 1990s. At the turn of the millennium, the line between a graphing calculator and a handheld computer was not always clear, as some very advanced calculators such as the TI-89, the Voyage 200 and the HP-49G could differentiate and integrate functions, solve differential equations, run word processing and PIM software, and connect by wire or IR to other calculators/computers. The HP 12c financial calculator is still produced. It was introduced in 1981 and is still being made with few changes. The HP 12c featured the reverse Polish notation mode of data entry.
In 2003 several new models were released, including an improved version of the HP 12c, the "HP 12c platinum edition", which added more memory, more built-in functions, and the addition of the algebraic mode of data entry. Calculated Industries competed with the HP 12c in the mortgage and real estate markets by differentiating the key labeling, changing the "I", "PV" and "FV" keys to easier labeling terms such as "Int", "Term" and "Pmt", and not using reverse Polish notation. However, CI's more successful calculators involved a line of construction calculators, which evolved and expanded from the 1990s to the present. According to Mark Bollman, a mathematics and calculator historian and associate professor of mathematics at Albion College, the "Construction Master is the first in a long and profitable line of CI construction calculators" which carried them through the 1980s, 1990s, and to the present. Personal computers often come with a calculator utility program that emulates the appearance and functionality of a calculator, using the graphical user interface to portray a calculator. One such example is Windows Calculator. Most personal digital assistants (PDAs) and smartphones also have such a feature. These are some of the manufacturers which made a notable contribution to calculator development:

Shape factors are dimensionless quantities used in image analysis and microscopy that numerically describe the shape of a particle, independent of its size. Shape factors are calculated from measured dimensions, such as diameter, chord lengths, area, perimeter, centroid, moments, etc. The dimensions of the particles are usually measured from two-dimensional cross-sections or projections, as in a microscope field, but shape factors also apply to three-dimensional objects. The particles could be the grains in a metallurgical or ceramic microstructure, or the microorganisms in a culture, for example. The dimensionless quantities often represent the degree of deviation from an ideal shape, such as a circle, sphere or equilateral polyhedron. Shape factors are often normalized, that is, the value ranges from zero to one. A shape factor equal to one usually represents an ideal case or maximum symmetry, such as a circle, sphere, square or cube. The normalized aspect ratio varies from approaching zero for a very elongated particle, such as a grain in a cold-worked metal, to near unity for an equiaxed grain. The reciprocal of the normalized aspect ratio is also used, such that the AR varies from one to approaching infinity. Another very common shape factor is the circularity, a function of the perimeter P and the area A: fcirc = 4πA/P². The circularity of a circle is 1, and much less than one for a starfish footprint. The reciprocal of the circularity equation is also used, such that fcirc varies from one for a circle to infinity. The less-common elongation shape factor is defined as the square root of the ratio of the two second moments of the particle around its principal axes. The compactness shape factor is a function of the polar second moment of a particle and that of a circle of equal area A. The fcomp of a circle is one, and much less than one for the cross-section of an I-beam.
The waviness shape factor of the perimeter is the ratio of the convex portion Pcvx of the perimeter to the total perimeter. Some properties of metals and ceramics, such as fracture toughness, have been linked to grain shapes. Greenland, the largest island in the world, has an area of 2,166,086 km²; a coastline (perimeter) of 39,330 km; a north-south length of 2670 km; and an east-west length of 1290 km. The aspect ratio of Greenland is 2670/1290 ≈ 2.07, and its circularity is 4π × 2,166,086/39,330² ≈ 0.0176. The aspect ratio agrees with an eyeball estimate on a globe. Such an estimate on a flat map would be less accurate due to the distortion of high-latitude projections. The circularity is deceptively low, due to the fjords that give Greenland a very jagged coastline. A low value of circularity does not necessarily indicate a lack of symmetry, and shape factors are not limited to microscopic objects.
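The Greenland figures can be checked numerically from the definitions given above. This short sketch (the function names are illustrative) uses the ≥ 1 form of the aspect ratio (longer axis over shorter) and the circularity fcirc = 4πA/P²:

```python
import math

def aspect_ratio(length, width):
    """Aspect ratio in the >= 1 form: longer dimension over shorter."""
    return max(length, width) / min(length, width)

def circularity(area, perimeter):
    """Circularity f_circ = 4*pi*A / P**2 (equals 1 for a circle)."""
    return 4 * math.pi * area / perimeter ** 2

# Greenland: A = 2,166,086 km^2, P = 39,330 km, axes 2670 km x 1290 km
print(round(aspect_ratio(2670, 1290), 2))     # 2.07
print(round(circularity(2166086, 39330), 4))  # 0.0176
```

A circle of area A has perimeter 2√(πA), for which the circularity evaluates to exactly one, confirming the normalization.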

In computing, the focus indicates the component of the graphical user interface which is selected to receive input. Text entered at the keyboard or pasted from a clipboard is sent to the component which has the focus. Moving the focus away from a specific user interface element is known as a blur event in relation to this element. Typically, the focus is withdrawn from an element by giving another element the focus. This means that focus and blur events typically both occur virtually simultaneously, but in relation to different user interface elements, one that gets the focus and one that gets blurred. The concept is similar to a cursor in a text-based environment. However, when considering a graphical interface, there is also a mouse cursor involved. Moving the mouse will typically move the mouse cursor without changing the focus. The focus can usually be changed by clicking on a component that can receive focus with the mouse. Many desktops also allow the focus to be changed with the keyboard. By convention, the tab key is used to move the focus to the next focusable component and shift + tab to the previous one. When graphical interfaces were first introduced, many computers did not have mice, so this alternative was necessary. This feature makes it easier for people that have a hard time using a mouse to use the user interface. In certain circumstances, the arrow keys can also be used to move focus. The behaviour of focus on one's desktop can be governed by policies in window management. On most mainstream user-interfaces, such as ones made by Microsoft and Apple, it is common to find a "focus follows click" policy (or "click to focus"), where one must click the mouse inside of the window for that window to gain focus. This also typically results in the window being raised above all other windows on screen. 
If a click-to-focus model such as this is being used, the current application window continues to retain focus and collect input, even if the mouse pointer is over another application window. Another common policy on UNIX systems using X11 is the "focus follows mouse" policy (or FFM), where the focus automatically follows the current placement of the pointer. The focused window is not necessarily raised; parts of it may remain below other windows. Window managers with this policy usually offer "autoraise", which raises the window when it is focused, typically after a configurable short delay. One consequence of a focus-follows-mouse policy is that no window has focus when the pointer is moved over the background with no window underneath. The sloppy-focus model is a variant of the focus-follows-mouse model. It allows input to continue to be collected by the last focused window when the mouse pointer is moved away from any window, such as over a menu bar or desktop area. Individual components may also have a cursor position. For instance, in a text editing package, the text editing window must have the focus so that text can be entered. When text is entered into the component, it will appear at the position of the text cursor, which will also normally be moveable using the mouse cursor. Which component should have the default focus, and how focus should move between components, are difficult but important problems in user interface design. Giving the wrong thing focus means that the user has to waste time moving the focus. Conversely, giving the right thing focus can significantly enhance the user experience.
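The pairing of blur and focus events described above can be sketched as a small event model. The class and method names below are illustrative inventions, not any real toolkit's API; the point is that moving focus delivers a blur event to the old component and a focus event to the new one, effectively at the same time:

```python
class Widget:
    def __init__(self, name):
        self.name = name
        self.events = []  # record of events delivered to this widget

    def on_focus(self):
        self.events.append("focus")

    def on_blur(self):
        self.events.append("blur")

class FocusManager:
    """Tracks which widget has focus; moving focus blurs the old widget."""
    def __init__(self):
        self.focused = None

    def set_focus(self, widget):
        if self.focused is widget:
            return                    # re-focusing the same widget is a no-op
        if self.focused is not None:
            self.focused.on_blur()    # blur and focus occur together...
        self.focused = widget
        if widget is not None:
            widget.on_focus()         # ...but on different widgets

fm = FocusManager()
editor, search = Widget("editor"), Widget("search")
fm.set_focus(editor)
fm.set_focus(search)    # editor is blurred as search gains focus
print(editor.events)    # ['focus', 'blur']
print(search.events)    # ['focus']
```

Real toolkits layer a focus-policy decision (click-to-focus, focus-follows-mouse) on top of exactly this kind of dispatch.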

A formula calculator is a software calculator that can perform a calculation in two steps: 1. Enter the calculation by typing it in from the keyboard. 2. Press a single button or key to see the final result. This is unlike button-operated calculators, such as the Windows calculator or the Mac OS X calculator, which require the user to perform one step for each operation, by pressing buttons to calculate all the intermediate values, before the final result is shown. In this context, a formula is also known as an expression, and so formula calculators may be called expression calculators. Also in this context, calculation is known as evaluation, and so they may be called formula evaluators, rather than calculators. Formulas as they are commonly written use infix notation for binary operators, such as addition, multiplication, division and subtraction. This notation also uses parentheses and operator-precedence rules to control the order of evaluation. Formulas may also contain non-binary operations, such as unary negation and functions. Once a formula is entered, a formula calculator applies these rules automatically to produce the final result. The formula calculator concept can be applied to all types of calculator, including arithmetic, scientific, statistics, financial and conversion calculators. The calculation can be typed or pasted into an edit box of: • A software package that runs on a computer, for example as a dialog box. • An on-line formula calculator hosted on a web site. It can also be entered on the command line of a programming language. Although they are not calculators in themselves, because they have a much broader feature set, many software tools have a formula-calculation capability, in that a formula can be typed in and evaluated. These include: • Spreadsheets, where a formula can be entered to calculate a cell’s content. • Databases, where a formula can be used to define the value of a calculated field in a record. Button-operated calculators are imperative, because the user must provide details of how the calculation has to be performed.
On the other hand, formula calculators are more declarative because the typed-in formula specifies what to do, and the user does not have to provide any details of the step-by-step order in which the calculation has to be performed. Declarative solutions are easier to understand than imperative solutions, and so there has been a long-term trend from imperative to declarative methods. Formula calculators are part of this trend. Many software tools for the general user, such as spreadsheets, are declarative. Formula calculators are examples of such tools. There are hybrid calculators that combine typed-in formula and button-operated calculation. For example: • Calculations can be entered entirely from the keyboard, or operations can be applied to typed-in numbers or formulas using buttons, in the same calculator. • Formulas can be constructed using buttons, rather than being entered from the keyboard. • Formula copies of button-operated calculations can be created, saved and re-loaded for application to different numbers.
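The two-step, declarative style of a formula calculator can be sketched in a few lines of Python by parsing the typed-in formula into a syntax tree and walking it. This is a minimal sketch supporting only basic arithmetic; it is not any particular product's implementation, and the function name is illustrative:

```python
import ast
import operator

# Map AST operator node types to the corresponding arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def evaluate(formula: str):
    """Evaluate an infix arithmetic formula in a single step."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported construct in formula")
    return walk(ast.parse(formula, mode="eval"))

print(evaluate("8 + 5 * (3 - 1)"))  # 18
```

The user states only *what* to compute; the order of the intermediate steps (precedence, parentheses) is worked out by the evaluator, which is exactly the imperative/declarative distinction drawn above.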

In computer science, a selection algorithm is an algorithm for finding the kth smallest number in a list (such a number is called the kth order statistic). This includes the cases of finding the minimum, maximum, and median elements. There are O(n) (worst-case linear time) selection algorithms. Selection is a subproblem of more complex problems like the nearest neighbor problem and shortest path problems. Selection can be reduced to sorting by sorting the list and then extracting the desired element. This method is efficient when many selections need to be made from a list, in which case only one initial, expensive sort is needed, followed by many cheap extraction operations. In general, this method requires O(n log n) time, where n is the length of the list (although a lower bound is possible with non-comparative sorting algorithms like radix sort and counting sort). Linear time algorithms to find minima or maxima work by iterating over the list and keeping track of the minimum or maximum element so far. Using the same ideas used in minimum/maximum algorithms, we can construct a simple, but inefficient general algorithm for finding the kth smallest or kth largest item in a list, requiring O(kn) time, which is effective when k is small. To accomplish this, we simply find the most extreme value and move it to the beginning until we reach our desired index. This can be seen as an incomplete selection sort, and it has the further advantage of leaving the k smallest (or largest) elements in sorted order at the front of the list. A general selection algorithm that is efficient in practice, but has poor worst-case performance, was conceived by the inventor of quicksort, C.A.R. Hoare, and is known as Hoare's selection algorithm or quickselect. In quicksort, there is a subprocedure called partition that can, in linear time, group a list (ranging from indices left to right) into two parts: those less than a certain element, and those greater than or equal to the element.
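The minimum-based algorithm can be written as an incomplete selection sort. A straightforward Python sketch (the function name is illustrative):

```python
def select_min_based(lst, k):
    """Return the k-th smallest element (1-based) in O(k*n) time.

    Works like an incomplete selection sort: repeatedly move the
    minimum of the unsorted tail to the front of the list, stopping
    after k passes instead of sorting the whole list.
    """
    a = list(lst)  # copy so the caller's list is untouched
    for i in range(k):
        min_index = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a[k - 1]

print(select_min_based([7, 2, 9, 4, 1], 2))  # 2
```

After the call, the first k positions of the working copy hold the k smallest elements in sorted order, which is what makes subsequent selections with smaller k free.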
The partition subprocedure rearranges the sublist about the element list[pivotIndex], placing smaller elements before it and the rest after it. In quicksort, we recursively sort both branches, leading to best-case Ω(n log n) time. However, when doing selection, we already know which partition our desired element lies in, since the pivot is in its final sorted position, with all those preceding it in an unsorted order and all those following it in an unsorted order; thus a single recursive call locates the desired element in the correct partition. Note the resemblance to quicksort: just as the minimum-based selection algorithm is a partial selection sort, this is a partial quicksort, generating and partitioning only O(log n) of its O(n) partitions. This simple procedure has expected linear performance, and, like quicksort, has quite good performance in practice. It is also an in-place algorithm, requiring only constant memory overhead, since the tail recursion can be eliminated and replaced with a loop. Like quicksort, the performance of the algorithm is sensitive to the pivot that is chosen. If bad pivots are consistently chosen, this degrades to the minimum-based selection described previously, and so can require as much as O(n²) time. David Musser describes a "median-of-3 killer" sequence that can force the well-known median-of-three pivot selection algorithm to fail with worst-case behavior (see Introselect section below). A worst-case linear algorithm for the general case of selecting the kth largest element was published by Blum, Floyd, Pratt, Rivest and Tarjan in their 1973 paper "Time bounds for selection", sometimes called BFPRT after the last names of the authors. It is based on the quickselect algorithm and is also known as the median-of-medians algorithm. Although quickselect is linear-time on average, it can require quadratic time with poor pivot choices (consider the case of pivoting around the largest element at each step). The solution to make it O(n) in the worst case is to consistently find "good" pivots.
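The partition subprocedure and the loop form of quickselect can be sketched together in Python. This sketch uses the Lomuto partition scheme with a random pivot for brevity; it is one common realization, not Hoare's original formulation:

```python
import random

def partition(a, left, right, pivot_index):
    """Group a[left..right] around a[pivot_index]; return its final index."""
    pivot = a[pivot_index]
    a[pivot_index], a[right] = a[right], a[pivot_index]  # move pivot to end
    store = left
    for i in range(left, right):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]  # move pivot into final place
    return store

def quickselect(lst, k):
    """Return the k-th smallest element (1-based); expected O(n) time."""
    a = list(lst)
    left, right, target = 0, len(a) - 1, k - 1
    while True:  # the loop replaces the tail recursion
        if left == right:
            return a[left]
        pivot_index = random.randint(left, right)
        pivot_index = partition(a, left, right, pivot_index)
        if target == pivot_index:
            return a[target]
        elif target < pivot_index:
            right = pivot_index - 1
        else:
            left = pivot_index + 1

print(quickselect([7, 2, 9, 4, 1], 3))  # 4
```

After partition returns, the pivot sits at its final sorted position, so the loop only ever descends into the one side that contains the target index.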
A good pivot is one for which we can establish that a constant proportion of elements fall both below and above it. The Select algorithm divides the list into groups of five elements. (Left-over elements are ignored for now.) Then, for each group of five, the median is calculated (an operation that can potentially be made very fast if the five values can be loaded into registers and compared). (If sorting in-place, then these medians are moved into one contiguous block in the list.) Select is then called recursively on this sublist of n/5 elements to find their true median. Finally, the "median of medians" is chosen to be the pivot. The chosen pivot is both less than and greater than half of the elements in the list of medians, which is around n/10 elements (½ × n/5) for each half. Each of these medians is itself less than 2 other elements and greater than 2 other elements within its own group of five. Hence, the pivot is greater than at least 3(n/10) elements (the n/10 smaller medians plus 2 elements from each of their groups), and by the same argument less than at least another 3(n/10) elements. Thus the chosen median splits the elements somewhere between 30%/70% and 70%/30%, which assures worst-case linear behavior of the algorithm. To visualize: (red = "(one of the two possible) median of medians", gray = "number < red", white = "number > red") 5-tuples are shown here sorted by median, for clarity. Sorting the tuples is not necessary because we only need the median for use as the pivot element. Note that all elements above/left of the red (30% of the 100 elements) are less, and all elements below/right of the red (another 30% of the 100 elements) are greater.
The median-calculating recursive call does not exceed worst-case linear behavior, because the list of medians is 20% of the size of the original list, while the other recursive call recurses on at most 70% of the list, making the running time T(n) ≤ T(n/5) + T(7n/10) + c·n. The c·n term accounts for the O(n) partitioning work (we visit each element a constant number of times, in order to form the n/5 groups and take each median in O(1) time). From this recurrence, using induction, one can easily show that T(n) ≤ 10c·n ∈ O(n).
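The induction can be sketched as follows, with the guess T(n) ≤ 10cn (the constant 10 is one convenient choice, not the only one):

```latex
\begin{align*}
T(n) &\le T(n/5) + T(7n/10) + cn \\
     &\le 10c \cdot \tfrac{n}{5} + 10c \cdot \tfrac{7n}{10} + cn
       && \text{(inductive hypothesis)} \\
     &= 2cn + 7cn + cn = 10cn \in O(n).
\end{align*}
```

The argument works precisely because 1/5 + 7/10 < 1, so the recursive work shrinks geometrically.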
Although this approach optimizes quite well, it is typically outperformed in practice by the expected-linear algorithm with random pivot choices. The median-of-medians algorithm can be used to construct a worst-case O(n log n) quicksort algorithm, by using it to find the median at every step. David Musser's well-known introsort achieves practical performance comparable to quicksort while preserving O(n log n) worst-case behavior by creating a hybrid of quicksort and heapsort. In the same paper, Musser introduced an "introspective selection" algorithm, popularly called introselect, which combines Hoare's algorithm with the worst-case linear algorithm described above to achieve worst-case linear selection with performance similar to Hoare's algorithm. It works by optimistically starting out with Hoare's algorithm and only switching to the worst-case linear algorithm if it recurses too many times without making sufficient progress. Simply limiting the recursion to constant depth is not good enough, since this would make the algorithm switch on all sufficiently large lists. Musser discusses a couple of simple approaches; both limit the recursion depth to k ⌈log n⌉ = O(log n) and the total running time to O(n). The paper suggested that more research on introselect was forthcoming, but the author retired in 2007 without having published any such further research. One of the advantages of the sort-and-index approach, as mentioned, is its ability to amortize the sorting cost over many subsequent selections. However, sometimes the number of selections that will be done is not known in advance, and may be either small or large. In these cases, we can adapt the algorithms given above to simultaneously select an element while partially sorting the list, thus accelerating future selections. Both the selection procedure based on minimum-finding and the one based on partitioning can be seen as a form of partial sort.
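The introspective idea, optimistic fast path plus a depth-capped fallback, can be sketched as below. This is an assumption-laden illustration, not Musser's actual procedure: the fallback here is plain sorting (O(n log n)), standing in for the worst-case linear median-of-medians algorithm, and the depth cap of roughly 2 log₂ n is one of several reasonable choices.

```python
import random

def introselect(lst, k, depth_limit=None):
    """k-th smallest element via random-pivot partitioning, with a recursion
    depth cap. When the cap is exhausted, fall back to a guaranteed method
    (plain sorting here; a real introselect would use median of medians)."""
    if depth_limit is None:
        depth_limit = 2 * max(1, len(lst)).bit_length()  # ~ 2 log2 n
    if depth_limit == 0 or len(lst) <= 5:
        return sorted(lst)[k]  # guaranteed fallback path
    pivot = random.choice(lst)  # optimistic fast path
    lows   = [x for x in lst if x < pivot]
    pivots = [x for x in lst if x == pivot]
    if k < len(lows):
        return introselect(lows, k, depth_limit - 1)
    if k < len(lows) + len(pivots):
        return pivot
    highs = [x for x in lst if x > pivot]
    return introselect(highs, k - len(lows) - len(pivots), depth_limit - 1)
```

A logarithmic cap, rather than a constant one, is what keeps the fallback from triggering on every sufficiently large input, as the text notes.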
The minimum-based algorithm sorts the list up to the given index, and so clearly speeds up future selections, especially of smaller indexes. The partition-based algorithm does not achieve the same behaviour automatically, but can be adapted to remember its previous pivot choices and reuse them wherever possible, avoiding costly partition operations, particularly the top-level one. The list becomes gradually more sorted as more partition operations are done incrementally; no pivots are ever "lost". If desired, this same pivot list could be passed on to quicksort to reuse, again avoiding many costly partition operations. Given an unorganized list of data, linear time (Ω(n)) is required to find the minimum element, because we have to examine every element (otherwise, we might miss it). If we organize the list, for example by keeping it sorted at all times, then selecting the kth largest element is trivial, but then insertion requires linear time, as do other operations such as combining two lists. The strategy to find an order statistic in sublinear time is to store the data in an organized fashion using suitable data structures that facilitate the selection. Two such data structures are tree-based structures and frequency tables. When only the minimum (or maximum) is needed, a good approach is to use a heap, which is able to find the minimum (or maximum) element in constant time, while all other operations, including insertion, are O(log n) or better. More generally, a self-balancing binary search tree can easily be augmented to make it possible to both insert an element and find the kth largest element in O(log n) time. We simply store in each node a count of how many descendants it has, and use this to determine which path to follow. The information can be updated efficiently since adding a node only affects the counts of its O(log n) ancestors, and tree rotations only affect the counts of the nodes involved in the rotation. 
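The augmented-tree idea above can be sketched minimally as follows, assuming a plain (unbalanced) binary search tree with a subtree-size count in each node. A production version would augment a self-balancing tree (e.g. a red-black tree) so that the O(log n) bounds actually hold; here the operations are O(height).

```python
class Node:
    """BST node augmented with the size of its subtree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1  # nodes in this subtree, including self

def insert(root, key):
    """Insert key, updating subtree sizes along the path."""
    if root is None:
        return Node(key)
    root.size += 1  # the new node lands somewhere below us
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def kth_smallest(root, k):
    """k = 0 for the minimum. The size counts tell us which path to follow."""
    left_size = root.left.size if root.left else 0
    if k < left_size:
        return kth_smallest(root.left, k)
    if k == left_size:
        return root.key
    return kth_smallest(root.right, k - left_size - 1)
```

As the text says, only the O(log n) ancestors of an inserted node need their counts updated, and a rotation touches only the counts of the rotated nodes, so the augmentation is cheap to maintain.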
Another simple strategy is based on some of the same concepts as the hash table. When we know the range of values beforehand, we can divide that range into h subintervals and assign these to h buckets. When we insert an element, we add it to the bucket corresponding to the interval it falls in. To find the minimum or maximum element, we scan from the beginning or end for the first nonempty bucket and find the minimum or maximum element in that bucket. In general, to find the kth element, we maintain a count of the number of elements in each bucket, scan the buckets from left to right adding up counts until we find the bucket containing the desired element, and then use the expected linear-time algorithm to find the correct element in that bucket. If we choose h of size roughly sqrt(n), and the input is close to uniformly distributed, this scheme can perform selections in expected O(sqrt(n)) time. Unfortunately, this strategy is also sensitive to clustering of elements in a narrow interval, which may result in buckets with large numbers of elements (clustering can be mitigated through a good hash function, but finding the element with the kth largest hash value isn't very useful). Additionally, like hash tables, this structure requires table resizings to maintain efficiency as elements are added and n becomes much larger than h². A useful special case is finding an order statistic or extremum in a finite range of data: using the above table with a bucket interval of 1 and maintaining counts in each bucket is much superior to other methods. Such hash tables are like the frequency tables used to classify data in descriptive statistics. Another fundamental selection problem is that of selecting the k smallest or k largest elements, which is particularly useful where we want to present just the "top k" of an unsorted list, such as the top 100 corporations by gross sales. This is also commonly called partial sorting. In The Art of Computer Programming, Donald E.
Knuth discussed a number of lower bounds for the number of comparisons required to locate the t smallest entries of an unorganized list of n items (using only comparisons). There is a trivial lower bound of n − 1 for the minimum or maximum entry. To see this, consider a tournament where each game represents one comparison. Since every player except the winner of the tournament must lose a game before we know the winner, we have a lower bound of n − 1 comparisons. The story becomes more complex for other indexes. We define $W_{t}(n)$ as the minimum number of comparisons required to find the t smallest values. Knuth references a paper published by S. S. Kislitsyn, which gives an upper bound on this value; the bound is achievable for t = 2, but better, more complex bounds are known for larger t. Very few languages have built-in support for general selection, although many provide facilities for finding the smallest or largest element of a list. A notable exception is C++, which provides a templated nth_element method with a guarantee of expected linear time. The expected-linear-time requirement suggests, but does not mandate, an implementation based on Hoare's algorithm. (See section 25.3.2 of ISO/IEC 14882:2003(E) and 14882:1998(E); see also the SGI STL description of nth_element.) C++ also provides the partial_sort algorithm, which solves the problem of selecting the smallest k elements (sorted), with a time complexity of O(n log k). No algorithm is provided for selecting the greatest k elements, since this should be done by inverting the ordering predicate. For Perl, the module Sort::Key::Top, available from CPAN, provides a set of functions to select the top n elements from a list using several orderings and custom key extraction procedures. Furthermore, the Statistics::CaseResampling module provides a function to calculate quantiles using quickselect.
Python's standard library (since 2.4) includes heapq.nsmallest() and heapq.nlargest(), returning sorted lists, the former in O(n + k log n) time, the latter in O(n log k) time. Because language support for sorting is more ubiquitous, the simplistic approach of sorting followed by indexing is preferred in many environments despite its disadvantage in speed. Indeed, for lazy languages, this simplistic approach can even achieve the best complexity possible for the k smallest/greatest sorted (with maximum/minimum as a special case) if the sort is lazy enough. In certain selection problems, selection must be online: an element can only be selected from a sequential input at the instant of observation, and each selection (or refusal) is irrevocable. The problem is to select, under these constraints, a specific element of the input sequence (for example the largest or the smallest value) with largest probability. This problem can be tackled by the Odds algorithm, designed by F. Thomas Bruss, who coined the name; it is also known as the Bruss algorithm or Bruss strategy. This algorithm yields an optimal strategy under an independence condition, and it is itself optimal as an algorithm, with the number of computations being linear in the length of the input.
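For example, the heapq functions can be used directly, including with a key function for custom orderings:

```python
import heapq

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]

# The k smallest / largest elements, each returned as a sorted list.
print(heapq.nsmallest(3, data))  # [1, 2, 3]
print(heapq.nlargest(3, data))   # [9, 8, 7]

# A key function supports "top k by some attribute" selections.
words = ["pear", "fig", "banana"]
print(heapq.nsmallest(2, words, key=len))  # ['fig', 'pear']
```

Both functions leave the input list unmodified, unlike an in-place partial sort.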
Software

Microsoft Excel is a spreadsheet application developed by Microsoft for Microsoft Windows and Mac OS. It features calculation, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications. It has been a very widely used spreadsheet for these platforms, especially since version 5 in 1993, and it has replaced Lotus 1-2-3 as the industry standard for spreadsheets. Excel forms part of Microsoft Office.
