Interference (wave propagation): Difference between revisions

'''IEEE 754-1985''' was an industry [[technical standard|standard]] for representing [[floating-point]] numbers in [[computers]], officially adopted in 1985 and superseded in 2008 by [[IEEE 754-2008]]. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point [[library (computing)|libraries]], and in hardware, in the [[instruction (computer science)|instructions]] of many [[CPU]]s and [[floating-point unit|FPUs]]. The first [[integrated circuit]] to implement the draft of what was to become IEEE 754-1985 was the [[Intel 8087]].
 
IEEE 754-1985 represents numbers in [[binary numeral system|binary]], providing definitions for four levels of precision, of which the two most commonly used are:
 
{| class="wikitable"
|-
!level
!width
!range
!precision*
|-
|single precision
|32 bits
|±1.18{{e|-38}} to ±3.4{{e|38}}
|approx. 7 decimal digits
|-
|double precision
|64 bits
|±2.23{{e|-308}} to ±1.80{{e|308}}
|approx. 15 decimal digits
|}
* Precision: The number of decimal digits of precision is number_of_mantissa_bits &times; Log<sub>10</sub>(2), giving approximately 7.22 decimal digits for single precision (24 mantissa bits, including the implicit leading bit) and approximately 15.95 for double precision (53 mantissa bits).
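The calculation behind these figures can be sketched in Python (an illustrative computation, not something the standard itself prescribes):

```python
import math

# Decimal digits of precision carried by the mantissa:
# number_of_mantissa_bits * log10(2)
single_digits = 24 * math.log10(2)  # 24 mantissa bits, incl. implicit leading 1
double_digits = 53 * math.log10(2)  # 53 mantissa bits, incl. implicit leading 1
print(round(single_digits, 2))  # ~7.22
print(round(double_digits, 2))  # ~15.95
```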
 
The standard also defines representations for positive and negative [[infinity]], a "[[negative zero]]", five exceptions to handle invalid results like [[division by zero]], special values called [[NaN]]s for representing those exceptions, [[denormal numbers]] to represent numbers smaller than shown above, and four [[rounding]] modes.
 
== Representation of numbers ==
 
[[Image:IEEE 754 Single Floating Point Format.svg|right|frame|The number 0.15625 represented as a single-precision IEEE 754-1985 floating-point number. See text for explanation.]]
[[Image:IEEE 754 Double Floating Point Format.svg|right|frame|The three fields in a 64-bit IEEE 754 float]]
 
Floating-point numbers in IEEE&nbsp;754 format consist of three fields: a [[sign bit]], a [[exponent bias|biased exponent]], and a fraction. The following example illustrates the meaning of each.
 
The decimal number 0.15625<sub>10</sub> represented in binary is 0.00101<sub>2</sub> (that is, 1/8 + 1/32). (Subscripts indicate the number [[radix|base]].) Analogous to [[scientific notation]], where numbers are written to have a single non-zero digit to the left of the decimal point, we rewrite this number so it has a single 1 bit to the left of the "binary point". We simply multiply by the appropriate power of 2 to compensate for shifting the bits left by three positions:
 
: <math>0.00101_2 = 1.01_2 \times 2^{-3}</math>
 
Now we can read off the fraction and the exponent: the fraction is .01<sub>2</sub> and the exponent is −3.
 
As illustrated in the pictures, the three fields in the IEEE&nbsp;754 representation of this number are:
 
: ''sign'' = 0, because the number is positive. (1 indicates negative.)
: ''biased exponent'' = −3 + the "bias". In single precision, the bias is 127, so in this example the biased exponent is 124; in double precision, the bias is 1023, so the biased exponent in this example is 1020.
: ''fraction'' = .01000…<sub>2</sub>.
 
IEEE&nbsp;754 adds a [[offset binary|bias]] to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed [[2's-complement]] integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out "less than" the greater following the same ordering as for [[sign and magnitude]] integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison to see which of two numbers is greater would not be as convenient.
 
The leading 1 bit is omitted from the stored fraction: since every normalized number begins with a leading 1, that bit is implicit and does not need to be stored, which gives an extra bit of precision for "free."
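The decomposition above can be verified in Python, whose <code>struct</code> module exposes the raw IEEE 754 bit pattern of a single-precision value (an illustrative sketch, not part of the standard):

```python
import struct

# Pack 0.15625 as a big-endian IEEE 754 single and pull out the three fields.
bits = struct.unpack('>I', struct.pack('>f', 0.15625))[0]
sign     = bits >> 31            # 1 sign bit
biased_e = (bits >> 23) & 0xFF   # 8 exponent bits
fraction = bits & 0x7FFFFF       # 23 fraction bits (implicit leading 1 not stored)

print(sign)            # 0
print(biased_e)        # 124  (= -3 + bias of 127)
print(hex(fraction))   # 0x200000  (binary .0100...0)

# Reconstruct the value: (1 + fraction/2**23) * 2**(biased_e - 127)
value = (1 + fraction / 2**23) * 2.0**(biased_e - 127)
print(value)           # 0.15625
```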
 
=== Zero ===
 
The number zero is represented specially:
 
: ''sign'' = 0 for [[signed zero|positive zero]], 1 for [[signed zero|negative zero]].
: ''biased exponent'' = 0.
: ''fraction'' = 0.
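Both zeros can be inspected the same way (Python sketch; <code>struct</code> reveals the single-precision bit patterns):

```python
import struct

pos_zero = struct.unpack('>I', struct.pack('>f', 0.0))[0]
neg_zero = struct.unpack('>I', struct.pack('>f', -0.0))[0]
print(hex(pos_zero))   # 0x0        sign=0, exponent=0, fraction=0
print(hex(neg_zero))   # 0x80000000 sign=1, exponent=0, fraction=0
print(0.0 == -0.0)     # True: the two zeros compare equal
```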
 
=== Denormalized numbers ===
 
The number representations described above are called ''normalized,'' meaning that the implicit leading binary digit is a 1. To reduce the loss of precision when an [[Arithmetic underflow|underflow]] occurs, IEEE&nbsp;754 includes the ability to represent fractions smaller than are possible in the normalized representation, by making the implicit leading digit a 0. Such numbers are called [[denormal numbers|denormal]]. They don't include as many [[significant digits]] as a normalized number, but they enable a gradual loss of precision when the result of an [[Floating-point arithmetic#Floating-point arithmetic operations|arithmetic operation]] is not exactly zero but is too close to zero to be represented by a normalized number.
 
A denormal number is represented with a biased exponent of all 0 bits, which represents an exponent of −126 in single precision (not −127), or −1022 in double precision (not −1023).<ref>{{cite book|last=Hennessy|title=Computer Organization and Design|publisher=Morgan Kaufmann|page=270}}</ref> In contrast, the smallest biased exponent representing a normal number is 1 (see [[#Examples|examples]] below).
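These denormal bit patterns can be checked directly (Python sketch; the hexadecimal constants are single-precision encodings):

```python
import struct

# Smallest positive single-precision denormal: all-zero exponent, fraction = 1.
smallest = struct.unpack('>f', struct.pack('>I', 0x00000001))[0]
print(smallest == 2.0**-149)   # True: 2**-23 * 2**-126

# Largest denormal vs. smallest normal: gradual underflow leaves no gap.
largest_denormal = struct.unpack('>f', struct.pack('>I', 0x007FFFFF))[0]
smallest_normal  = struct.unpack('>f', struct.pack('>I', 0x00800000))[0]
print(smallest_normal == 2.0**-126)                    # True
print(largest_denormal == (1 - 2.0**-23) * 2.0**-126)  # True
```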
 
== Representation of non-numbers ==
 
The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation.
 
=== Positive and negative infinity ===
 
[[Extended real line|Positive and negative infinity]] are represented thus:
 
: ''sign'' = 0 for positive infinity, 1 for negative infinity.
: ''biased exponent'' = all 1 bits.
: ''fraction'' = all 0 bits.
 
=== NaN ===
 
Some operations of [[Floating-point arithmetic#Floating-point arithmetic operations|floating-point arithmetic]] are invalid, such as [[division by zero|dividing by zero]] or taking the square root of a negative number. The act of reaching an invalid result is called a floating-point ''exception.'' An exceptional result is represented by a special code called a NaN, for "[[Not a Number]]". All NaNs in IEEE&nbsp;754-1985 have this format:
 
: ''sign'' = either 0 or 1.
: ''biased exponent'' = all 1 bits.
: ''fraction'' = anything except all 0 bits (since all 0 bits represents infinity).
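A quick check of both all-ones-exponent encodings (Python sketch; Python floats are doubles, but <code>struct</code> can round-trip single-precision patterns):

```python
import math
import struct

# Infinity: exponent all 1 bits, fraction all 0 bits.
inf_bits = struct.unpack('>I', struct.pack('>f', math.inf))[0]
print(hex(inf_bits))    # 0x7f800000

# A NaN: exponent all 1 bits, fraction non-zero.
nan = struct.unpack('>f', struct.pack('>I', 0x7FC00000))[0]
print(math.isnan(nan))  # True
print(nan == nan)       # False: a NaN compares unequal even to itself
```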
 
== Range and precision ==
 
Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only of the mantissa. The gap is defined as the difference between two successive numbers.<ref>Computer Arithmetic; Hossam A. H. Fahmy, Shlomo Waser, and Michael J. Flynn; http://arith.stanford.edu/~hfahmy/webpages/arith_class/arith.pdf</ref>
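Python 3.9's <code>math.ulp</code> reports exactly this gap for the double format (illustrative sketch):

```python
import math

# The gap between successive doubles grows with magnitude, while the
# relative precision (the mantissa's resolution) stays fixed at 2**-52.
print(math.ulp(1.0) == 2.0**-52)   # True: gap just above 1.0
print(math.ulp(2.0) == 2.0**-51)   # True: gap doubles with the exponent
print(math.ulp(2.0**52) == 1.0)    # True: integers this large are 1 apart
```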
 
=== Single precision ===
 
Single-precision numbers occupy 32 bits. In single precision:
* The positive and negative numbers closest to zero (represented by the denormalized value with all 0s in the exponent field and the binary value 1 in the fraction field) are
*: ±2<sup>&minus;149</sup> ≈ ±1.4012985{{e|−45}}
* The positive and negative normalized numbers closest to zero (represented with the binary value 1 in the exponent field and 0 in the fraction field) are
*: ±2<sup>&minus;126</sup> ≈ ±1.175494351{{e|−38}}
* The finite positive and finite negative numbers furthest from zero (represented by the value with 254 in the exponent field and all 1s in the fraction field) are
*: ±(1−2<sup>−24</sup>)&times;2<sup>128</sup><ref name="Kahan">{{Cite journal
  | author = William Kahan |author-link=William Kahan
  | title = Lecture Notes on the Status of IEEE 754
  | version =  October 1, 1997 3:36 am
  | publisher = Elect. Eng. & Computer Science University of California
  | url = http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
  | format = PDF
  | accessdate = 2007-04-12 }}</ref> ≈ ±3.402823466{{e|38}}
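These three limits can be reproduced by interpreting the corresponding bit patterns (Python sketch; <code>f32</code> is an illustrative helper, not a standard function):

```python
import struct

def f32(bits):
    """Interpret a 32-bit pattern as an IEEE 754 single-precision value."""
    return struct.unpack('>f', struct.pack('>I', bits))[0]

print(f32(0x00000001) == 2.0**-149)                  # smallest positive denormal
print(f32(0x00800000) == 2.0**-126)                  # smallest positive normal
print(f32(0x7F7FFFFF) == (1 - 2.0**-24) * 2.0**128)  # largest finite single
```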
 
Some example range and gap values for given exponents in single precision:
 
{| class="wikitable" style="text-align:right;"
|- style="text-align:center;"
! Actual Exponent (unbiased)
! Exp (biased)
! Minimum
! Maximum
! Gap
|-
| 0
| 127
| 1
| 1.999999880791
| 1.19209289551e−7
|-
| 1
| 128
| 2
| 3.99999976158
| 2.38418579102e−7
|-
| 2
| 129
| 4
| 7.99999952316
| 4.76837158203e−7
|-
| 10
| 137
| 1024
| 2047.99987793
| 1.220703125e−4
|-
| 11
| 138
| 2048
| 4095.99975586
| 2.44140625e−4
|-
| 23
| 150
| 8388608
| 16777215
| 1
|-
| 24
| 151
| 16777216
| 33554430
| 2
|-
| 127
| 254
| 1.7014e38
| 3.4028e38
| 2.02824096037e31
|}
 
As an example, 16,777,217 cannot be encoded as a 32-bit float, as it will be rounded to 16,777,216; this illustrates why floating-point arithmetic is unsuitable for exact-quantity applications such as accounting software. However, all integers within the representable range that are a power of 2 can be stored in a 32-bit float without rounding.
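The rounding of 16,777,217 can be demonstrated by round-tripping through single precision (Python sketch; <code>to_f32</code> is an illustrative helper):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest single-precision value."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

# 2**24 + 1 needs 25 significand bits, one more than single precision provides.
print(to_f32(16777216.0))  # 16777216.0 - exactly representable (2**24)
print(to_f32(16777217.0))  # 16777216.0 - rounded away: odd, needs 25 bits
print(to_f32(16777218.0))  # 16777218.0 - even, fits in 24 bits again
```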
 
=== Double precision ===
 
Double-precision numbers occupy 64 bits. In double precision:
* The positive and negative numbers closest to zero (represented by the denormalized value with all 0s in the Exp field and the binary value 1 in the Fraction field) are
*: ±2<sup>&minus;1074</sup> ≈ ±5{{e|−324}}
* The positive and negative normalized numbers closest to zero (represented with the binary value 1 in the Exp field and 0 in the fraction field) are
*: ±2<sup>&minus;1022</sup> ≈ ±2.2250738585072014{{e|−308}}
* The finite positive and finite negative numbers furthest from zero (represented by the value with 2046 in the Exp field and all 1s in the fraction field) are
*: ±((1−(1/2)<sup>53</sup>)2<sup>1024</sup>)<ref name="Kahan" />  ≈ ±1.7976931348623157{{e|308}}
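Since Python's <code>float</code> is the IEEE 754 double format on common platforms, these limits can be read straight from <code>sys.float_info</code> (illustrative sketch):

```python
import sys

# Largest finite double: (2 - 2**-52) * 2**1023, equal to (1 - 2**-53) * 2**1024
# (the second form cannot be computed directly, since 2.0**1024 overflows).
print(sys.float_info.max == (2 - 2.0**-52) * 2.0**1023)  # True
print(sys.float_info.min == 2.0**-1022)   # smallest positive normal
print(5e-324 == 2.0**-1074)               # smallest positive denormal
```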
 
Some example range and gap values for given exponents in double precision:
 
{| class="wikitable" style="text-align:right;"
|- style="text-align:center;"
! Actual Exponent (unbiased)
! Exp (biased)
! Minimum
! Maximum
! Gap
|-
| 0
| 1023
| 1
| 1.9999999999999997
| 2.2204460492503130808472633e−16
|-
| 1
| 1024
| 2
| 3.9999999999999995
| 8.8817841970012523233890533447266e−16
|-
| 2
| 1025
| 4
| 7.9999999999999990
| 3.5527136788005009293556213378906e−15
|-
| 10
| 1033
| 1024
| 2047.9999999999997
| 2.27373675443232059478759765625e−13
|-
| 11
| 1034
| 2048
| 4095.9999999999995
| 4.5474735088646411895751953125e−13
|-
| 52
| 1075
| 4503599627370496
| 9007199254740991
| 1
|-
| 53
| 1076
| 9007199254740992
| 18014398509481982
| 2
|-
| 1023
| 2046
| 8.9884656743115800e307
| 1.7976931348623157e308
| 1.9958403095347198116563727130368e292
|}
 
=== Extended formats ===
 
The standard also recommends extended formats, used to perform internal computations at a higher precision than that required for the final result, in order to minimise round-off errors; the standard specifies only minimum precision and exponent requirements for such formats. The [[x87]] [[extended precision|80-bit extended format]] is the most commonly implemented extended format that meets these requirements.
 
== Examples ==
 
Here are some examples of single-precision IEEE 754 representations:
 
{| class="wikitable"
|-
! Type
! Sign
! Actual Exponent
! Exp (biased)
! Exponent field
! Significand (fraction field)
! Value
|-
| Zero
| style="text-align:center;"| 0
| −127
| style="text-align:right;"| 0
| 0000 0000
| 000 0000 0000 0000 0000 0000
| 0.0
|-
| [[Negative zero]]
| style="text-align:center;"| 1
| −127
| style="text-align:right;"| 0
| 0000 0000
| 000 0000 0000 0000 0000 0000
| &minus;0.0
|-
| One
| style="text-align:center;"| 0
| 0
| style="text-align:right;"| 127
| 0111 1111
| 000 0000 0000 0000 0000 0000
| 1.0
|-
| Minus One
| style="text-align:center;"| 1
| 0
| style="text-align:right;"| 127
| 0111 1111
| 000 0000 0000 0000 0000 0000
| &minus;1.0
|-
| Smallest [[Denormal number|denormalized number]]
| style="text-align:center;"| *
| −127
| style="text-align:right;"| 0
| 0000 0000
| 000 0000 0000 0000 0000 0001
| ±2<sup>&minus;23</sup> &times; 2<sup>&minus;126</sup> = ±2<sup>&minus;149</sup> ≈ ±1.4{{e|-45}}
|-
| "Middle" denormalized number
| style="text-align:center;"| *
| −127
| style="text-align:right;"| 0
| 0000 0000
| 100 0000 0000 0000 0000 0000
| ±2<sup>&minus;1</sup> &times; 2<sup>&minus;126</sup> = ±2<sup>&minus;127</sup> ≈ ±5.88{{e|-39}}
|-
| Largest denormalized number
| style="text-align:center;"| *
| −127
| style="text-align:right;"| 0
| 0000 0000
| 111 1111 1111 1111 1111 1111
| ±(1−2<sup>−23</sup>) &times; 2<sup>−126</sup> ≈ ±1.18{{e|-38}}
|-
| Smallest normalized number
| style="text-align:center;"| *
| −126
| style="text-align:right;"| 1
| 0000 0001
| 000 0000 0000 0000 0000 0000
| ±2<sup>−126</sup> ≈ ±1.18{{e|-38}}
|-
| Largest normalized number
| style="text-align:center;"| *
| 127
| style="text-align:right;"| 254
| 1111 1110
| 111 1111 1111 1111 1111 1111
| ±(2&minus;2<sup>−23</sup>) &times; 2<sup>127</sup> ≈ ±3.4{{e|38}}
|-
| Positive infinity
| style="text-align:center;"| 0
| 128
| style="text-align:right;"| 255
| 1111 1111
| 000 0000 0000 0000 0000 0000
| +∞
|-
| Negative infinity
| style="text-align:center;"| 1
| 128
| style="text-align:right;"| 255
| 1111 1111
| 000 0000 0000 0000 0000 0000
| −∞
|-
| [[Not a number]]
| style="text-align:center;"| *
| 128
| style="text-align:right;"| 255
| 1111 1111
| non-zero
| NaN
|-
| colspan="7" | * Sign bit can be either 0 or 1.
|}
 
==Comparing floating-point numbers==
Every possible bit combination is either a NaN or a number with a unique value in the [[affinely extended real number system]] with its associated order, except for the two bit combinations negative zero and positive zero, which sometimes require special attention (see below). The binary representation has the special property that, excluding NaNs, any two numbers can be compared like [[sign and magnitude]] integers (although with modern computer processors this is no longer directly applicable): if the sign bits differ, the negative number precedes the positive number (except that negative zero and positive zero should be considered equal); otherwise, the relative order is the same as the [[lexicographical order]] of the bit patterns, inverted for two negative numbers. [[Endianness]] issues apply.
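The sign-and-magnitude comparison property can be sketched by mapping each bit pattern to an integer whose natural order matches the floating-point order (Python sketch; <code>f32_key</code> is an illustrative helper, and a fixed endianness is assumed via <code>struct</code>):

```python
import struct

def f32_key(x):
    """Map a single-precision value to an integer whose natural order
    matches the floating-point order (the sign-magnitude trick)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    # Negative floats: flip all bits, so larger magnitudes sort lower.
    # Positive floats: set the top bit, so they sort above all negatives.
    return bits ^ 0xFFFFFFFF if bits & 0x80000000 else bits | 0x80000000

samples = [-2.5, -0.0, 0.0, 1.5e-45, 1.0, 3.0e38]
print(sorted(samples) == sorted(samples, key=f32_key))  # True
```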
 
Floating-point arithmetic is subject to rounding that may affect the outcome of comparisons on the results of the computations.
 
Although negative zero and positive zero are generally considered equal for comparison purposes, some [[programming language]] [[relational operator]]s and similar constructs might or do treat them as distinct. According to the [[Java (programming language)|Java]] Language Specification,<ref>[http://java.sun.com/docs/books/jls/ The Java Language Specification]</ref> comparison and equality operators treat them as equal, but Math.min() and Math.max() distinguish them (officially starting with Java version 1.1 but actually with 1.1.1), as do the comparison methods equals(), compareTo() and even compare() of classes Float and Double.
 
==Rounding floating-point numbers==
The IEEE standard has four different rounding modes; the first is the default; the others are called ''[[directed rounding]]s''.
 
* '''Round to Nearest''' &ndash; rounds to the nearest value; if the number falls midway it is rounded to the nearest value with an even (zero) least significant bit, which occurs 50% of the time (in [[IEEE 754-2008]] this mode is called ''roundTiesToEven'' to distinguish it from another round-to-nearest mode)
* '''Round toward 0''' &ndash; directed rounding towards zero
* '''Round toward +∞''' &ndash; directed rounding towards positive infinity
* '''Round toward −∞''' &ndash; directed rounding towards negative infinity.
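Round to Nearest with ties to even is visible where the double-precision gap reaches 2, just above 2<sup>53</sup> (Python sketch; Python's integer-to-float conversion uses this default mode):

```python
# 2**53 is the first double whose gap to its neighbour is 2, so odd
# integers just above it fall exactly midway between two doubles.
print(float(2**53))      # 9007199254740992.0
print(float(2**53 + 1))  # 9007199254740992.0 - tie rounds down to even
print(float(2**53 + 3))  # 9007199254740996.0 - tie rounds up to even
```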
 
==Extending the real numbers==
The IEEE standard employs (and extends) the [[affinely extended real number system]], with separate positive and negative infinities. During drafting, there was a proposal for the standard to incorporate the [[projectively extended real number system]], with a single unsigned infinity, by providing programmers with a mode selection option. In the interest of reducing the complexity of the final standard, the projective mode was dropped, however. The [[Intel 8087]] and [[Intel 80287]] [[floating point]] [[co-processor]]s both support this projective mode.<ref>{{cite journal|journal=ACM Transactions on Programming Languages and Systems|volume=18|issue=2|date=March 1996|format=PDF|url=http://www.jhauser.us/publications/1996_Hauser_FloatingPointExceptions.html|author=John R. Hauser|title=Handling Floating-Point Exceptions in Numeric Programs}}</ref><ref>{{cite journal|title=IEEE Task P754: A proposed standard for binary floating-point arithmetic|date=March 1981|journal=IEEE Computer|volume=14|issue=3|pages=51–62|author=David Stevenson}}</ref><ref>{{cite journal|author= William Kahan and John Palmer|year=1979|title=On a proposed floating-point standard|journal=SIGNUM Newsletter|volume=14|issue=Special|pages=13–21|doi= 10.1145/1057520.1057522}}</ref>
 
==Functions and predicates==
 
===Standard operations===
The following functions must be provided:
*[[arithmetic operations|Add, subtract, multiply, divide]]
*[[Square root]]
*Floating-point remainder. Unlike a normal [[modulo operation]], it can be negative for two positive numbers. It returns the exact value of {{math|x−(round(x/y)·y)}}.
*[[rounding to integer|Round to nearest integer]]. For undirected rounding when halfway between two integers the even integer is chosen.
*Comparison operations. Besides the more obvious results, IEEE&nbsp;754 defines that −∞&nbsp;=&nbsp;−∞, +∞&nbsp;=&nbsp;+∞ and <var>x</var>&nbsp;≠&nbsp;<code>NaN</code> for any <var>x</var> (including <code>NaN</code>).
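Python's <code>math.remainder</code> implements the IEEE remainder, and the comparison rules above can be checked directly (illustrative sketch):

```python
import math

# IEEE 754 remainder: x - round(x/y)*y with round-to-nearest, so the
# result can be negative even for two positive operands.
print(math.remainder(5.0, 3.0))   # -1.0  (round(5/3) = 2, so 5 - 2*3 = -1)
print(5.0 % 3.0)                  # 2.0   (ordinary modulo stays non-negative)

# Defined comparisons with infinities and NaN:
print(math.inf == math.inf)       # True
print(math.nan == math.nan)       # False: x != NaN for every x
```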
 
===Recommended functions and predicates===
* Under some C compilers, <code>copysign(x,y)</code> returns x with the sign of y, so <code>abs(x)</code> equals <code>copysign(x,1.0)</code>. This is one of the few operations which operates on a NaN in a way resembling arithmetic. The function <code>copysign</code> is new in the C99 standard.
* &minus;x returns x with the sign reversed. This is different from 0&minus;x in some cases, notably when x is 0. So &minus;(0) is &minus;0, but the sign of 0&minus;0 depends on the rounding mode.
* <code>scalb(y, N)</code> returns y &times; 2<sup>N</sup> for integral values N
* <code>logb(x)</code> returns the unbiased exponent of x
* <code>finite(x)</code> a [[Predicate (mathematics)|predicate]] for "x is a finite value", equivalent to &minus;Inf < x < Inf
* <code>isnan(x)</code> a predicate for "x is a NaN", equivalent to "x ≠ x"
* <code>x <> y</code> which turns out to have different exception behavior than NOT(x = y).
* <code>unordered(x, y)</code> is true when "x is unordered with y", i.e., either x or y is a NaN.
* <code>class(x)</code> returns the class of x (such as NaN, infinity, normal, denormal, or zero)
* <code>nextafter(x,y)</code> returns the next representable value from x in the direction towards y
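Several of the recommended functions have direct counterparts in Python's <code>math</code> module (<code>nextafter</code> requires Python 3.9; illustrative sketch):

```python
import math

print(math.copysign(3.0, -0.0))    # -3.0: copies the sign bit, even of -0.0
print(math.isnan(float('nan')))    # True
print(math.isfinite(math.inf))     # False
# nextafter: the neighbouring representable double in the given direction.
print(math.nextafter(1.0, 2.0) == 1.0 + 2.0**-52)  # True
```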
 
==History==
In 1976 Intel began planning to produce a floating point coprocessor. Dr John Palmer, the manager of the effort, persuaded them that they should try to develop a standard for all their floating point operations. [[William Kahan]] was hired as a consultant; he had helped improve the accuracy of Hewlett Packard's calculators.  Kahan initially recommended that the floating point base be decimal<ref>W. Kahan 2003, pers. comm. to [[Mike Cowlishaw]] and others after an IEEE 754 meeting</ref> but the hardware design of the coprocessor was too far advanced to make that change.  
 
The work within Intel worried other vendors, who set up a standardization effort to ensure a 'level playing field'. Kahan attended the second IEEE 754 standards working group meeting, held in November 1977. Here, he received permission from Intel to put forward a draft proposal based on the standard arithmetic part of their design for a coprocessor. The arguments over gradual underflow lasted until 1981 when an expert commissioned by DEC to assess it sided against the dissenters.
 
Even before it was approved, the draft standard had been implemented by a number of manufacturers.<ref>{{cite web|url=http://www.eecs.berkeley.edu/~wkahan/ieee754status/754story.html|title=An Interview with the Old Man of Floating-Point| author=Charles Severance |author-link=Charles Severance |date=20 February 1998}}</ref><ref>{{cite web|publisher=Connexions |url=http://cnx.org/content/m32770/latest/ |title=History of IEEE Floating-Point Format|author=Charles Severance |author-link=Charles Severance}}</ref> The [[Intel 8087]], which was announced in 1980, was the first chip to implement the draft standard.
 
==See also==
* [[IEEE 754-2008]]
* [[&minus;0 (number)|&minus;0]] (negative zero)
* [[Intel 8087]]
* [[minifloat]] for simple examples of properties of IEEE 754 floating point numbers
* [[Fixed-point arithmetic]]
 
==References==
{{reflist}}
 
== Further reading==
*{{cite journal
|    author = Charles Severance |author-link=Charles Severance
|      title = IEEE 754: An Interview with William Kahan
|    journal = [[IEEE Computer]]
|date=March 1998
|    volume = 31
|      issue = 3
|      pages = 114–115
|        doi = 10.1109/MC.1998.660194
|        url = http://www.freecollab.com/dr-chuck/papers/columns/r3114.pdf
| accessdate = 2008-04-28
|    format = PDF
|      quote = 
}}
* {{cite journal
|    author = David Goldberg
|      title = What Every Computer Scientist Should Know About Floating-Point Arithmetic
|    journal = [[ACM Computing Surveys]]
|date=March 1991
|    volume = 23
|      issue = 1
|      pages = 5–48
|        doi = 10.1145/103162.103163
|        url = http://www.validlab.com/goldberg/paper.pdf
| accessdate = 2008-04-28
|    format = PDF
|      quote = 
}}
* {{cite journal
|    author = Chris Hecker
|      title = Let's Get To The (Floating) Point
|    journal = Game Developer Magazine
|date=February 1996
|      pages = 19–24
|      issn = 1073-922X
|        url = http://www.d6.com/users/checker/pdfs/gdmfp.pdf
|format=PDF| accessdate =
|      quote = 
}}
* {{cite journal
|    author = David Monniaux
|      title = The pitfalls of verifying floating-point computations
|    journal = [[ACM Transactions on Programming Languages and Systems]]
|date=May 2008
|      pages = article #12
|    volume = 30
|      issue = 3
|        doi = 10.1145/1353445.1353446
|      issn = 0164-0925
|        url = http://hal.archives-ouvertes.fr/hal-00128124/en/}}: A compendium of non-intuitive behaviours of floating-point on popular architectures, with implications for program verification and testing.
 
==External links==
* [http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm Comparing floats]
* [http://www.coprocessor.info/ Coprocessor.info: x87 FPU pictures, development and manufacturer information]
* [http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html IEEE 754 references]
* [http://speleotrove.com/decimal/854mins.html IEEE 854-1987] &mdash; History and minutes
* [http://babbage.cs.qc.edu/IEEE-754/ Online IEEE 754 Calculators]
* [http://www.binaryconvert.com/convert_float.html IEEE754 (Single and Double precision) Online Converter]
 
{{IEEE standards}}
 
{{DEFAULTSORT:Ieee 754-1985}}
[[Category:Computer arithmetic]]
[[Category:IEEE standards]]

Latest revision as of 10:11, 14 December 2014
