Computer System Architecture by M. Morris Mano
They are used generally with arithmetic, logic, and other data-processing operations. The contents of a register can be shifted to the left or to the right. During a shift-right operation the serial input transfers a bit into the leftmost position; during a shift-left operation the serial input transfers a bit into the rightmost position. There are three types of shifts: logical, circular, and arithmetic. Logical shift. A logical shift operation transfers 0 through the serial input.

We use the symbols shl and shr for the logical shift-left and shift-right microoperations. Circular shift. The circular shift is also known as the rotate operation. It circulates the bits of the register around the two ends with no loss of information. This is accomplished by connecting the serial output of the shift register to its serial input. We use the symbols cil and cir for circular shift left and circular shift right.

Arithmetic Shift. An arithmetic shift microoperation shifts a signed binary number to the left or right. The effect of an arithmetic shift left is to multiply the binary number by 2; similarly, an arithmetic shift right divides the number by 2. Because the sign of the number must remain the same when it is multiplied or divided by 2, an arithmetic shift right must leave the sign bit unchanged.

The leftmost bit in a register holds the sign bit, and the remaining bits hold the number. The sign bit is 0 for positive and 1 for negative. The following figure shows a typical register of n bits: Rn-2 is the most significant bit of the number and R0 is the least significant bit. The arithmetic shift right leaves the sign bit unchanged and shifts the number (including the sign bit) to the right.

Thus Rn-1 remains the same, Rn-2 receives the bit from Rn-1, and so on for the other bits in the register. Data is manipulated to produce the results necessary to solve computational problems.
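The shift microoperations described above can be sketched in a few lines of Python, modeling an n-bit register as a masked integer. This is an illustrative sketch, not the book's hardware; the register width N and the function names are our own.

```python
# Illustrative sketch of the shift microoperations on an N-bit register
# modeled as a Python integer (names and width are our own choices).

N = 8
MASK = (1 << N) - 1

def shl(r, serial_in=0):
    """Logical shift left: the serial input (0 for a logical shift) enters bit 0."""
    return ((r << 1) | serial_in) & MASK

def shr(r, serial_in=0):
    """Logical shift right: the serial input enters the leftmost bit position."""
    return (r >> 1) | (serial_in << (N - 1))

def cil(r):
    """Circular shift left: the serial output (MSB) re-enters at bit 0."""
    return shl(r, serial_in=(r >> (N - 1)) & 1)

def cir(r):
    """Circular shift right: the serial output (LSB) re-enters at the MSB."""
    return shr(r, serial_in=r & 1)

def ashr(r):
    """Arithmetic shift right: the sign bit R(n-1) is left unchanged."""
    sign = (r >> (N - 1)) & 1
    return shr(r, serial_in=sign)

print(bin(ashr(0b10000100)))   # 0b11000010: -124 / 2 = -62 in 8-bit 2's complement
```

Note how ashr replicates the sign bit, which is exactly what preserves the sign when dividing by 2.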

Addition, subtraction, multiplication, and division are the four basic arithmetic operations; other operations can be derived from these four. To execute arithmetic operations there is a separate section, called the arithmetic processing unit, in the central processing unit. Arithmetic instructions are performed generally on binary or decimal data.

Fixed-point numbers are used to represent integers or fractions, and they can be signed or unsigned. Fixed-point addition is the simplest arithmetic operation. To solve a problem we use a sequence of well-defined steps, collectively called an algorithm, and this section gives algorithms for the basic operations. In digital computers, arithmetic instructions manipulate data in order to solve computational problems.

Paper Name: Computer Organization and Architecture

These instructions perform arithmetic calculations and carry out a great deal of the activity involved in processing data in a digital computer. As already stated, with the four basic arithmetic operations of addition, subtraction, multiplication, and division, it is possible to derive other arithmetic operations and solve scientific problems by means of numerical analysis methods.

A processor has an arithmetic processor as a part of it that executes arithmetic operations. The data type is assumed to reside in processor registers during the execution of an arithmetic instruction. Negative numbers may be in signed-magnitude or signed-complement representation. Most computers use the signed-magnitude representation for the mantissa. When signed numbers are added or subtracted, there are eight different conditions to consider, depending on the signs of the numbers and the operation performed.

These conditions are listed in the first column of Table 4. The other columns in the table show the actual operation to be performed with the magnitude of the numbers.

The last column is needed to prevent a negative zero. The algorithms for addition and subtraction are derived from the table and can be stated as follows (the words in parentheses should be used for the subtraction algorithm).

Table 4.

When the signs of A and B are the same, add the two magnitudes and attach the sign of A to the result.

When the signs of A and B are not the same, compare the magnitudes and subtract the smaller number from the larger. If the two magnitudes are equal, subtract B from A and make the sign of the result positive.
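The addition and subtraction rules above can be sketched as follows. This is our own coding of the rules, with numbers held as (sign, magnitude) pairs; the function name and representation are assumptions, not the book's hardware.

```python
# Sketch of the signed-magnitude addition/subtraction rules: subtraction is
# reduced to addition by complementing the sign of B.

def sm_add(a_sign, a_mag, b_sign, b_mag, subtract=False):
    """Add (or subtract) two signed-magnitude numbers given as (sign, magnitude),
    with sign 0 for positive and 1 for negative."""
    if subtract:
        b_sign ^= 1                     # complement the sign of B
    if a_sign == b_sign:
        # Same signs: add the magnitudes and attach the sign of A.
        return a_sign, a_mag + b_mag
    # Different signs: subtract the smaller magnitude from the larger.
    if a_mag > b_mag:
        return a_sign, a_mag - b_mag
    if a_mag < b_mag:
        return b_sign, b_mag - a_mag
    # Equal magnitudes: subtract B from A and force a positive sign (+0),
    # which is what prevents a negative zero.
    return 0, 0

print(sm_add(0, 5, 1, 7))               # +5 + (-7) = (1, 2), i.e. -2
```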

This process is best illustrated with a numerical example:

    10111    (23)   Multiplicand
  x 10011    (19)   Multiplier
---------
110110101   (437)   Product

The process looks at successive bits of the multiplier, least significant bit first. If the multiplier bit is 1, the multiplicand is copied down; otherwise, zeros are copied. The numbers copied in successive lines are shifted one position to the left from the previous numbers. Finally, the numbers are added and their sum produces the product. Hardware Implementation for Signed-Magnitude Data. When multiplication is implemented in a digital computer, we change the process slightly.

Here, instead of providing registers to store and add simultaneously as many binary numbers as there are bits in the multiplier, it is convenient to provide an adder for the summation of only two binary numbers, and successively accumulate the partial products in a register.

Second, instead of shifting the multiplicand to the left, the partial product is shifted to the right, which leaves the partial product and the multiplicand in the required relative positions. Furthermore, when the corresponding bit of the multiplier is 0, there is no need to add all zeros to the partial product, since doing so does not alter its value. The hardware for multiplication consists of the equipment given in Figure 4.

The multiplier is stored in the Q register and its sign in Qs. The sequence counter SC is initially set to a number equal to the number of bits in the multiplier. After forming each partial product the counter is decremented; when its content reaches zero, the product is complete and the process stops.

The Booth algorithm handles multipliers in 2's complement representation by recoding strings of 1's. For example, the multiplication M x 14, where M is the multiplicand and 14 (binary 001110) the multiplier, may be computed as M x 2^4 - M x 2^1. That is, the product can be obtained by shifting the binary multiplicand M four times to the left and subtracting M shifted left once.
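The shift-right multiply loop described above can be sketched in Python. The register names (A, B, Q, SC) follow the text, but the coding itself is our own model, not the book's circuit.

```python
# Sketch of the signed-magnitude shift-and-add multiply loop: one adder,
# a partial product accumulated in A, and right shifts of the (A, Q) pair
# in place of left shifts of the multiplicand.

def multiply(multiplicand, multiplier, n):
    """Multiply two n-bit magnitudes; returns the double-length product."""
    A = 0                       # partial product register
    Q = multiplier              # multiplier register
    B = multiplicand            # multiplicand register
    for _ in range(n):          # SC is decremented each pass
        if Q & 1:               # test the low-order multiplier bit Qn
            A += B              # add the multiplicand to the partial product
        # shift the combined (A, Q) pair one place to the right
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A >>= 1
    return (A << n) | Q         # double-length product held in (A, Q)

print(multiply(23, 19, 5))      # 437, matching the worked example
```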

The Booth algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged, according to the following rules:

1. The multiplicand is subtracted from the partial product upon encountering the first least significant 1 in a string of 1's in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0 (provided that there was a previous 1) in a string of 0's in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the previous multiplier bit.

The algorithm applies to both positive and negative multipliers in 2's complement representation. This is because a negative multiplier ends with a string of 1's and the last operation will be a subtraction of the appropriate weight.

The hardware implementation of the Booth algorithm requires the register configuration shown in Figure 4. Qn represents the least significant bit of the multiplier in register QR, and an appended flip-flop Qn+1 holds the previous multiplier bit. The flowchart for the Booth algorithm is shown in Figure 4. If the two bits Qn and Qn+1 are 10, it means that the first 1 in a string of 1's has been encountered; this requires a subtraction of the multiplicand from the partial product in AC. If the two bits are 01, it means that the first 0 in a string of 0's has been encountered.

This requires the addition of the multiplicand to the partial product in AC. When the two bits are equal, the partial product does not change. An overflow cannot occur because the addition and subtraction of the multiplicand follow each other; hence, the two numbers that are added always have opposite signs, a condition that excludes an overflow. Next comes an arithmetic shift right (ashr) operation, which shifts AC and QR to the right and leaves the sign bit in AC unchanged. The sequence counter is decremented and the computational loop is repeated n times.

Note that the multiplier in QR is negative and that the multiplicand in BR is also negative; the 10-bit product appears in AC and QR and is positive. The multiplication of two binary numbers can also be done with one micro-operation by using a combinational circuit that forms the product bits all at once.

This is a fast way, since all it takes is the time for the signals to propagate through the gates that form the multiplication array. However, an array multiplier requires a large number of gates, and for this reason it was not economical until the development of integrated circuits.

Now we see how an array multiplier is implemented with a combinational circuit. Consider the multiplication of two 2-bit numbers as shown in Fig. The multiplicand bits are b1 and b0, the multiplier bits are a1 and a0, and the product is c3 c2 c1 c0. The first partial product is obtained by multiplying a0 by b1b0. The multiplication of two bits gives a 1 if both bits are 1; otherwise, it produces a 0.

As shown in the diagram, the first partial product is formed by means of two AND gates. The second partial product is formed by multiplying a1 by b1b0 and is shifted one position to the left. The two partial products are added with two half-adder HA circuits. Usually, there are more bits in the partial products and it will be necessary to use full-adders to produce the sum. Note that the least significant bit of the product does not have to go through an adder since it is formed by the output of the first AND gate.
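The 2-bit array just described can be written out gate by gate. This is a sketch of the combinational structure in Python, with AND gates for the partial products and two half-adders; the function names are ours.

```python
# Gate-level sketch of the 2-bit array multiplier: AND gates form the two
# partial products and two half-adders combine them.

def half_adder(x, y):
    """Return (sum, carry) for two bits."""
    return x ^ y, x & y

def array_multiply_2bit(a1, a0, b1, b0):
    """Multiply (a1 a0) by (b1 b0); returns the product bits (c3, c2, c1, c0)."""
    # First partial product: a0 ANDed with each multiplicand bit.
    p00, p01 = a0 & b0, a0 & b1
    # Second partial product, one position to the left.
    p10, p11 = a1 & b0, a1 & b1
    c0 = p00                     # the LSB needs no adder
    c1, carry = half_adder(p01, p10)
    c2, c3 = half_adder(p11, carry)
    return c3, c2, c1, c0

print(array_multiply_2bit(1, 1, 1, 1))   # 3 x 3 = 9 -> (1, 0, 0, 1)
```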

A combinational-circuit binary multiplier with more bits can be constructed in a similar fashion. A bit of the multiplier is ANDed with each bit of the multiplicand in as many levels as there are bits in the multiplier. The binary output of each level of AND gates is added in parallel with the partial product of the previous level to form a new partial product.

The last level produces the product. Let the multiplicand be represented by b3b2b1b0 and the multiplier by a2a1a0.

The logic diagram of the multiplier is shown in Figure 4. Binary division is much simpler than decimal division because the quotient digits are either 0 or 1 and there is no need to estimate how many times the divisor fits into the dividend or partial remainder. The division process is described in Figure 4.

The divisor B has five bits and the dividend A has ten. The five most significant bits of the dividend are compared with the divisor; since this 5-bit number is smaller than B, we try again with the six most significant bits. The 6-bit number is greater than B, so we place a 1 for the quotient bit in the sixth position above the dividend. The divisor is then shifted once to the right and subtracted from the dividend. The difference is known as a partial remainder, because the division could have stopped here to obtain a quotient of 1 and a remainder equal to the partial remainder.

Comparing a partial remainder with the divisor continues the process. If the partial remainder is greater than or equal to the divisor, the quotient bit is equal to 1. The divisor is then shifted right and subtracted from the partial remainder. If the partial remainder is smaller than the divisor, the quotient bit is 0 and no subtraction is needed. The divisor is shifted once to the right in any case. Obviously the result gives both a quotient and a remainder.
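The compare-shift-subtract process can be sketched with magnitudes held as plain integers. This is our own coding of the paper method, not the book's register-level hardware.

```python
# Sketch of binary long division: bring down one dividend bit at a time,
# compare the partial remainder with the divisor, and emit a quotient bit.

def divide(dividend, divisor, n):
    """Divide a 2n-bit dividend by an n-bit divisor; returns (quotient, remainder)."""
    quotient = 0
    remainder = 0
    for i in range(2 * n - 1, -1, -1):   # from the most significant bit down
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:         # quotient bit is 1: subtract divisor
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(divide(448, 17, 5))                # (26, 6), since 448 = 17 * 26 + 6
```

Note that with a double-length dividend the quotient produced here may need more than n bits; detecting that case is exactly the divide-overflow condition discussed later.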

Hardware Implementation for Signed-Magnitude Data. In a hardware implementation for signed-magnitude data in a digital computer, it is convenient to change the process slightly. Instead of shifting the divisor to the right, the dividend, or partial remainder, is shifted to the left, thus leaving the two numbers in the required relative position.

Subtraction is achieved by adding A to the 2's complement of B. The end carry gives the information about the relative magnitudes. The hardware required is identical to that of multiplication.

The example is given in Figure 4. The divisor is stored in the B register and the double-length dividend is stored in registers A and Q. The dividend is shifted to the left and the divisor is subtracted by adding its 2's complement value; the end carry keeps the information about the relative magnitude.

To restore the partial remainder in A, the value of B is added back to it. The partial remainder is shifted to the left and the process is repeated until all five quotient bits are formed.

Note that while the partial remainder is shifted left, the quotient bits are shifted also, and after five shifts the quotient is in Q and the final remainder is in A.

Before showing the algorithm in flowchart form, we have to consider the sign of the result and a possible overflow condition. The sign of the quotient is determined from the signs of the dividend and the divisor: if the two signs are the same, the sign of the quotient is plus; if they are not identical, the sign is minus. The sign of the remainder is the same as the sign of the dividend.

Figure 4. shows the flowchart for multiplication. In the beginning, the multiplicand is in B and the multiplier in Q, with their corresponding signs in Bs and Qs respectively.

The signs of the operands are compared and the corresponding sign of the product is set, since a double-length product will be stored in registers A and Q.

Registers A and E are cleared and the sequence counter SC is set to the number of bits of the multiplier. Since an operand must be stored with its sign, one bit of the word will be occupied by the sign and the magnitude will consist of n-1 bits. The low-order bit of the multiplier in Qn is then tested: if it is 1, the multiplicand in B is added to the present partial product in A; if it is 0, nothing is done.

Register EAQ is then shifted once to the right to form the new partial product. The sequence counter is decremented by 1 and its new value checked.

If it is not equal to zero, the process is repeated and a new partial product is formed.

For division, A and Q contain the dividend and B holds the divisor. The sign of the result is transferred into Qs. A constant is set into the sequence counter SC to specify the number of bits in the quotient. As in multiplication, we assume that operands are transferred to registers from a memory unit that has words of n bits. Since an operand must be stored with its sign, one bit of the word will be occupied by the sign and the magnitude will have n-1 bits.

We can check for a divide-overflow condition by subtracting the divisor B from the high-order half of the dividend stored in A. The division of the magnitudes begins by shifting the dividend in AQ to the left, with the high-order bit shifted into E.

If E = 1, then EA is greater than B; in this case, B must be subtracted from EA and a 1 inserted into Qn for the quotient bit. Since register A is missing the high-order bit of the dividend (which is in E), its value is EA - 2^n. If EA is smaller than B, we leave a 0 in Qn. We repeat this process with register A holding the partial remainder. After n-1 loops, the quotient magnitude is stored in register Q and the remainder is found in register A.

The quotient sign is in Qs and the sign of the remainder is in As. A divide-overflow can occur because the length of the registers is finite and they will not hold a number that exceeds the standard length. To see this, consider a system that has 5-bit registers. We use one register to hold the divisor and two registers to hold the dividend.

From the example of Figure 4., it can be seen that the quotient will consist of six bits if the five most significant bits of the dividend constitute a number greater than the divisor.

The quotient is to be stored in a standard 5-bit register, so the overflow bit will require one more flip-flop for storing the sixth bit. This divide-overflow condition must be avoided in normal computer operations because the entire quotient will be too long for transfer into a memory unit that has words of standard length, that is, the same as the length of registers. Provisions to ensure that this condition is detected must be included in either the hardware or the software of the computer, or in a combination of the two.

When the dividend is twice as long as the divisor, the condition for overflow can be stated as follows: a divide-overflow occurs if the high-order half of the dividend constitutes a number greater than or equal to the divisor.

Another problem associated with division is that division by zero must be avoided. The divide-overflow condition takes care of this as well, because any dividend is greater than or equal to a divisor that is equal to zero. The overflow condition is usually detected by setting a special flip-flop, which we will call the divide-overflow flip-flop and label DVF.

Floating-point data is most commonly specified in a program by a real declaration statement.

High-level programming languages must have a provision for handling floating-point arithmetic operations. The operations are generally built into the internal hardware. If no hardware is available, the compiler must be supplied with a package of floating-point software subroutines. Although the hardware method is more expensive, it is much more efficient than the software method; therefore, floating-point hardware is included in most computers and is omitted only in very small ones.

The two parts represent a number obtained by multiplying m times a radix r raised to the power e; that is, m x r^e. The mantissa may be a fraction or an integer. The position of the radix point and the value of the radix r are not included in the registers; for example, we may assume a fraction representation and a radix of 10. A floating-point number is said to be normalized if the most significant digit of the mantissa is nonzero.

So the mantissa contains the maximum possible number of significant digits. Zero cannot be normalized because it does not have a nonzero digit. Floating-point representation increases the range of numbers that a given register can hold. Consider a computer with 48-bit words. The 48 bits can be used to represent a floating-point number with 36 bits for the mantissa and 12 bits for the exponent.

The mantissa can accommodate 35 bits (excluding the sign), and if considered as an integer it can store a number as large as 2^35 - 1. This is approximately equal to 10^10, which is equivalent to a decimal number of 10 digits. Computers with shorter word lengths use two or more words to represent a floating-point number. An 8-bit microcomputer uses four words to represent one floating-point number.

One word of 8 bits is reserved for the exponent and the 24 bits of the other three words are used for the mantissa. Arithmetic operations with floating-point numbers are more complicated than with fixed-point numbers; their execution takes longer and requires more complex hardware.
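As a quick check of the 48-bit example above (a 36-bit mantissa field holding a sign bit plus 35 magnitude bits), the largest integer magnitude works out as claimed:

```python
# Check of the magnitude quoted for a 48-bit word with a 36-bit mantissa
# field (one sign bit plus 35 magnitude bits).

largest_mantissa = 2**35 - 1
print(largest_mantissa)        # 34359738367, on the order of 10**10
```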

Adding or subtracting two numbers requires first an alignment of the radix point since the exponent parts must be made equal before adding or subtracting the mantissas. We do this alignment by shifting one mantissa while its exponent is adjusted until it becomes equal to the other exponent.

Consider the sum of two floating-point numbers whose exponents differ by three. We can either shift the first number three positions to the left, or shift the second number three positions to the right. When the mantissas are stored in registers, shifting to the left causes a loss of most significant digits, while shifting to the right causes a loss of least significant digits.

The second method is preferable because it only reduces the accuracy, while the first method may cause an error. The usual alignment procedure is to shift the mantissa that has the smaller exponent to the right by a number of places equal to the difference between the exponents.
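The alignment rule can be sketched in Python. The operand values below are illustrative (a pair whose exponents differ by three); the integer scaling of the mantissas is our own modeling choice.

```python
# Sketch of mantissa alignment and addition: mantissas are 7-digit integers
# representing fractions, so a pair (m, e) stands for (m / 10**7) * 10**e.

def align_and_add(m1, e1, m2, e2):
    """Align the operand with the smaller exponent, then add the mantissas."""
    if e1 < e2:                          # make the first operand the larger one
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 //= 10 ** (e1 - e2)               # shift right: least significant digits lost
    return m1 + m2, e1

# .5372400 x 10**2 + .1580000 x 10**-1 = 53.724 + 0.0158
print(align_and_add(5372400, 2, 1580000, -1))   # (5373980, 2), i.e. .5373980 x 10**2
```

The integer division makes the loss of accuracy explicit: the three low-order digits of the smaller operand are discarded, which is why right-shifting only reduces precision rather than producing an outright error.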

Now the mantissas can be added. An overflow can be corrected easily by shifting the sum once to the right and incrementing the exponent. When two numbers are subtracted, the result may contain most significant zeros; such a result is said to have an underflow. To normalize a number that contains an underflow, we shift the mantissa to the left and decrement the exponent until a nonzero digit appears in the first position.

In this case it is necessary to shift left twice to obtain a normalized result. In most computers a normalization procedure is performed after each operation to ensure that all results are in normalized form. Floating-point multiplication and division do not require an alignment of the mantissas: the product is formed by multiplying the two mantissas and adding the exponents, and division is performed by dividing the mantissas and subtracting the exponents.

The operations done with the mantissas are the same as in fixed-point numbers, so the two can share the same registers and circuits. The operations performed with the exponents are: compared and incremented (for aligning the mantissas), added and subtracted (for multiplication and division), and decremented (to normalize the result).

There is a fourth representation as well, known as the biased exponent. In this representation, the sign bit is removed from being a separate entity. The bias is a positive number that is added to each exponent as the floating-point number is formed, so that internally all exponents are positive. The following example clarifies this type of representation.

Consider an exponent that ranges from -50 to 49. Internally, it is represented by two digits (without a sign) by adding to it a bias of 50. This way, the exponents are represented in registers as positive numbers in the range of 00 to 99. Positive exponents in registers have the range of numbers from 99 to 50; the subtraction of 50 gives the positive values from 49 to 0.

Negative exponents are represented in registers in the range from 49 to 00; the subtraction of 50 gives the negative values from -1 to -50. Biased exponents have the advantage that they contain only positive numbers.
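The bias-of-50 scheme can be written out directly. This is a small sketch of the encoding described above; the function names are ours.

```python
# The bias-of-50 exponent scheme: stored = e + 50, so every stored exponent
# is a positive two-digit number in 00..99 and compares correctly as unsigned.

BIAS = 50

def encode(e):
    """True exponent in -50..49 -> stored biased exponent in 00..99."""
    assert -50 <= e <= 49
    return e + BIAS

def decode(stored):
    """Stored biased exponent in 00..99 -> true exponent."""
    return stored - BIAS

print(encode(-50), encode(0), encode(49))   # 0 50 99
print(encode(-3) < encode(2))               # True: unsigned comparison orders
                                            # the true exponents correctly
```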

It then becomes simpler to compare their relative magnitudes without being concerned with their signs. Another advantage is that the smallest possible biased exponent contains all zeros.

The floating-point representation of zero is then a zero mantissa and the smallest possible exponent. As a rule, the same registers and adder used for fixed-point arithmetic are used for processing the mantissas. The difference lies in the way the exponents are handled. The register organization for floating-point operations is shown in Fig. Each register is subdivided into two parts. The mantissa part has the same uppercase letter symbols as in fixed-point representation.

The exponent part uses the corresponding lowercase letter symbol. Thus the AC has a mantissa whose sign is in As and a magnitude that is in A. The diagram shows the most significant bit of A, labeled A1; the bit in this position must be a 1 for the number to be normalized.

Note that the symbol AC represents the entire register, that is, the concatenation of As, A, and a. A parallel adder adds the two mantissas and loads the sum into A and the carry into E. A separate parallel adder can be used for the exponents. The exponents do not have a distinct sign bit, because they are biased and are represented as positive quantities. It is assumed that the exponent range is large enough that the chance of an exponent overflow is very remote, so exponent overflow will be neglected.

The exponents are also connected to a magnitude comparator that provides three binary outputs to indicate their relative magnitude.

The number in the mantissa will be taken as a fraction, so the binary point is assumed to reside to the left of the magnitude part. Integer representation for floating point causes certain scaling problems during multiplication and division; to avoid these problems, we adopt a fraction representation. The numbers in the registers should initially be normalized, and after each arithmetic operation the result will be normalized.

Thus all floating-point operands are always normalized. The sum or difference is formed in the AC. The algorithm can be divided into four consecutive parts:

1. Check for zeros.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalize the result.

A floating-point number that is 0 cannot be normalized. If such a number is used in a computation, the result may also be zero. Instead of checking for zeros during the normalization process, we check for zeros at the beginning and terminate the process if necessary. The alignment of the mantissas must be carried out prior to their addition or subtraction.

After the mantissas are added or subtracted, the result may be un-normalized. The normalization procedure ensures that the result is normalized before it is transferred to memory. For adding or subtracting two floating-point binary numbers, if BR is equal to zero, the operation is stopped, with the value in the AC being the result.

If neither number is equal to zero, we proceed to align the mantissas. The magnitude comparator attached to exponents a and b provides three outputs that indicate their relative magnitudes. If the two exponents are equal, we go on to perform the arithmetic operation. If the exponents are not equal, the mantissa having the smaller exponent is shifted to the right and its exponent is incremented.

This process is repeated until the two exponents are equal. The addition and subtraction of the two mantissas is similar to the fixed-point addition and subtraction algorithm presented in Fig.

The magnitude part is added or subtracted depending on the operation and the signs of the two mantissas. If an overflow occurs when the magnitudes are added, it is transferred into flip-flop E; the sum is then shifted once to the right and the exponent incremented so that the result maintains the correct value. No underflow can occur in this case, because the original mantissa that was not shifted during the alignment was already in a normalized position.

If the magnitudes were subtracted, the result may be zero or may have an underflow. If the mantissa is equal to zero, the entire floating-point number in the AC is cleared to zero.

Otherwise, the mantissa must have at least one bit that is equal to 1. The mantissa has an underflow if the most significant bit, in position A1, is 0; in that case, the mantissa is shifted left and the exponent decremented.

Figure: Addition and subtraction of floating-point numbers

To perform arithmetic operations with decimal data, it is possible to convert the input decimal numbers to binary, perform all calculations with binary numbers, and convert the results back into decimal.
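Before turning to decimal arithmetic, the four-part floating-point addition algorithm above can be sketched end to end. This is a simplified model of our own: the mantissa is an N-bit integer m with 2^(N-1) <= m < 2^N standing for a normalized binary fraction (value = m x 2^(e-N)); subtraction and underflow normalization are omitted for brevity.

```python
# Sketch of the four-part floating-point addition algorithm for binary
# operands: check zeros, align, add, normalize on overflow.

N = 8                                    # mantissa bits in this sketch

def fp_add(m1, e1, m2, e2):
    # 1. Check for zeros (a zero mantissa cannot be normalized).
    if m1 == 0:
        return m2, e2
    if m2 == 0:
        return m1, e1
    # 2. Align: shift the mantissa with the smaller exponent to the right.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 >>= (e1 - e2)
    # 3. Add the mantissas.
    m, e = m1 + m2, e1
    # 4. Normalize: a carry out shifts the sum right and increments the exponent.
    if m >= (1 << N):
        m >>= 1
        e += 1
    return m, e

print(fp_add(192, 3, 160, 3))            # (176, 4): 6.0 + 5.0 = 11.0
```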

This may be an efficient method in applications requiring a large number of calculations and a relatively small amount of input and output data.

When the application calls for a large amount of input-output and a relatively small number of arithmetic calculations, it becomes convenient to do the internal arithmetic directly with the decimal numbers. Computers that can do decimal arithmetic must store the decimal data in binary-coded form. The decimal numbers are then applied to a decimal arithmetic unit, which can execute decimal arithmetic micro-operations. Electronic calculators invariably use an internal decimal arithmetic unit, since inputs and outputs are frequent.

There is little reason to convert the keyboard input numbers to binary and then convert the displayed results back to decimal, since this process requires special circuits and takes longer to execute. Many computers have hardware for arithmetic calculations with both binary and decimal data; users can specify with programmed instructions whether they want the computer to do calculations with binary or decimal data. A decimal arithmetic unit is a digital function that performs decimal micro-operations.

It can add or subtract decimal numbers. The unit accepts coded decimal numbers and produces results in the same binary code. A single-stage decimal arithmetic unit has nine binary input variables and five binary output variables, since a minimum of four bits is required to represent each coded decimal digit: each stage has four inputs for the augend digit, four inputs for the addend digit, and an input carry. The outputs need four terminals for the sum digit and one for the output carry.

Of course, there is a wide range of possible circuit configurations depending on the code used to represent the decimal digits. Since each input digit does not exceed 9, the output sum cannot be greater than 9 + 9 + 1 = 19, the 1 in the sum being an input carry. Suppose we apply two BCD digits to a 4-bit binary adder. The adder will form the sum in binary and produce a result that may range from 0 to 19. These binary numbers are listed in Table 4. K is the carry and the subscripts under the letter Z represent the weights 8, 4, 2, and 1 that can be assigned to the four bits in the BCD code.

The first column in the table lists the binary sums as they appear in the outputs of a 4-bit binary adder. The output sum of two decimal digits must be represented in BCD and should appear in the form listed in the second column of the table.

The problem is to find a simple rule so that the binary number in the first column can be converted to the correct BCD digit representation of the number in the second column.
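The rule in question is the standard BCD correction: when the binary sum exceeds 9 (or produces a carry), adding 6 skips the six unused codes 1010 through 1111. A sketch, with function names of our own:

```python
# BCD digit addition with the add-6 correction: add the digits in binary,
# and when the binary sum exceeds 9, add 6 to skip the six invalid codes
# and generate the decimal carry.

def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits (0-9) plus a carry; returns (carry_out, sum_digit)."""
    s = a + b + carry_in          # binary sum, at most 9 + 9 + 1 = 19
    if s > 9:
        s += 6                    # correction: skip codes 1010..1111
        return 1, s & 0b1111      # K = 1, Z8 Z4 Z2 Z1 = the low four bits
    return 0, s

print(bcd_digit_add(8, 5))        # (1, 3): 8 + 5 = 13 in BCD
```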

It is apparent that when the binary sum is equal to or less than 1001, no conversion is needed; when it is greater, the addition of binary 0110 (decimal 6) converts it to the correct BCD representation and also produces the required output carry.

This revised text is spread across fifteen chapters, with substantial updates to include the latest developments in the field.

The first eight chapters of the book focus on hardware design and computer organization, while the remaining seven chapters introduce the functional units of a digital computer. The pedagogy of the book has been enhanced to enable learners to assess their understanding of the key concepts.

