3.1 Number Systems and Computer Arithmetic


Number systems form the foundation of digital computing. Decimal, binary, and hexadecimal systems each play crucial roles in representing and manipulating data. Understanding these systems is key to grasping how computers process information at the most basic level.

Converting between number systems and performing arithmetic operations in binary and hexadecimal are essential skills. These operations underpin computer functionality, but they also reveal limitations in computer arithmetic that can impact scientific computing and numerical analysis.

Number Systems

Decimal, binary, and hexadecimal systems

  • Decimal system uses digits 0-9 with positional notation based on powers of 10 (10^0, 10^1, 10^2, ...)
  • Binary system uses digits 0 and 1 with positional notation based on powers of 2 (2^0, 2^1, 2^2, ...) and underpins computer operations
  • Hexadecimal system uses digits 0-9 and letters A-F with positional notation based on powers of 16 (16^0, 16^1, 16^2, ...) and compactly represents binary data
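
To make positional notation concrete, here is a minimal Python sketch (an illustration only) that expands binary 1011 and hexadecimal 2F digit by digit and checks each result against Python's built-in base-aware parser:

```python
# Positional notation: a numeral's value is the sum of digit * base**position.
bits = "1011"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value, int(bits, 2))          # 11 11

hex_digits = "2F"
value = sum(int(d, 16) * 16**i for i, d in enumerate(reversed(hex_digits)))
print(value, int(hex_digits, 16))   # 47 47
```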

Conversion between number systems

  • Decimal to binary conversion divides by 2 repeatedly, recording remainders
  • Binary to decimal conversion multiplies each digit by its corresponding power of 2 and sums results
  • Decimal to hexadecimal conversion divides by 16 repeatedly, using remainders for hex digits
  • Hexadecimal to decimal conversion multiplies each digit by its corresponding power of 16 and sums results
  • Binary to hexadecimal conversion groups bits in sets of four, starting from the least significant bit
  • Hexadecimal to binary conversion expands each hex digit to four binary digits (several of these conversions are sketched in code below)
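
The repeated-division and bit-grouping procedures above can be sketched in a few lines of Python; this hand-rolled version is purely illustrative (Python's bin(), hex(), and int() already do the same work natively):

```python
def dec_to_binary(n: int) -> str:
    """Decimal -> binary by repeated division by 2, collecting remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # remainder is the next bit, LSB first
        n //= 2
    return "".join(reversed(digits))

def binary_to_hex(bits: str) -> str:
    """Binary -> hexadecimal by grouping bits in sets of four."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

print(dec_to_binary(45), binary_to_hex("101101"))   # 101101 2D
```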

Arithmetic in binary and hexadecimal

  • Binary addition follows carry and sum rules and detects overflow when a carry extends beyond the available bits (see the sketch after this list)
  • Binary subtraction employs borrow and difference rules or uses the two's complement method
  • Binary multiplication uses the shift-and-add algorithm, or Booth's algorithm for signed numbers
  • Binary division applies restoring or non-restoring algorithms
  • Hexadecimal addition and subtraction perform digit-by-digit operations with carry/borrow
  • Hexadecimal multiplication and division often convert to decimal, perform operation, then convert back to hexadecimal
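
Here is a minimal Python sketch of fixed-width binary arithmetic; the 8-bit width and helper names are illustrative choices, and the mask emulates the finite register that makes overflow possible:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # 0xFF keeps results to 8 bits

def add8(a: int, b: int) -> tuple[int, bool]:
    """8-bit addition; overflow flagged when the carry leaves bit 7."""
    total = a + b
    return total & MASK, total > MASK

def twos_complement(a: int) -> int:
    """Negate via two's complement: invert the bits and add one."""
    return (~a + 1) & MASK

def sub8(a: int, b: int) -> int:
    """Subtraction as addition of the two's complement."""
    return (a + twos_complement(b)) & MASK

result, carry_out = add8(0b11110000, 0b00100000)   # 240 + 32 = 272 > 255
print(bin(result), carry_out)                      # 0b10000 True
print(bin(sub8(0b1010, 0b0011)))                   # 10 - 3 = 7 -> 0b111
```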

Limitations of computer arithmetic

  • Finite precision representation uses fixed-point or floating-point numbers (IEEE 754 standard)
  • Rounding errors occur through chopping or rounding to nearest, propagate through calculations
  • Overflow and underflow arise when numbers exceed representable range, impact scientific computations
  • Machine epsilon denotes the smallest positive number that, when added to 1, yields a result greater than 1
  • Catastrophic cancellation results in loss of significance during floating-point subtraction
  • Ill-conditioned problems exhibit high sensitivity to small input changes
  • Mitigation strategies include higher-precision arithmetic, numerically stable algorithms, and interval arithmetic for error bounds (several of these pitfalls are demonstrated below)
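
A short Python demonstration of several of these pitfalls on IEEE 754 doubles (Python's float):

```python
import sys

# Rounding error: 0.1 has no exact binary representation.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

# Machine epsilon: the gap between 1.0 and the next representable double.
print(sys.float_info.epsilon)                   # 2.220446049250313e-16
print(1.0 + sys.float_info.epsilon / 2 == 1.0)  # True: the half-gap rounds away

# Catastrophic cancellation: subtracting nearly equal numbers
# wipes out most of the significant digits.
a, b = 1.0000001, 1.0000000
print(a - b)   # close to 1e-07, but several trailing digits are wrong
```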

Key Terms to Review (19)

Addition: Addition is a fundamental arithmetic operation that combines two or more numbers to obtain their total or sum. This process is essential in various number systems, including binary and decimal, as it lays the groundwork for more complex calculations in computer arithmetic. The way addition works can vary based on the number system being used, impacting how computers perform arithmetic operations at a basic level.
Associativity: Associativity is a property of some binary operations that states the way in which operands are grouped does not affect the result of the operation. This means that for an operation * to be associative, the equation (a * b) * c must yield the same result as a * (b * c) for any operands a, b, and c. Understanding associativity is crucial for effective computation and can impact how calculations are structured, especially in number systems and arithmetic.
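Floating-point addition is a standard example of an operation that is associative in exact arithmetic but not in practice; a quick illustrative Python check:

```python
a, b, c = 1e20, -1e20, 1.0
print((a + b) + c)   # 1.0: the huge terms cancel first, so 1.0 survives
print(a + (b + c))   # 0.0: 1.0 is absorbed into -1e20 before the cancellation
```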
Base conversion: Base conversion is the process of changing a number from one numeral system to another. This is essential in computer arithmetic and number systems, as it allows for the representation of numbers in different bases, such as binary, decimal, and hexadecimal, which are fundamental to computing and digital systems.
Binary: Binary is a number system that uses only two digits, 0 and 1, to represent values. This system is fundamental to computer science and digital electronics because it aligns perfectly with the on/off states of electrical signals in computers. The binary system allows for efficient data processing and storage, as it simplifies the complex operations required in computing.
Binary to decimal conversion: Binary to decimal conversion is the process of translating a number expressed in binary form (base-2) into its equivalent value in decimal form (base-10). This is important because computers use binary systems for data processing, while humans typically use decimal systems for daily calculations. Understanding this conversion allows for effective communication between digital systems and human users.
Commutativity: Commutativity is a fundamental property in mathematics stating that the order of operands does not affect the result of an operation. In the context of arithmetic, it means that for any two numbers, say 'a' and 'b', the equation 'a + b = b + a' holds true for addition, and 'a * b = b * a' holds for multiplication. This principle is critical when understanding number systems and how operations are performed in computer arithmetic.
Decimal: A decimal is a numerical system that uses base 10, employing digits from 0 to 9 to represent values. This system is the most widely used number system for everyday counting and calculations, allowing for both whole numbers and fractions through the use of a decimal point to separate the integer part from the fractional part. Decimals are crucial in scientific computing as they provide a way to represent real numbers accurately in calculations and data representation.
Division: Division is a fundamental arithmetic operation that involves splitting a quantity into equal parts or groups. In computing, division can be more complex due to the representation of numbers in different number systems and how computers handle arithmetic operations. It's essential to understand how division interacts with both binary and decimal systems, especially regarding precision and potential errors in computation.
Floating-point representation: Floating-point representation is a method of encoding real numbers in a way that can support a wide range of values by using a fixed number of digits, allowing for both very large and very small numbers. This representation is crucial in scientific computing as it enables calculations involving decimal values while also introducing challenges like precision and rounding errors. The way numbers are represented directly influences the errors that arise in computations, which is essential to understand for anyone working with numerical methods.
Hexadecimal: Hexadecimal is a base-16 number system that uses sixteen symbols to represent values: the digits 0-9 and the letters A-F, where A represents 10, B represents 11, C represents 12, D represents 13, E represents 14, and F represents 15. This system is widely used in computing and digital electronics because it can represent large binary numbers in a more compact and readable format. Hexadecimal is particularly useful for simplifying the representation of memory addresses and color codes in programming.
Integer overflow: Integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside the range that can be represented within a given number of bits. This often leads to unexpected results, as the value wraps around to the opposite end of the number range, which can cause serious issues in calculations and algorithms. Understanding integer overflow is crucial in computer arithmetic, as it highlights limitations of number systems used in computing.
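Python's own integers are unbounded, so this sketch (with an illustrative helper) emulates 8-bit wraparound with a mask to show the effect:

```python
def wrap_int8(n: int) -> int:
    """Reinterpret an integer as a signed 8-bit value, wrapping on overflow."""
    n &= 0xFF                            # keep only the low 8 bits
    return n - 256 if n >= 128 else n    # high bit set means negative

print(wrap_int8(127 + 1))    # -128: INT8_MAX + 1 wraps to INT8_MIN
print(wrap_int8(-128 - 1))   # 127: wrapping the other way
```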
Machine Epsilon: Machine epsilon is the smallest positive number that, when added to one, results in a value distinguishably greater than one in a computer's floating-point arithmetic. This concept is crucial for understanding numerical precision and the limitations of computer calculations, as it directly relates to how errors can arise in scientific computing due to the finite representation of numbers. Recognizing machine epsilon helps identify the sources of errors that can occur when performing arithmetic operations with floating-point numbers.
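Machine epsilon can be found experimentally by halving a candidate until 1 + eps is no longer distinguishable from 1; a short Python sketch:

```python
import sys

eps = 1.0
while 1.0 + eps / 2 > 1.0:   # stop once the half-step rounds away
    eps /= 2

print(eps)                              # 2.220446049250313e-16 for doubles
print(eps == sys.float_info.epsilon)    # True
```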
Multiplication: Multiplication is a mathematical operation that represents the process of combining equal groups to find the total number of items. This operation is fundamental in various contexts, particularly in number systems and computer arithmetic, where it serves as a building block for more complex calculations and algorithms, enabling efficient computation and data representation.
Rounding Error: Rounding error refers to the discrepancy between the exact mathematical value and its approximation due to rounding during numerical computations. This often occurs in digital systems where numbers are represented in a finite format, leading to inaccuracies that can compound through calculations. Understanding rounding error is crucial for evaluating the precision and reliability of numerical methods, especially in computer arithmetic and floating-point representations.
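A familiar Python illustration: ten copies of 0.1 do not sum exactly to 1.0, though compensated summation (math.fsum) recovers the exact result:

```python
from math import fsum

total = sum([0.1] * 10)
print(total)              # 0.9999999999999999
print(total == 1.0)       # False
print(fsum([0.1] * 10))   # 1.0: compensated summation absorbs the drift
```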
Sign-magnitude representation: Sign-magnitude representation is a method used in computing to represent both positive and negative integers by using one bit for the sign and the remaining bits for the magnitude of the number. In this system, a '0' indicates a positive number while a '1' denotes a negative number, which allows for simple representation of negative values. This format is important for computer arithmetic as it helps facilitate the understanding of how numbers are stored and manipulated within digital systems.
Significant digits: Significant digits, also known as significant figures, are the digits in a number that carry meaningful information about its precision. This concept is crucial in scientific computing, as it determines how much uncertainty is associated with numerical values when performing calculations or measurements. Understanding significant digits helps to maintain the integrity of data and ensures that results are reported accurately in relation to the precision of the inputs used.
Subtraction: Subtraction is a mathematical operation that represents the process of taking one quantity away from another. In the context of number systems and computer arithmetic, subtraction is crucial for performing calculations and manipulating numerical data. It plays a significant role in various algorithms, error detection, and even in representing negative numbers in different number systems.
Truncation Error: Truncation error refers to the difference between the true value of a mathematical operation and its approximation when a finite number of terms or steps are used. This type of error arises in numerical methods when an infinite process is approximated by a finite one, impacting the accuracy of solutions to differential equations, numerical differentiation, and other computations.
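As a quick illustration, truncating the Taylor series e = sum of 1/k! after n terms leaves a truncation error equal to the dropped tail; a Python sketch:

```python
import math

def exp1_truncated(n_terms: int) -> float:
    """Approximate e by summing only the first n_terms of its Taylor series."""
    return sum(1 / math.factorial(k) for k in range(n_terms))

for n in (3, 5, 10):
    approx = exp1_truncated(n)
    print(n, approx, abs(math.e - approx))   # error shrinks as terms are added
```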
Two's complement: Two's complement is a mathematical representation of integers in binary form that allows for the inclusion of both positive and negative numbers. This system simplifies arithmetic operations, particularly addition and subtraction, by using a single representation for zero and leveraging the properties of binary addition to handle negative values effectively.
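A minimal Python sketch of two's complement at 8 bits (the width and helper names are arbitrary illustrative choices):

```python
WIDTH = 8

def encode(x: int) -> int:
    """Store x in WIDTH bits; negatives wrap to 2**WIDTH - |x|."""
    return x & ((1 << WIDTH) - 1)

def decode(bits: int) -> int:
    """Recover the signed value: a set sign bit means subtract 2**WIDTH."""
    return bits - (1 << WIDTH) if bits & (1 << (WIDTH - 1)) else bits

print(bin(encode(-5)))      # 0b11111011
print(decode(0b11111011))   # -5
s = (encode(-5) + encode(3)) & 0xFF
print(s, decode(s))         # 254 -2: plain addition works on encoded values
```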