Binary Calculator
Convert between binary, decimal, hexadecimal, and octal. Perform binary arithmetic with step-by-step solutions and multiple bit-length displays.
Quick Answer
Binary uses base-2 (0 and 1). To convert decimal to binary, repeatedly divide by 2 and read remainders bottom-up. For example, 42 in binary is 101010 (32+8+2). Hexadecimal uses base-16 (0-9, A-F) and octal uses base-8 (0-7).
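The divide-by-2 procedure described above can be sketched in a few lines of Python (a minimal illustration, not the tool's internal implementation):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by
    repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next bit
        n //= 2                  # integer-divide for the next step
    return "".join(reversed(bits))  # read remainders bottom-up

print(to_binary(42))  # 101010
```

Reversing at the end is what "read remainders bottom-up" means: the first remainder produced is the least significant bit.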
Base Converter
Enter a number in any base and see it converted to all other bases instantly.
Conversions
Binary Representations
Bit Position Values
| Position | 5 | 4 | 3 | 2 | 1 | 0 |
|---|---|---|---|---|---|---|
| Power | 2⁵ | 2⁴ | 2³ | 2² | 2¹ | 2⁰ |
| Value | 32 | 16 | 8 | 4 | 2 | 1 |
| Bit | 1 | 0 | 1 | 0 | 1 | 0 |
32 + 8 + 2 = 42
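The table above can be reproduced programmatically. This short sketch walks the bits of 101010 from right to left, printing each set bit's positional value and summing them:

```python
bits = "101010"
total = 0
for pos, bit in enumerate(reversed(bits)):  # pos 0 is the rightmost bit
    if bit == "1":
        print(f"position {pos}: 2^{pos} = {2 ** pos}")
        total += 2 ** pos
print("total =", total)  # 32 + 8 + 2 = 42
```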
About This Tool
The Binary Calculator converts numbers between binary (base-2), decimal (base-10), hexadecimal (base-16), and octal (base-8) number systems. It also performs binary arithmetic operations with step-by-step explanations. This tool is essential for computer science students, programmers, and anyone working with low-level data representation.
Understanding Number Bases
A number base (or radix) defines how many unique digits are used. Decimal uses 10 digits (0-9), binary uses 2 (0 and 1), octal uses 8 (0-7), and hexadecimal uses 16 (0-9 and A-F). Each position in a number represents a power of the base. In decimal, 42 means 4×10¹ + 2×10⁰ = 42. In binary, 101010 means 1×32 + 0×16 + 1×8 + 0×4 + 1×2 + 0×1 = 42.
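The positional rule works the same way for any radix. A generic evaluator (a hypothetical helper for illustration) makes this explicit, and Python's built-in `int(string, base)` confirms the results:

```python
def from_digits(digits: list[int], base: int) -> int:
    """Evaluate positional notation: each step multiplies the running
    value by the base, so digit i ends up weighted by base**position."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(from_digits([4, 2], 10))           # 4*10 + 2 = 42
print(from_digits([1, 0, 1, 0, 1, 0], 2))  # 32 + 8 + 2 = 42
print(int("101010", 2))                  # built-in equivalent: 42
```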
Why Binary Matters in Computing
Computers use binary because digital circuits have two states: on (1) and off (0). Every piece of data in a computer, from text to images to programs, is ultimately represented as sequences of 0s and 1s. Understanding binary is fundamental to computer science, networking (IP addresses), file permissions (Unix chmod), and color codes. A single binary digit is called a bit, and 8 bits make a byte.
Hexadecimal in Practice
Hexadecimal is widely used in computing because each hex digit represents exactly 4 binary bits (a nibble). This makes it a compact way to express binary values. Memory addresses, color codes (#FF5733), MAC addresses, and many debugging tools use hex notation. Converting between hex and binary is straightforward: each hex digit maps to a 4-bit binary pattern (A = 1010, F = 1111, etc.).
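The digit-to-nibble mapping can be demonstrated with a small sketch: each hex digit is converted independently to its 4-bit pattern, so no arithmetic on the whole number is needed:

```python
def hex_to_binary(hex_str: str) -> str:
    """Expand each hex digit into its 4-bit binary pattern (a nibble)."""
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("AF"))  # A=1010, F=1111 -> 10101111
print(hex_to_binary("FF"))  # 11111111
```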
Octal Usage
Octal was historically important in early computing when systems used word sizes that were multiples of 3 bits. Today, it is primarily used in Unix/Linux file permissions (chmod 755), where each octal digit represents 3 permission bits (read, write, execute). While less common than hexadecimal in modern computing, understanding octal is still valuable for systems administration and computer science education.
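The 3-bits-per-octal-digit relationship is what makes chmod modes readable. This sketch (an illustrative helper, not part of any standard library) expands an octal mode string into the familiar rwx triplets:

```python
def describe_mode(octal_str: str) -> str:
    """Expand a chmod-style octal mode (e.g. '755') into rwx triplets."""
    flags = "rwx"
    out = []
    for digit in octal_str:
        bits = format(int(digit, 8), "03b")  # 3 permission bits per digit
        out.append("".join(f if b == "1" else "-"
                           for f, b in zip(flags, bits)))
    return "".join(out)

print(describe_mode("755"))  # rwxr-xr-x
print(describe_mode("644"))  # rw-r--r--
```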
Frequently Asked Questions
How do I convert decimal to binary by hand?
What's the difference between 8-bit, 16-bit, and 32-bit?
Why is hexadecimal used for colors?
How does binary addition work?
Can negative numbers be represented in binary?