First off, good news: I’ve officially been accepted into Fullstack Academy and will be joining the next immersive cohort. I’m extremely excited for this opportunity to become a software engineer under the tutelage of such a fresh and cutting-edge group of developers. I’m fully committed to a lifetime of building and continually improving the information superhighway I’ve come to take for granted — the internet.
And what is the internet? If I bracket ontological analysis — one that would begin by asking the implicitly prior question of what I mean when I ask for “what” something “is” — I am tempted to answer: 1s and 0s. Binary code.
Similar to my prior post, I’ve recently had another “A-HA” insight regarding CS fundamentals, this time involving the binary encoding of data values in JavaScript. Reading through MDN’s JavaScript guide, I came across bitwise operators. While it was not the first time I had come across these operators, it was (as with so much in programming) the first time I was able to think about it clearly enough to understand how little I understood it. Cue deep dive into binary.
Given a 32-bit value space, since each bit can only hold one of two values (on or off, 0 or 1), this space can be used as a base-2 counting system to represent, if unsigned, all integers from 0 to 2^32 - 1 (0 to 4294967295), and if signed, all integers from -2^31 to 2^31 - 1 (-2147483648 to 2147483647). If you start counting, a pattern emerges:
| n | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
|---:|:---:|:---:|:---:|:---:|:---:|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 |
| 2 | 0 | 0 | 0 | 1 | 0 |
| 3 | 0 | 0 | 0 | 1 | 1 |
| 4 | 0 | 0 | 1 | 0 | 0 |
| 5 | 0 | 0 | 1 | 0 | 1 |
| 6 | 0 | 0 | 1 | 1 | 0 |
| 7 | 0 | 0 | 1 | 1 | 1 |
| 8 | 0 | 1 | 0 | 0 | 0 |
| 9 | 0 | 1 | 0 | 0 | 1 |
| 10 | 0 | 1 | 0 | 1 | 0 |
| 11 | 0 | 1 | 0 | 1 | 1 |
| 12 | 0 | 1 | 1 | 0 | 0 |
| 13 | 0 | 1 | 1 | 0 | 1 |
| 14 | 0 | 1 | 1 | 1 | 0 |
| 15 | 0 | 1 | 1 | 1 | 1 |
| 16 | 1 | 0 | 0 | 0 | 0 |
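As an aside, the 32-bit bounds mentioned earlier can be checked directly in JavaScript, whose bitwise operators coerce their operands to signed 32-bit integers. A quick sketch you can paste into a Node or browser console:

```javascript
// JavaScript's bitwise operators work on signed 32-bit integers,
// so the signed bounds show up directly:
const maxInt32 = 2 ** 31 - 1; // 2147483647
const minInt32 = -(2 ** 31);  // -2147483648

// "| 0" coerces to signed 32-bit; one past the max wraps around to the min:
console.log((maxInt32 + 1) | 0); // -2147483648

// The zero-fill right shift (>>>) coerces to *unsigned* 32-bit instead,
// so -1 (all 32 bits set) becomes the unsigned maximum:
console.log(-1 >>> 0); // 4294967295, i.e. 2^32 - 1
```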
Notice the rightmost digit: it is always either a one or a zero, and it alternates with each count. That is, given a non-negative integer x, if its binary representation ends in 1, then the representations of both x+1 and x-1 will end in 0; and if x's representation ends in 0, then x+1's and x-1's will both end in 1. This gets more complicated in a signed environment, but the point here is to notice how the 0s and 1s are distributed: the lengths of the runs grow as powers of two as you move left.
Moving to the second digit from the right, notice it now oscillates half as often as the first: instead of alternating between 0 and 1 with each count, the second column gives 2 zeros, then 2 ones, then 2 zeros, then 2 ones. In the third column, 4 zeros, 4 ones, 4 zeros, 4 ones. In general, the column number n (counting from right to left and starting at 1) determines the cycle: each column's pattern repeats every 2^n counts, with the first half of the cycle zeros and the second half ones.
In column 4, then, a “cycle” is 2^4 “long,” meaning there will be (2^4) / 2 zeros followed by (2^4) / 2 ones. This means that, for any given integer x within the limits of the 32-bit scheme we are working with, you can build its binary representation by first finding its leftmost 1: the largest 2^n that is less than or equal to x, such that subtracting it from x leaves a remainder smaller than 2^n itself. You can then determine each subsequent digit by checking whether the remainder is at least the next-smaller power of two: if it is, that digit is a 1 and you subtract again; if not, it is a 0. Either way, the remainder carries over to the next power until 2^0 is reached. At that point you have the complete binary representation of your given integer.
This insight helped me start thinking outside the base-10 box, and again, always exciting to feel the power of knowledge. Until next time!