The point of dynamic programming is efficiency: improving runtime by identifying and solving the subproblems an algorithm computes repeatedly; that is, where and how does an algorithm repeat itself? This is especially relevant to recursive algorithms, where the computer often performs the same computation in multiple recursive branches. The classic example is the Fibonacci sequence; below we have the standard recursive function and the memoized version side by side.
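A minimal sketch of the two versions (my own reconstruction; the function names and exact shape are illustrative):

```javascript
// Plain recursive Fibonacci: exponential time, since each call spawns
// two more and the same subproblems are recomputed across branches.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

// Memoized ("top-down") version: the memo object records each result,
// so every subproblem is computed only once.
function fibMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (memo[n] !== undefined) return memo[n];
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}
```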
While JavaScript notoriously has difficulty with large numbers, even at the modest, JS-handleable input of 25 we see a performance difference: 2-3ms for the non-memoized version and a blazing 0ms for the memoized version!
Two things to note. First, this kind of memoization is called “top-down” (for some, it is simply what “memoization” means). Second, a key JavaScript behavior makes it possible here: what is called “pass by reference.” All objects are passed to functions by reference, so in every recursive call (two calls per call) the same memo object is read and modified, making it a register of sorts we can consult to see whether we’ve already performed a certain computation. As stated earlier, this is the key to dynamic programming: improving algorithmic time complexity by omitting redundant steps.
]]>Binary is awesome!
]]>Along these lines, I will here discuss how I see O(n!).
O(n!) says that an algorithm’s work will increase factorially with input size n: the number of operations at size n is n times the number of operations at size n - 1.
Think of it as a loop whose iteration count equals n!; that is, an input of size n means n! iterations.
As we know from high school math, n! means 1 * 2 * 3 * … * n; this means we know something about any given input size’s neighboring iteration volumes: any sample input n1 has two predictable neighbor iteration volumes, ((n1 - 1)! * n1) and (n1! * (n1 + 1)).
This is how I see rate of change as central to the concept of algorithmic complexity represented by big O: reliably predicting the rate at which an algorithm’s work increases or decreases means that, given an input size of n, we know n’s neighboring operation volumes, making any specific increase or decrease to n irrelevant to the complexity class, which is why we drop constants.
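To make the factorial growth concrete, here is a sketch (not from the original post) of the classic O(n!) task: generating every ordering of an array. There are exactly n! orderings, so each extra element multiplies the work by the new n, just as described above.

```javascript
// Generate all permutations of an array recursively: for each element,
// permute the rest and prepend the chosen element to every result.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

// Neighboring "iteration volumes" in action:
// 3 elements yield 6 orderings, 4 yield 24 (6 * 4), 5 yield 120 (24 * 5).
```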
Next installment on O(log n) coming soon!
]]>So how did we begin? After initializing a Git repo and a Waffle workflow board, my team began with an ERD: Entity Relationship Diagram. This helped us walk through our database schema design at a high level prior to writing any actual code. We found we would need (at least) four models: Users, Items, Orders, and Reviews. Relationships? Users have many Reviews; Orders belong to many Items; Items have many Reviews and belong to many Orders; Reviews belong to many Items and belong to many Users (*phew*). We found this required a join table between Orders and Items, and this relationship proved to be tricky on the backend.
We are using Sequelize, and in order to set the many-to-many associations we had to promisify the setter; however, Sequelize has poor support for this kind of action and does not allow duplicates in the join table! So we are currently stuck, able to accept only a quantity of 1 per item per order. As it is technically not blocking our MVP, and we have only a week to have a deployable site, we have moved on, leaving that for a (theoretical) circle-back at a later date. In a “real world” scenario, we would have consulted with the product team to confirm MVP requirements, and if it was required we would have had to reconfigure our models, but as it is, we have plenty of other cod to catch.
Another not-a-bug-but-a-feature came up yesterday as I created our cart logic. To do so, I utilized Passport.js (only the local strategy for now; OAuth is on the docket for this coming week) and cookie-session (which saves the session in a client cookie as opposed to a backend session store). Passport serializes the user as they log in, i.e., places them on the req object for easy access, and cookie-session creates a session and sends the client a cookie in its first response. When the user adds an item to their cart, the server creates a cart array on the session, adds the item to it, and sends it back to the client. The cart then lives on the client side and can be edited by the user at will, whether they are logged in or not. So what’s the problem? Well, if more than one user uses the same browser to shop on our site (I’m looking at you, families), and one of them leaves their cart, even if they log out, the next user will inherit that cart information! Again, similar to the earlier issue, this is non-blocking in terms of MVP but it is something we would definitely need to fix for a production application. (The easiest fix would be to persist the cart in a new model and wipe the session cart on user logout, but this would only work for logged-in users.)
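Here is a sketch of the add-to-cart logic described above, written as a plain handler function so the session behavior can be followed without a running Express server. The shapes of `req` and `res`, and the use of `req.session` to stand in for what cookie-session exposes, are assumptions for illustration only.

```javascript
// Hypothetical add-to-cart handler. cookie-session persists whatever we
// put on req.session into the client-side cookie, so the cart follows
// the browser, not the user; that is exactly the shared-browser quirk
// described above.
function addToCart(req, res) {
  if (!req.session.cart) req.session.cart = []; // first item: create the cart
  req.session.cart.push(req.body.item);         // add the requested item
  res.json(req.session.cart);                   // echo the cart back
}
```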
Both of these bugs were awesome learning experiences in digging into online conversations on GitHub and SO. Next week, I’ll tackle OAuth, deploy the site to Heroku, and lastly move on to my hackathon project, a browser-based synthesizer app. Stay tuned!
]]>As I make my way through senior phase here at Fullstack, I will be pausing along the way to go a little deeper into facets of each of these amazing technologies. Stay tuned!
]]>And what is the internet? If I bracket ontological analysis — one that would begin by asking the implicitly prior question of what I mean when I ask for “what” something “is” — I am tempted to answer: 1s and 0s. Binary code.
Similar to my prior post, I’ve recently had another “A-HA” insight regarding CS fundamentals, this time involving the binary encoding of data values in JavaScript. Reading through MDN’s JavaScript guide, I came across bitwise operators. While it was not the first time I had come across these operators, it was (as with so much in programming) the first time I was able to think about it clearly enough to understand how little I understood it. Cue deep dive into binary.
Given a 32-bit value space, since each bit can hold only one of two values (on or off, 0 or 1), the space can be used as a base-2 counting system to represent, if unsigned, all integers from 0 to 2^32 - 1 (0 to 4,294,967,295), and if signed, all integers from -2^31 to 2^31 - 1 (-2,147,483,648 to 2,147,483,647). If you start counting, a pattern emerges:
| Decimal | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
| --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 |
| 2 | 0 | 0 | 0 | 1 | 0 |
| 3 | 0 | 0 | 0 | 1 | 1 |
| 4 | 0 | 0 | 1 | 0 | 0 |
| 5 | 0 | 0 | 1 | 0 | 1 |
| 6 | 0 | 0 | 1 | 1 | 0 |
| 7 | 0 | 0 | 1 | 1 | 1 |
| 8 | 0 | 1 | 0 | 0 | 0 |
| 9 | 0 | 1 | 0 | 0 | 1 |
| 10 | 0 | 1 | 0 | 1 | 0 |
| 11 | 0 | 1 | 0 | 1 | 1 |
| 12 | 0 | 1 | 1 | 0 | 0 |
| 13 | 0 | 1 | 1 | 0 | 1 |
| 14 | 0 | 1 | 1 | 1 | 0 |
| 15 | 0 | 1 | 1 | 1 | 1 |
| 16 | 1 | 0 | 0 | 0 | 0 |
Notice the rightmost digit: it is always either a 1 or a 0, and it alternates with each count. That is, given a non-negative integer x whose binary representation ends in 1, the representations of both x+1 and x-1 will end in 0; likewise, if x’s representation ends in 0, both x+1’s and x-1’s will end in 1. This gets complicated in a signed environment, but the point here is to notice the logarithmic distribution of the 0s and 1s.
Moving to the second digit from the right, notice that it oscillates at twice the period of the first: instead of alternating between 0 and 1 on every count, the second column gives 2 zeros, 2 ones, 2 zeros, 2 ones. The third column gives 4 zeros, 4 ones, 4 zeros, 4 ones. Thus the column number k (counting from right to left and starting at 1) corresponds to an exponent of 2: the runs of 0s and 1s in column k are 2^(k-1) long.
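This column pattern is easy to check with a bitwise shift, the very operators that prompted this deep dive (a small illustration of my own, not from the original post):

```javascript
// The k-th column from the right (1-indexed, matching the text) of
// integer n is the bit (n >> (k - 1)) & 1.
const column = (n, k) => (n >> (k - 1)) & 1;

// Column 3 (the 2^2 place) over 0..15: runs of four 0s and four 1s.
const col3 = Array.from({ length: 16 }, (_, n) => column(n, 3));
// col3 is [0,0,0,0, 1,1,1,1, 0,0,0,0, 1,1,1,1]
```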
In column 4, then, a “cycle” is 2^4 “long,” meaning there will be (2^4) / 2 zeros and then (2^4) / 2 ones. This means that, for any given integer x within the limits of the 32-bit scheme we are working with, you can build its binary representation by first finding its leftmost 1: the largest 2^n value that, when subtracted from x, leaves a non-negative remainder (necessarily smaller than 2^n itself). You then determine each subsequent digit by checking the remainder against the next-lower power of two: if the remainder is at least that power, the digit is a 1 and you subtract again; otherwise it is a 0. Either way, the remainder carries over to the next power until 2^0 is reached, at which point you have the binary representation of your integer.
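The subtraction procedure can be sketched as a function (a toy illustration of my own; in practice JavaScript’s built-in `(x).toString(2)` does the same job):

```javascript
// Convert a non-negative integer to its binary string using the
// subtraction method described above: find the leftmost 1, then walk
// down the powers of two, emitting a 1 and subtracting whenever the
// remainder still covers the current power.
function toBinary(x) {
  if (x === 0) return '0';
  let p = 0;
  while (2 ** (p + 1) <= x) p++; // highest power of 2 not exceeding x
  let bits = '';
  let remainder = x;
  for (; p >= 0; p--) {
    if (remainder >= 2 ** p) {
      bits += '1';
      remainder -= 2 ** p;
    } else {
      bits += '0';
    }
  }
  return bits;
}
```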
This insight helped me start thinking outside the base-10 box, and again, always exciting to feel the power of knowledge. Until next time!
]]>Not being a CS major, my way into code and the ways of programming has been iterative and, well, circular. From what I’ve gathered, this is not only not unusual, it is simply “the way it is.” To this point, in the introduction to Eloquent JavaScript (EJ), Haverbeke presents a primitive program that finds the sum of the numbers from 1 to 10. In my own words, this is how the program works:
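The program (my reconstruction of the kind of loop Haverbeke shows in the book’s introduction, so the exact form is approximate) keeps a running total alongside a counter:

```javascript
// Sum the numbers 1 through 10 with a simple while loop:
// `total` accumulates the running sum, `count` is the current number.
let total = 0, count = 1;
while (count <= 10) {
  total = total + count; // add the current number to the sum
  count = count + 1;     // move on to the next number
}
console.log(total);
// → 55
```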
Again, not being a CS major (an English major with a Master’s degree, in fact!), wrapping my mind around this very simple program was a bit of an “A-HA” moment. If there is one thing I absolutely enjoy about programming, it is that feeling of progress; the insight is real, and the things you learn make it possible to learn more things in an almost exponential way. Seeing the results of learning to code means feeling the power of programming. Until next time!
]]>