Dynamic programming is one of those things that seems deceptively simple to the novice yet ridiculously obvious to the well-trained: the problems it solves require an understanding of execution context and runtime that is neither merely theoretical nor merely practical. Like so much in computer science, knowing what it is is different from knowing how to do it, and both are altogether different from being able to do it.
The point of dynamic programming is efficiency: improving runtime by discerning and solving the subproblems an algorithm repeats; that is, where and how does an algorithm repeat itself? This is especially relevant to recursive algorithms, where the computer often has to perform the same computation in multiple recursive branches. The classic example is the Fibonacci sequence; below we have the standard recursive function and the memoized version side by side.
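A sketch of the two versions in Python (function and variable names are my own) might look like this. The naive version recomputes the same subproblems exponentially many times; the memoized version caches each result so it is computed once.

```python
def fib_naive(n):
    """Plain recursion: the fib(n-2) subtree is recomputed
    in full inside the fib(n-1) branch, giving exponential time."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)


def fib_memo(n, cache=None):
    """Memoized recursion: each fib(k) is computed once,
    stored in `cache`, and reused, giving linear time."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```

Both return the same values; the difference is purely in how much repeated work is done. For inputs beyond roughly n = 35, the naive version becomes noticeably slow while the memoized one stays instantaneous.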