Dynamic Programming Basics
Dynamic programming is confusing for a lot of people. Even the name is strange, and that's by design. It was coined by Richard Bellman (the same Richard Bellman of the Bellman-Ford algorithm) as a deliberate term of obfuscation, to hide the fact that what he was up to at the RAND Corporation at the time was mathematical research.
So dynamic programming is kind of a meaningless term, but what does it actually refer to? Essentially, it is a technique for solving optimization problems that can be broken down into simpler sub-problems, and then solved recursively.
The simplest example of dynamic programming is designing a function that computes Fibonacci numbers. Let's recall what a simple recursive implementation looks like.
```python
def fib(n):
    if n <= 1:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)
```
One thing that is unfortunate about this recursive technique is that you end up recomputing a lot of sub-problems you have already solved as you work your way through the recursion. It would be better if our algorithm could remember those results, so that sub-problems we have already encountered can be skipped. Luckily we have a data structure that can do that for us: the hash table.
```python
def fib(n, memo=None):
    # The memo must persist across recursive calls, so it is passed down
    # rather than recreated on every call.
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```
This process of storing our results in a hash table is called memoization, and it is useful because hash table lookups take O(1) time. Note there is also another dynamic programming technique called the bottom-up approach that is sometimes used instead of memoization. I'm not going to go into it here, but just know that it exists and is an equally valid approach to solving problems. Usually either approach works, but for certain problems one might be a little simpler to understand and implement than the other.
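For reference, here is what the bottom-up version of the same function might look like. Instead of recursing down from n, it builds a table of results iteratively, starting from the base cases (a minimal sketch; the function name is my own):

```python
def fib_bottom_up(n):
    # Fill in results from the base cases upward; no recursion needed.
    if n <= 1:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Because it never recurses, the bottom-up version also sidesteps Python's recursion depth limit for large n.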
Now that you’ve seen this memoized Fibonacci function, you basically understand everything there is to know about dynamic programming. Of course, most problems that involve dynamic programming are more complicated than calculating Fibonacci numbers, but the steps towards solving the problem remain the same. Every dynamic programming problem will have the following elements:
- Reducing a problem into smaller sub-problems. You want to start by solving the simplest possible sub-problem, and that will be your base case to end the recursion.
- A recursive solution for solving sub-problems.
- Memoization.
Another way to think about it is that dynamic programming is a slightly more elegant/efficient version of a brute force solution. If you are trying to solve a problem where you necessarily have to consider every possibility among a list of possibilities, you are most likely dealing with a dynamic programming problem, and with any luck using these techniques will give you a solution that is in polynomial time.
There are two things that make dynamic programming problems difficult to solve.
The first is figuring out what sub-problem you need to solve. Unfortunately there isn't any magic technique for figuring this out. You just need to think about the problem you are dealing with. I was recently working on a problem that asked the following: given a number of cents (n), output the number of combinations of pennies, nickels, dimes, and quarters that are possible. Figuring out an elegant recursive technique for solving this is definitely not obvious! I'll give a hint though: for quarters the recursion is (q * 12) + ((q - 1) * 12) + ((q - 2) * 12), etc., where q is the number of quarters. Can you figure out what's special about the number 12?
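One standard decomposition for this coin problem (not necessarily the one hinted at above) is to define the sub-problem as "ways to make n cents using only the first k denominations," which avoids counting the same combination in different orders. A sketch, with all three DP elements present:

```python
def coin_combinations(n, coins=(1, 5, 10, 25), memo=None):
    # Count combinations of the given denominations that sum to n cents.
    if memo is None:
        memo = {}
    if n == 0:
        return 1            # base case: one way to make 0 cents (use no coins)
    if n < 0 or not coins:
        return 0            # base case: no valid way
    key = (n, coins)
    if key not in memo:
        # Either never use the last denomination, or use one of it and recurse.
        memo[key] = (coin_combinations(n, coins[:-1], memo)
                     + coin_combinations(n - coins[-1], coins, memo))
    return memo[key]
```

For example, `coin_combinations(10)` counts four combinations: ten pennies, a nickel and five pennies, two nickels, and one dime.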
The second tricky thing about dynamic programming problems is they often involve slicing. You may have an array of numbers, and then need some specific combination of numbers from the array. One thing I've found when you are slicing for these types of problems is that off-by-one errors are very easy to make. Slicing lends itself to these errors because the start index is inclusive but the stop index is exclusive. It's a good thing to keep in mind while double-checking your work.
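As a quick reminder of that half-open convention in Python:

```python
nums = [10, 20, 30, 40, 50]

# The start index is included, the stop index is excluded.
assert nums[1:3] == [20, 30]        # positions 1 and 2, but not 3

# A handy consequence: splitting at any index recombines cleanly,
# with no overlap and no gap.
assert nums[:2] + nums[2:] == nums
```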