- Common Loki Misconfigurations
- Iterating Through a List in Ink
- Debugging Misconfigured Container Networks
- Minimum Viable EC2 in Terraform
- Storylets in Ink
- Interactive Fiction Tooling Overview
- In-Place Resizing for Digitalocean Droplets
- Unity Demonstrates the Importance of FOSS
- Target Labels in Prometheus
- My View of AI is the Same
- Verify DNS Ownership with TXT Records
- Sane Droplet Defaults
- Editing Made Easy with Vim
- Gatsby Gotchas
- Concatenating Default AWS Tags in Terraform
- Easily Updating the Default Github Branch
- Lifetimes in Rust
- Checking for Bad Links
- Maybe TypeScript and React is Bad
- Static Asset Management in React
- Bundler Down Time
- Using React Context for Localization
- JS Implementation of a Sticky Footer
- Custom Aliases
- Trying Out the 7drl Challenge
- Trash Opinions
- Building Your First Program in Rust
- Fixing mongod reports errors related to opening a socket
- Improving Open Source Maintenance
- Technical Interviewing Tips
- Housekeeping Note
- Dynamic Programming Basics
- The Oddity of Naming Conventions in Programming Languages
- An Experiment Using Machine Learning, Part 3
- Debugging with grep
- An Experiment Using Machine Learning, Part 2
- An Experiment Using Machine Learning, Part 1
- The Value of while
- National Day of Civic Hacking
- OpenAI and the Future of Humanity
- Creating a Whiteboard App in Django
- Creating Meaningful, Organized Information
- Towards A Critique of Social Media Feeds
- Setting up Routes in Django
- Developing a Messaging Component for Code for SF
- Dream Stream 2.0
- Keyed Collections in Javascript: Maps and Sets
- Blog Soft Relaunch
- Scraping with Puppeteer
- Looking Ahead to Dream Stream 2.0
- Solving West of Loathing's Soupstock Lode Puzzle
- Installing Ubuntu
- Interview with David Jickling Evaluation
- Compare Text Evaluation
- Dream Stream Evaluation
Dream Stream 2.0
Dream Stream 2.0 was a complete rework of both the front end and back end of the site. I had three goals in mind: to adopt recent web design practices such as grid containers for layout and a flatter aesthetic, to integrate and handle additional data sources, and to write the JavaScript in a contemporary style.
The first version of Dream Stream presented data in a table with ten rows; if there were more than ten entries in the JSON package, the extras simply weren't rendered. By contrast, Dream Stream 2.0 writes a JSON string on the server that the client fetches and renders, so all relevant entries are displayed. The entries are also displayed as cards inside a CSS grid container instead of rows in a table.
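As a rough sketch of that rendering approach, assuming a hypothetical /players.json endpoint, simplified field names, and a made-up container ID (none of these are the actual code):

```javascript
// Minimal sketch of the client-side rendering; the endpoint, field names,
// and element ID are assumptions for illustration, not the real app.
async function renderPlayers() {
  const response = await fetch('/players.json');
  const players = await response.json();

  const grid = document.getElementById('player-grid'); // styled as a CSS grid container
  grid.innerHTML = '';

  for (const player of players) {
    const card = document.createElement('div');
    card.className = 'player-card';
    card.textContent = `${player.name}: ${player.rank}`;
    grid.appendChild(card);
  }
}

renderPlayers();
```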
The interactive elements of Dream Stream 2.0 feature buttons with the flatter design we associate with the contemporary web.
The majority of the work for Dream Stream 2.0 was on the back end. The introduction of the qualifying points system to the professional tournament scene was an opportunity to show website visitors skill levels along a different axis. However, there were several challenges with the implementation.
The first step was collecting the data. Valve set up a Pro Circuit route on the dota2.com site, but there wasn't an API available to interact with, so I needed to scrape the data. I ended up using Puppeteer, Google's headless browser. Although Puppeteer worked great as a scraping tool, it did have some unintended consequences when I moved into a production environment. Puppeteer requires a buildpack to run on Heroku, and all of its dependencies make pushing updates significantly slower, enough so that you could go off and make a cup of coffee while you wait. Although I don't have any intention of switching away from Puppeteer, it did mean I had to be much more careful about pushing updates, because quickly fixing a mistake could now be a ten-minute process where previously it would have taken less than a minute.
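A minimal sketch of that kind of scrape looks something like the following; the URL and the selectors here are placeholders, since the real Pro Circuit page structure was different:

```javascript
const puppeteer = require('puppeteer');

// Sketch of scraping qualifying points with Puppeteer; the URL and the
// CSS selectors are assumptions, not the actual page markup.
async function scrapeQualifyingPoints() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.dota2.com/procircuit'); // hypothetical URL

  // Pull player names and points out of the rendered standings table.
  const players = await page.$$eval('.standings-row', rows =>
    rows.map(row => ({
      name: row.querySelector('.player-name').textContent.trim(),
      points: Number(row.querySelector('.points').textContent),
    }))
  );

  await browser.close();
  return players;
}
```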
The greatest challenge, though, was managing two disparate data sets and combining them into a single JSON source the client could fetch. On the one hand I had JSON data retrieved from the Dota 2 leaderboards, and on the other I had the JSON data written by the scraping tool. Since there would be some overlap, with players appearing in both JSON packages, how should I go about handling that data?
My solution was to create a Map object called playerMap that supplemented an array I was using to sort the data. When the Map was created I stored the player name as the key and the player's index position in the array as the value. That let me write a function that checked the scraped JSON data: if the player's name was already in the Map, it knew which index of the array to update using the Map's value; if the player's name wasn't in the Map, the entry could simply be pushed onto the array. Once this was done the players could be sorted by either their leaderboard rank or their qualifying points. This work taught me the importance of finding the right data structure for the job.
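Sketched out, the merge looks roughly like this, assuming simplified field names (name, rank, points) rather than the exact shapes of the real JSON packages:

```javascript
// Rough sketch of merging the two data sets with a Map; the field names
// are simplified assumptions, not the real JSON structure.
function mergePlayers(leaderboardData, scrapedData) {
  const players = [];
  const playerMap = new Map();

  // Seed the array with the leaderboard entries, remembering each index.
  for (const entry of leaderboardData) {
    playerMap.set(entry.name, players.length);
    players.push({ name: entry.name, rank: entry.rank, points: 0 });
  }

  // Fold in the scraped qualifying points.
  for (const entry of scrapedData) {
    if (playerMap.has(entry.name)) {
      players[playerMap.get(entry.name)].points = entry.points;
    } else {
      players.push({ name: entry.name, rank: null, points: entry.points });
    }
  }

  // The merged array can then be sorted by rank or by points as needed.
  return players;
}
```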
Two other minor points are worth mentioning. As I rewrote the code I started writing everything in ES2015 syntax, and I am now perfectly comfortable working that way. By far my favorite feature of ES2015 is string interpolation; it is a much easier way to build a string from variables than the concatenation methods of the past! I also eliminated all use of jQuery, so everything is written in “pure” JavaScript. While there's nothing wrong with jQuery, JavaScript has become a robust enough language on its own that jQuery often feels like a redundant library to include.
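For instance, a template literal reads much more naturally than concatenation (the player object here is made-up example data):

```javascript
const player = { name: 'SomePlayer', points: 100 }; // example data only

// Old-style concatenation
const oldStyle = player.name + ' has ' + player.points + ' qualifying points.';

// ES2015 template literal (string interpolation)
const newStyle = `${player.name} has ${player.points} qualifying points.`;
```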
Yesterday Valve announced changes to the tournament structure for next year, so Dream Stream will have to change yet again. Qualifying points are now associated with teams instead of players. I am not sure at this point whether I even want to include qualifying points anymore, since they are arguably less meaningful as a metric for an individual player. In addition to changing the association of the points, Valve eased up on the rules regarding player transfers, so it is possible there will be much more roster swapping in the coming year. Regardless of what I decide to do, at a minimum the scrape tool as currently written will cease to work, so I will need to make some adjustments in the near future.