Web scraping with Julia
21 Dec 2018
One of my grad school procrastination projects was learning how to brew beer. I started off using a website called Hopville to keep track of the recipes I brewed, until it was acquired in 2013 by Brewtoad. Both sites provided a really convenient way to play around with recipe ideas, learn from others, and keep track of how each step of each brew went, which was really helpful as a beginner.
Now, just five years later, Brewtoad is shutting down.1 There's no way to easily grab an archive of the dozens of recipes and brew logs I've saved on the site, and no public API.2 So, the only remaining option is to go through and download the HTML for each page, one-by-one. I could do that myself, but I think that's a task more appropriate for a computer. So I wrote a Julia script to scrape a user's recipes and brew logs.
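The gist of such a script can be sketched in a few lines of Julia. This is a minimal sketch, not the actual script: it assumes the HTTP.jl, Gumbo.jl, and Cascadia.jl packages, and the URL pattern and CSS selector below are hypothetical stand-ins for whatever Brewtoad's pages actually used (the site is shutting down, so they can't be checked against it).

```julia
# Hypothetical sketch: fetch a user's recipe index, follow each
# recipe link, and save the raw HTML to disk.
using HTTP, Gumbo, Cascadia

function scrape_recipes(user::AbstractString; outdir::AbstractString="recipes")
    mkpath(outdir)
    # Fetch the user's recipe index page (hypothetical URL pattern).
    index = HTTP.get("https://www.brewtoad.com/users/$user/recipes")
    doc = parsehtml(String(index.body))
    # Find links to individual recipes (hypothetical CSS selector).
    for link in eachmatch(Selector("a.recipe-link"), doc.root)
        href = getattr(link, "href")
        page = HTTP.get("https://www.brewtoad.com" * href)
        # Name each saved file after the last segment of the URL path.
        name = last(split(href, '/'))
        write(joinpath(outdir, "$name.html"), page.body)
    end
end
```

The same loop extends naturally to brew logs or pagination; the point is just that a computer is much better suited to the one-page-at-a-time download than I am.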
Cite your code!
12 Dec 2018
TL;DR: cite the software you use in your research!
In lab meeting the other day someone asked what the major R packages for analyzing psycholinguistic data are, and I had a hard time thinking of any.1 That made me think about why software is such a small part of our scholarly output. Part of the reason might simply be that there's not enough overlap in the specific kinds of analyses we do to justify creating brand new packages, rather than using domain-general tools (like the tidyverse).
But I think there may be a deeper explanation: it's hard to write good, useful software, and academia does not reward that particular kind of hard work.
17 Oct 2018
Why a blog? Because why not.
Like the rest of this site, it's built with Hugo, a static site generator based on Go templates. The content of each page is written in markdown, rendered into HTML by a series of hand-crafted templates, and styled with hand-written CSS.
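To give a flavor of what those templates look like, here is a minimal, hypothetical single-page layout in Hugo's Go template syntax (not this site's actual template): Hugo substitutes `{{ .Title }}`, `{{ .Date }}`, and the rendered markdown body (`{{ .Content }}`) into the HTML skeleton.

```html
<!-- layouts/_default/single.html — a minimal, hypothetical example -->
<!DOCTYPE html>
<html lang="en">
  <head><title>{{ .Title }}</title></head>
  <body>
    <h1>{{ .Title }}</h1>
    <time>{{ .Date.Format "2 Jan 2006" }}</time>
    {{ .Content }} <!-- the page's markdown, rendered to HTML -->
  </body>
</html>
```

The date format string follows Go's reference-time convention, which here would produce dates in the same "21 Dec 2018" style used on this blog.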