Over the month of January, I’ve achieved some major goals that I would have been happy with in three months. Instead, I’ve been able to get them done in one. I’m really excited about them, but before I share how I got there, I wanted to share what I knocked off the list:

I did all of this while working a standard workweek in a month where we launched a major project, while keeping my caffeine intake to a daily average of ~50mg (less than half of an average cup of black coffee).

So, how’d I pull this off? I certainly didn’t run myself into the ground to do it. I also didn’t set goals (with the exception of going to the dentist). Instead, I made small changes to my daily activity and set up systems to keep myself on track. Here’s what that looks like.

Morning Rituals

I followed the exact same pattern almost every morning of the month, including on weekends. I use Sleep Cycle to track my sleep patterns and wake me up at the optimal time. This is a zero-effort tool that I love. All you do is tell it the latest possible time you’re willing to wake up, flip your phone face-down on your bed, and go to sleep. Sleep Cycle takes care of the rest. Here’s a snapshot of the stats it produces:

The part of those statistics where I wake up at 5:30 isn’t an accident. I’ve been waking up before 6am almost every day of this month (I bump things back to 7am on weekends). I thought this would be a much rougher proposition than it actually is. I’m generally in bed by 11pm, which clocks me 7 hours of sleep each night (more than the American average).

At the beginning of the month, I started off with three days operating on the simplest food I could subsist on. In my case, that was brown rice and the cans of black beans I had accrued in my cupboard but never gotten around to opening. It was an eye-opening experience. It taught me gratitude for the things I have, and it also taught me that there was a lot in my life that was extremely wasteful but I kept around out of habit. After the three days, I started incorporating other foods into my diet, but have not deviated much from those staples. As a result, my breakfast each morning consists of sweet potato, black beans, egg whites, salsa, and tea.

I track all of this in an app called Lifesum, selected because it integrates with Apple’s HealthKit. I focus on tracking my carbohydrate, protein, water, and caffeine intake in HealthKit. Here’s what Lifesum looks like:

That was an easy day to top off on protein - my cousin’s wedding had an oyster bar as part of the reception, and oysters are perhaps my favorite way of getting protein.

Once breakfast is wrapped up, I meditate for ten minutes. This is accomplished by setting my phone’s timer and closing my eyes, pondering the day ahead or nothing at all. In the past, I’ve also leaned on the Calm and Headspace apps. I liked both of them, but I previously struggled to stick to their prescribed programs and then felt guilty when I slipped up. I also felt guilty if I got distracted in the middle of a meditation session. Going program-less means I think a lot more while meditating, but I’m also far calmer and more excited to take part in it each day.

After meditation, it’s time for the gym. On an average day, I’m walking to the metro by 6:45am, which puts me at the gym around 7:10. I alternate between the following two workouts:

  • Pullups/pushups. Start with a 10 pullup set and a 20 pushup set, and try to maintain that momentum per set until hitting 50/100. In the earlier parts of this month I scaled down my per-set requirements in line with what my maximum pullups and pushups were. By the end of this month I started my pullup sets off with 25lb additional weight attached to my waist.
  • Squats, 5-rep sets starting from 135lb and incrementing upwards 10lb with each set. On days when the squat racks are full, I do an incline leg press starting from 90lb to warm up and incrementing up 50lb with each set, followed by 3 20-rep sets of 90lb calf raises.

After the gym, it’s on to work. If I’ve got enough time and the temperature isn’t miserable, I’ll walk from Gallery Place to the Watergate. Otherwise it’s back on the subway.

Work Habits

I feel that the most significant habits have come in my style of working this month. I read Tim Ferriss’ The 4-Hour Workweek in the first days of the month after listening to several of his podcasts on my vacation drives. There’s a lot of material in there, and I did not try to implement everything all at once. I did, however, heed his admonition about batch processing during the workday. There is a switching cost each time your brain has to change its focus from one task to another, and you lose a lot of the momentum you picked up while knocking out the previous task. In my line of work, my primary distractions are email, social media, and Slack. To that end, I experimented with checking email only at certain times of the day and switching off Slack notifications entirely. That turned out to be a bit too aggressive of a shift. I instead dialed it back to checking personal email once a day and flipping off Slack’s notification system unless my name is directly called out in a message. This has really helped my focus, and I’ve plowed through a lot more work as a result.

I also put myself on a media diet, another trick from The 4-Hour Workweek. That means zero news. I’ve been off Twitter completely over the course of this month, deleting the app from my phone and never signing on to the desktop version. I haven’t signed into Hacker News once, my primary source of technology news. I’ve picked up some political coverage given the nature of my job, but I’ve had to ask a lot of questions of reporter friends because I just don’t know what’s going on. I only found out yesterday who won the NCAA national championship. This has been an extremely significant change for me. My head is a lot clearer, and I’m able to focus on code, books, or writing in times when I would have normally flipped through news articles or my Twitter feed. I don’t feel the compulsive need to distract myself.

What’s Next?

This month, I organized everything around a focus on physical well-being. The waking up early and going to bed early was a huge step in that direction, as was the working out, the meditation, the lack of drinking, and even things as simple as flossing (seriously, I have to track that or I’ll never remember to do it). I’m tracking all of these daily habits in an app called Way of Life that looks like this:

I’ll continue that focus on physical well-being in February. I may take on projects here and there, but my focus is on my health. Nothing else before that. I also plan to embark on another series of three days where I eat nothing but rice and beans and spend as little money as possible to focus on what I have in life. I found that experience to be incredibly rewarding (though tough) and definitely something I should do regularly. If you’re reading this, I’d love to hear about how your year started off, and what’s up next for you in February!

A year on the road in 2015

December 26, 2015

It’s 2015. I wake up on a couch in San Francisco after a ridiculous circus party to close out the year before.

I enjoy waking up on couches in San Francisco more than the best bed anywhere else. The sun is different in California; it seems brighter somehow. San Francisco is a foggy grey color half of the time, but when the sun comes out, it makes it so easy to forget the fog ever existed. Take a look for yourself:

Picture of me standing in front of San Francisco Bay

Cities are the waypoints of 2015 in my mind. I remember my mental state by remembering where I was when going through one situation or another. I’m cooking breakfast at Sean and Alex’s pad on Potrero Hill when I receive an email from a recruiter with a proposition to join a media organization in Foggy Bottom to rebuild their content management system. Linda prompts me to dig deeper into this particular recruitment offer, overriding my typical approach of discarding recruiter emails. It takes only a few calls and one in-person interview before I’m convinced, and I move into the Watergate building a few weeks later.

Three days later, I’m on a plane to the Middle East.

Picture of the Burj Khalifa and Dubai skyline

I’ve never been to Dubai or Muscat, nor have I ever taken an international trip with a large group of friends. It’s a dream come true to be with a good crew in the Middle East exploring a culture that is entirely foreign to me. Stealing a sip of Linda’s gold-flecked cappuccino in the Burj al Arab (for real, gold flakes in the coffee); swimming to a giant inflatable castle in the Arabian Sea; riding a camel; watching the Dubai Mall fountain show - none of it could be any better. Linda, Crowley, Medved, Sonal, Nikki, Ravi, Adam, Katy, you’re an extraordinary crew to travel with and I’m looking forward to the next run.

Picture of me on a camel

The spring ramps up. I get to work and kick my engineering education into high gear. I had learned how to skin up a Drupal site on the quick at APCO, but that is nothing compared to what building out National Journal has to teach me about the web. I learn proper recursion while building out an ORM-to-MongoDB-document converter. I learn about all of the ins and outs of the Django administrative console. I learn how to build Python code for production.

Picture of PyCon 2015 attendees

I’m at PyCon in April. There is nothing quite like the feeling of recognizing that you’re in the middle of your tribe. In Montreal, I suddenly find myself in the middle of a huge group of engineers who want what I want - to produce great and technologically interesting things using Python. And they want it for everyone! And I am in Montreal, using my middling French to chat with the locals, exploring a city that is simultaneously European and North American all in one.

Now I’m back in DC, and the weather is warm. I’m getting to know my journalist co-workers as we play softball together. Every Saturday I haul out to the middle of the Maryland suburbs to wildly miss pitches. Baseball was never my strong suit, but Matt, Alex, Zach, Paloma, Randy, Drew, and the rest of the Atlantic Media crew made it worthwhile every weekend.

Picture of our softball team

It’s June now. The weather’s hot. I’m on the road to New York City for Governor’s Ball, the first festival I’ll do with Linda and our motley crew of friends. We make a best effort to catch Blake, Ouzy, Jeff, and Kendra in between acts, but eventually Linda and I lose them all to sprint as close as we can for Drake’s set to close the night. We fare better the next day, bringing Blake with us to the front of the stage to watch deadmau5’s stage fail before he hangs out on a couch to drink a beer with Left Shark and a giant hot dog. As one does.

Picture of deadmau5 on a couch with Left Shark

July comes and it’s time to hit the road again for an aggressively long roadtrip. DC to Nashville. Nashville to Asheville. Asheville back to DC. Over a thousand miles of open road covered in the Civic. In Nashville, my massive extended family gathers for our almost-triannual family reunion, bringing some 60-odd Mosbys, Harwoods, and Grimes together for hot chicken contests and beer drinking in cheap honky tonks. I have the good fortune of knowing so many of my family members through a variety of different channels, being the only one to have lived in all three of our central hubs of Jackson, Nashville, and DC. I consider myself blessed to have had the opportunity to get to know so many of them.

No man ever steps in the same river twice, for it is not the same river and he is not the same man.

My visit to Nashville kicks off a series of meditations that I consider while on the road. I visit my old college campus and realize that the entire center of gravity of the school has shifted, and there are at least five new buildings I don’t recognize. The run down area where I used to buy beer out of a warehouse has sprouted condo buildings. I am no longer an active musician. At some point Nashville moved on after I left, and I moved on too. We are not the same.

Picture of Bolton's spicy chicken and fish

I return to Washington to learn that our owner has decided that National Journal will no longer print our flagship magazine. Policy and politics coverage is tough to do in a weekly publication, and our targeted readership is reading their news online like the rest of the country. The tenor of the newsroom shifts. It’s tough to see some of the good people I played softball with so distraught. Our digital team reassesses certain portions of our web product and we prepare for a September 1 relaunch. I go to Chicago that weekend.

Picture of Chicago skyline

Chicago holds a special place in my heart that is perhaps rivaled only by San Francisco. I have never lived in either place. I know Chicago’s winters are dreadful, and that I would feel differently if I had to spend January there with the wind whipping off the lake. But there is something special about the city. There is a union there between the parts of my heart that will always be from a small town and the parts that crave the excitement of the city.

Picture of the Ohio 200 OK license plate

A few weeks later I take off again for the open road. Columbus, Ohio beckons, where I will give my first talk at a programming conference. I expect to give a retrospective on some of the things we learned while building NationalJournal.com, but we’re still in the middle of building it. A few of the questions stump me, but it is overall a good first talk. I am proud.

Picture of the Folly Beach crew

I lived with the same group of guys for most of my college and grad school experience. We shared dorm suites together, then an apartment, then a house on Belmont Boulevard, then a two bedroom apartment in Brentwood, Tennessee. Two of these guys are now married, with children either in house or on the way. One of them is a Ph.D. We get together and it’s just like we’re undergraduates again, with the same conversations and nerd humor that propelled us through our time at Belmont. We spend a long weekend in Charleston, South Carolina, hanging on the beach, eating tacos, drinking good wine (thanks, Adam!).

I've been in the crib with the phones off, I've been in the house taking no calls...
Drapes closed, I don't know what time it is, I'm still awake, I gotta shine this year

August continues to roll on. When I’m not on the road speaking at conferences or at the beach, I’m in the office pulling 12-15 hour days to ship NationalJournal.com on time. I gain weight after eating nothing but trash for a month and rarely working out. I’m fueled by caffeine only, because I can’t sleep once I get home. We’re doing nightly builds, shipping emergency bug fixes day in and day out, none of the team leaving until the day’s tickets are done. Finally, on August 31, we turn the switch on. NationalJournal.com goes live. Brian, Robert, Paivi, Julia, Ivy, Kim, Lindsey, Jeremy and I finally get a good night’s sleep.

Picture of NationalJournal.com

After shoring up the codebase from the many shortcuts we took to launch on time, I head to Colorado for Jeff and Kendra’s wedding. It’s my first-ever visit to Colorado. I arrive in Denver late Thursday night and pick up a giant 15-passenger van dubbed “The Mystery Machine.” In the morning I pick up the Scooby Squad: MariaElena, Prather, Kyra, Hannah, Clarissa, Levi, Ryan, Juliana and Dalesio. We meet Jeff, Kendra, Sam, Pam, Ethan, and Ellen in what must be God’s country - Cuchara, Colorado. They serve Pueblo green chili with every meal and the entire landscape is flashes of bright yellow, desert green, and a piercing blue hanging in the sky. We hike to the top of the nearest mountain before descending to celebrate the happy couple.

Picture of Dalesio, me, MariaElena and Sam on our hike

October turns the tables at the office again. Only a few short weeks after we relaunched NationalJournal.com, our owner has decided that the news business as a whole needed a shakeup. No more public-facing news and no more ads. Everything will be for the service of our paid members. Many of my friends in the office leave to take jobs at competing news organizations, landing at The New York Times, Vox, The Washington Post, and others. We prepare for the shift to an ad-free world, and I begin to reimagine parts of our code.

For the first time I get to have my entire family with me in Washington. Dad is coming up to run the Marine Corps Marathon and we’re all ready to cheer him on. He’s been working on this for months and has lost a ton of weight. I get to spend the days preceding the marathon showing the city (and its excellent restaurants) off to my family. On Sunday, it’s race day, and Dad knocks out the 26.2 miles like a champion!

Picture of Dad's marathon stats

The downward mental slide that started in September continues in November. The site has a resiliency problem that resists debugging, and I struggle to fight off anger against the thing I’ve made. I turn into a packaged ball of stress, worrying at any moment that the site is going to crash and I’ll need to manually bring it back up. The constantly changing weather brings with it a sickness I can’t shake. The combination of the above brings on a spate of panic attacks that convince me I’m about to stop breathing. I hit a low, hard. I drag myself to work every day feeling like my creativity and my ambition have both fallen through the floor. All I want is out.

And then something happens.

I grow up.

Picture of the magic cup of awesome

On Thanksgiving Day, I drive out to Chestertown, Maryland to spend the day with my extended family. We’re frying turkeys this year - or rather, Uncle David is frying a turkey and I’m drinking Heineken while taking video. I’ve downloaded a few podcasts for the drive to Chestertown, deciding I’ll give this surge in podcasts a shot. I burn through a few really enjoyable tracks, but it’s Tim Ferriss who grabs me. He’s recorded an in-between podcast for the Thanksgiving holiday about mindfulness and gratitude. I listen through it twice. In it, Tim details a few daily things he’s done to change the way he interacts with the world, and I start on them as soon as I get home. I set up a Magic Cup of Awesome and start ripping up scraps of paper to write down the things I’m grateful for in a given day. I strap on a bracelet (read: I steal one of Linda’s hair ties) as a reminder to stop complaining. These positive habits kick off a series of changes in my life. I grow up.

Picture of the Washington monument in December

The weather in DC still can’t make its mind up, even in December. We enjoy 70 degree days. We endure 30 degree days. These happen within the same week. The work continues. I pack the bags up to hit the road again, homeward bound. I stop over in Nashville for the night, spending the evening with Daniel, Helen, Monroe, Adam, Ian, Mary, and Roger.

Monroe is Daniel and Helen’s first son. Roger will be Ian and Mary’s. Adam has a little daughter, but she wasn’t around for our hot chicken fiesta that night. When I left Nashville to move to DC, those couples were starting on a journey of married life. Now they are exploring life with children. They have wonderful lives together and homes in one of America’s great cities, a city which is itself growing by leaps and bounds. I imagine Nashville in 2015 is an exciting place to start a family.

It’s morning. I must hit the road again.

Picture of a puzzle

It’s December 23rd. My brother arrives in Jackson, MS tonight, and we’ll have the entire family together again. For Christmas Eve, we’ll continue a tradition started last year: Lou Malnati’s pizza for dinner, shipped frozen from Chicago and baked here at home. Linda arrives on the 26th for her first visit to the Magnolia State. While planning for her few days here in town, I reimagined the city as a first-time traveler would. What does downtown Jackson look like? What is Fondren? What do the Christmas lights look like in Canton?

Nothing behind me, everything ahead of me, as is ever so on the road. 

I woke up in 2015 running from something. I thought there must be this entire world out there that I was missing out on, and I needed to run to Nashville or San Francisco or DC or New York or London or Paris or Amsterdam or Istanbul to find it. Whatever it was, it was always just beyond my reach. In 2015, somewhere out there on the road, I found it. An optimism, perhaps, or just a calm belief that I’ll find and achieve whatever it is that I’m meant to.

I will wake up in 2016 on a couch in the Florida Keys, closer to Havana or Cancun than my apartment in Washington. Then I’ll start a 1,200 mile dash back to Washington, retracing the same steps I took to return from Charleston. I’ll wave at Jacksonville and Savannah as I go straight up I-95, which stretches on up to Philadelphia, New York, Boston, up, up, up until it crosses into Canada near Houlton, Maine. You split into Canada Highway 2 at that point, which will take you north before changing into Autoroute 20 O, whipping past Quebec City (where there are ice hotels) and into Montreal. From there you can catch a flight to Dublin for $500 or south to Punta Cana if there’s a deal going on. Or just keep driving, on into the west, crossing back over into the United States at Detroit, catching the California Zephyr in Chicago and on to San Francisco. And then a quick flight back to DC to do it all over again.

I wanted to take a step back from operating systems to consider how another programming language views the world. Operating systems in the Unix world require knowledge of C: something that expands my programming mind but often feels like the Python I already know, just at a lower level. I wanted to pick something totally new, and so I’m going with Erlang.

Erlang is a declarative, functional language with single assignment: once you set a variable, it doesn’t change. That means a statement like i++ or i += 1 won’t work. I assume that Erlang has an elegant way of dealing with this, but I’m very early on in this book. Erlang does this to ensure that functions called with the same parameters will always return the same results, regardless of the state elsewhere in the application.

The language makes use of something called an “actor model.” Each “actor” is a separate process in the Erlang virtual machine, calmly waiting for tasks but mostly sitting around doing nothing. Each process can do a very limited set of tasks. Each process is also totally segregated from the others - the only way processes can communicate is by passing messages back and forth. Erlang also appears to come with a standard library that’s almost as fully featured as Python’s: debugging tools, a web server, a database, all the tools I generally rely on to get programming work done.

The Learn You Some Erlang book also makes a point to call out overzealousness in the Erlang community. Erlang caught my eye because of its concurrency and scalability (or rather, people talking about Erlang’s concurrency and scalability), but this book makes a point to calm those expectations a little bit. Just because you can divide everything up into actors doesn’t mean you should. The author points out that Erlang is fantastic for things like server-level software and mostly terrible at things like image processing. Exceptions can be made, of course. As I’m interested in things like server-level software and not so interested in image processing, I’m excited to get under the hood.

And with that, let’s get started.

$ brew install erlang
$ erl

I continue on to the next chapter and discover this line:

The Erlang shell has a built-in line editor based on a subset of Emacs, a popular text editor that's been in use since the 70s. If you know Emacs, you should be fine. For the others, you'll do fine anyway.

I am skeptical. I strongly dislike Emacs because every time I try to grok it, I trip over my feet and mostly destroy my entire program. Whatever, onward. Ctrl+A moves me to the beginning of a line, Ctrl+E to the end in Erlang-world. In the current version of Erlang compiled for OSX, it appears that Erlang opens you right into a shell. If I want to get out of it to enter some commands, I hit Ctrl+G. If I can’t remember what I’m supposed to do once I get out of the shell, I hit h. If I want to see a listing of all my current jobs, I hit j… oh, and I bet this is like doing ps -ef, but for the virtual machine. Okay, now I want to get back into my shell… hmm. I’m struggling to do that with the instructions given. I’m going to have to trust that I’ll figure it out as time goes on.

Calling it a day there. I’ll come back to it in earnest after I’ve digested a few things about the language.

Okay, this will be a short one, but I’ve just set up a minimalistic testing tool using Python, Selenium and PhantomJS and wanted to put it out there. Let’s get cracking.

Start off by installing PhantomJS using Homebrew:

$ brew install phantomjs

And then move on to add Selenium into the mix (assuming still that you’re using Python):

$ pip install selenium

And now let’s build our first test case. I’m working as part of a Django app so I’ll be using the Django TestCase object, but this will work very similarly with Python’s core unittest.TestCase.

import os

from django.test import LiveServerTestCase
from selenium import webdriver

class PhantomTestCase(LiveServerTestCase):  # class name is mine, for illustration

    @classmethod
    def setUpClass(cls):
        super(PhantomTestCase, cls).setUpClass()
        cls.driver = webdriver.PhantomJS()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super(PhantomTestCase, cls).tearDownClass()

That’s all it takes to get started. Honestly. Just install PhantomJS through Homebrew and Selenium through pip, and you’re ready to get testing. Django’s LiveServerTestCase needs a DJANGO_LIVE_TEST_SERVER_ADDRESS, so I’ve set one of those up as an environment variable. The setUpClass() and tearDownClass() methods are part of the LiveServerTestCase class, and are used to initialize behavior when the tests begin to run or to close things no longer needed after the test is completed. In this situation, we’re using setUpClass() to set up our PhantomJS instance that we’ll subsequently close in tearDownClass().

With my particular test situation, I needed to execute some JavaScript on each page I planned to test. That JavaScript would return a variable, set to either 1 or 0 based on the state of an object on the page. Here’s how we’ll extend our code to do that:

import os

from django.test import LiveServerTestCase
from selenium import webdriver

class PhantomTestCase(LiveServerTestCase):  # class name is mine, for illustration

    @classmethod
    def setUpClass(cls):
        super(PhantomTestCase, cls).setUpClass()
        cls.driver = webdriver.PhantomJS()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super(PhantomTestCase, cls).tearDownClass()

    def test_one_or_zero_on_link_one(self):
        self.driver.get(self.live_server_url + '/linkone')
        # "case" is a reserved word in JavaScript, so go through window
        should_be_zero = self.driver.execute_script('return window.case.a')
        self.assertEqual(should_be_zero, 0)

    def test_one_or_zero_on_link_two(self):
        self.driver.get(self.live_server_url + '/linktwo')
        should_be_one = self.driver.execute_script('return window.case.a')
        self.assertEqual(should_be_one, 1)

I’ve now added two official tests to the mix. I’m asserting that on {DOMAIN}/linkone, there should be a JavaScript object called case with a property a that’s set to zero. On {DOMAIN}/linktwo, that property should be set to one. In these two tests, I open the links up in PhantomJS (something you’ll never see, by design). I then execute JavaScript on the pages and pick up the return values with self.driver.execute_script(). This allows me to tap some of the context of the page state through JavaScript.

My tests did not need anything much more complex than that, but I’m looking forward to using this in several other contexts in future tests. PhantomJS is wicked fast compared to running the same code using Selenium’s Firefox or Chrome drivers. And being able to plug it into on-page JavaScript (maybe even tag-teaming with QUnit) makes it that much more appealing.

The xv6 filesystem

October 27, 2015

The file system of an operating system is one of those things I haven’t thought about in years, ever since I went through the painful process of reformatting USB sticks when different versions of Windows used different file systems. But I’ve never really thought about how those things have to be implemented.

On xv6, files are either data files, which are “uninterpreted byte arrays,” or directories, which are references to data files or other directories. So a directory is nothing more than a special file that points to other files. Cool. Directories form a tree starting at the root, and a path is a sequence of directory names walked out from the root. If a path doesn’t start with the root, it’s resolved relative to the current working directory of the given process.

That means chdir() is a system call that changes the current working directory of a process, and mkdir() is the system call that creates a new directory. mknod() is a similar call that creates a device file, which refers to a kernel device (such as a keyboard). When a process opens the device file, the kernel redirects read() and write() calls to the kernel device instead of to a specific data file.

The fstat() call gives information about a specific object that a file descriptor points to. fstat(fd) takes in a file descriptor and returns a C struct called stat, which has an implementation that looks something like this:

#define T_DIR 1 // directory
#define T_FILE 2 // file
#define T_DEV 3 // device

struct stat {
	short type; // type of file
	int dev; // file system's disk device
	uint ino; // inode number
	short nlink; // number of links to file
	uint size; // size of file in bytes
};

The name of the file is not included in this information table, because names are distinct from the actual file. An “inode” is the only unique identifier of a file that the kernel sees: names are just “links” back to an individual inode. Thus the following system calls will create two names for a file with the same inode:

open("a", O_CREATE|O_WRONLY);
link("a", "b");

Calling fstat() on a file descriptor opened from either “a” or “b” will demonstrate that both names are tied back to the same file with the same inode number. unlink() does the opposite of link(): it removes a name, but the file itself isn’t deleted until its link count drops to zero.

In xv6, these types of operations are implemented as user-level programs, rather than baking the system calls into the shell itself. mkdir is not a direct call to the system, but a user program that calls the mkdir() system call. Same with rm. cd is a notable exception, because it changes the working directory of the shell itself rather than a child process of the shell.

And that’s it for the xv6 system calls! I can treat the xv6 kernel as something abstracted away now that I have all of the system calls for interacting with it.

Pipes and such

October 26, 2015

In MIT’s text on xv6, pipes are described as “a small kernel buffer exposed to processes as a pair of file descriptors, one for reading and one for writing.” I think I understand file descriptors by now, but I don’t quite know what a buffer is. Let me dig into that first with some Googling.

Wikipedia describes a buffer as “a region of a physical memory storage used to temporarily store data while it is being moved from one place to another.” Cross-referencing this with other things I’ve previously learned about operating systems makes me think that buffers are probably also used for things like keyboard input - where we don’t want to necessarily take the time for expensive writes to disk, so we just store in memory and wait until another program clears things out. Okay. Moving on.

This example code was provided by the text:

int p[2];
char *argv[2];

argv[0] = "wc";
argv[1] = 0;

pipe(p);
if(fork() == 0) {
	close(0);
	dup(p[0]); /* wc now reads the pipe's read end as stdin */
	close(p[0]);
	close(p[1]);
	exec("/bin/wc", argv);
} else {
	close(p[0]);
	write(p[1], "hello world\n", 12);
	close(p[1]);
}
And this compiles nicely with some tweaks for the OSX environment - but one caution: if the pipe’s write end isn’t closed in both processes, wc never sees end-of-file, and nothing ever prints.

Pipes and temporary files share some similarities in execution. This code would look the same way to the end user:

$ echo hello world | wc
$ echo hello world > /tmp/xyz; wc </tmp/xyz

But there are three key differences. The second version leaves the /tmp/xyz file lying around, which we’d have to come back through and clean up later. It also requires enough free disk space to hold /tmp/xyz, which could be arbitrarily large. And finally, the processes couldn’t easily send data back and forth this way if, say, wc needed to send data back to echo.

And that’s it for pipes. Next, on to the filesystem!

Taking a break from operating systems to do some algorithm training: something I revisit about once a year. I have the Khan Academy bit on the Towers of Hanoi up, but I want to first pause to go look at Quake’s “fast inverse square root” algorithm.

An inverse square root is used to calculate vectors for lighting and reflection, so it’s incredibly useful for video games and rendering. Rendering performs millions of these tiny calculations per frame, so a speedup over computing 1/sqrt(n) directly can make a game much, much faster. Quake sped it up with the following steps:

1. Take a floating point number n.
2. Reinterpret the bits of n as an integer.
3. Shift that integer right one bit to make longword w.
4. Subtract w from the magic number 0x5f3759df (this step made a Quake developer comment "what the f***" in the code).
5. Reinterpret the result as a float; this is within a few percentage points of the final answer.
6. Run one iteration of the Newton-Raphson method to come to the final answer.
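Those steps translate into just a few lines of C. This is a sketch of the well-known routine - I’ve used memcpy for the bit reinterpretation instead of the original’s pointer cast, which is undefined behavior in modern C:

```c
#include <stdint.h>
#include <string.h>

// Fast inverse square root: approximate 1/sqrt(n).
float q_rsqrt(float n) {
    float x2 = n * 0.5f;
    float y  = n;
    uint32_t i;
    memcpy(&i, &y, sizeof i);        // treat the float's bits as an integer
    i = 0x5f3759df - (i >> 1);       // magic number minus the halved bits
    memcpy(&y, &i, sizeof y);        // back to a float: a rough first guess
    y = y * (1.5f - x2 * y * y);     // one Newton-Raphson step to refine it
    return y;
}
```

Calling q_rsqrt(4.0f) comes out very close to 0.5, well within that few-percent window the steps describe.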

I was amused by this approach and can only imagine the total confusion on the part of the developer when the algorithm worked.

We had yet another instance of nginx allowing requests to spin into infinity, and now I’m starting to get a little frustrated with it. I’m going to take a different tack and see if I can sniff out the point when these requests start to blow up. To do this, I’m going to use awk to find any and all requests that take longer than 3000 milliseconds, which is well beyond the tolerance point for my application. Most of our requests average out in the 100-150ms range, with long-running requests taking around 500ms. Let’s dump some things out from the log files.

cat django-www.log | awk '$33 > 3000 {print NR-1 ": " $0;}' > high.log

So what this little snippet will do is cat the entire log file out, then pipe the output through an awk command. awk breaks each line up into fields based on a delimiter (whitespace by default), which you can then access by number. $33, in this case, is the 33rd field, which is our millisecond mark. If it’s greater than 3000, I print the line number, then the line itself, and dump all of that out to a file called high.log.


I don’t know why this stuck out to me, but I gave the address space usage and the rss usage of my uwsgi threads a second look this time. Here’s what they look like under normal traffic for a small sample.

{address space usage: 3510480896 bytes/3347MB} {rss usage: 84557824 bytes/80MB} [pid: 5542|app: 0|req: 394/98579] 
{address space usage: 3510046720 bytes/3347MB} {rss usage: 101765120 bytes/97MB} [pid: 5875|app: 0|req: 387/98580] 
{address space usage: 3510497280 bytes/3347MB} {rss usage: 106389504 bytes/101MB} [pid: 6922|app: 0|req: 70/98581] 
{address space usage: 3510497280 bytes/3347MB} {rss usage: 106500096 bytes/101MB} [pid: 6922|app: 0|req: 71/98582] 
{address space usage: 3521171456 bytes/3358MB} {rss usage: 135237632 bytes/128MB} [pid: 4706|app: 0|req: 660/98583] 
{address space usage: 3510046720 bytes/3347MB} {rss usage: 101765120 bytes/97MB} [pid: 5875|app: 0|req: 388/98584] 
{address space usage: 3510480896 bytes/3347MB} {rss usage: 84557824 bytes/80MB} [pid: 5542|app: 0|req: 395/98585] 

The address space usage hovers at about 3.3GB, but the rss usage averages out around 100MB under normal traffic. Here’s what it looks like when we spike:

{address space usage: 3518722048 bytes/3355MB} {rss usage: 148234240 bytes/141MB} [pid: 28562|app: 0|req: 4128/36509]
{address space usage: 3537321984 bytes/3373MB} {rss usage: 153063424 bytes/145MB} [pid: 28827|app: 0|req: 4137/36512]
{address space usage: 3518103552 bytes/3355MB} {rss usage: 125833216 bytes/120MB}
{address space usage: 3537321984 bytes/3373MB} {rss usage: 153255936 bytes/146MB} [pid: 28827|app: 0|req: 4138/36517]
{address space usage: 3518722048 bytes/3355MB} {rss usage: 148992000 bytes/142MB} [pid: 28562|app: 0|req: 4137/36523]
{address space usage: 3518722048 bytes/3355MB} {rss usage: 148992000 bytes/142MB} [pid: 28562|app: 0|req: 4137/36525]

Our address space usage is the same, but our rss usage is pushing over 142MB for almost every request. I want to dig more into this.

I stumbled upon a discussion thread on the uwsgi issues page from someone experiencing the same sort of performance degradation with almost the same configuration that we have: uwsgi, supervisord, Django. The solution suggested there is to add die-on-term=True to our uwsgi config, but I want to look into that a little more before I just start adding things.

The issue is distilled here (second bullet): before uWSGI 2.1, sending the SIGTERM signal to uwsgi means “brutally reload the stack,” which breaks convention. In uWSGI, SIGINT or SIGQUIT has the behavior that SIGTERM has in other applications. Searching for supervisord SIGTERM yielded this StackOverflow answer:

supervisord will emit a SIGTERM signal when a stop is requested. Your child can very probably catch and process this signal (the stopsignal configuration can change the signal sent).


But my child CAN’T catch and process that signal. In fact, it totally ignores the convention: it trips over the signal and brutally reloads the stack if I’m running a uwsgi prior to 2.1.

$ uwsgi --version

So to fix this bug, we either need to upgrade uwsgi or set the die-on-term option, which corrects this behavior. Adding the die-on-term directive is the quicker and less risky of the two. This will go in our uwsgi.ini file:

... stuff ...
die-on-term=True # yay
... stuff ...

And now we’ll reload and give it a shot.

I’m exploring MIT’s course on Operating System Engineering in the hope of getting a better feel for how Linux works under the hood, and it’s already answered a few questions about the things I was seeing in ps earlier. Though the OS course deals with a Unix variant (not Linux), the low-level architecture is similar enough that I think I can assume Linux operates the same way.

xv6, the Unix variant used in this course, has a kernel that interfaces with hardware and user-level programs. This creates the notion of “user space” and “kernel space,” with a single process jumping back and forth between the two to complete its work. When the process needs to use one of the services from the kernel, it invokes a system call - a specific function in the kernel.

Unix provides these services (and many more, but this appears to be all xv6 offers):

  1. fork(), create a process
  2. exit(), terminate the current process
  3. wait(), wait until a child process exits
  4. kill(pid), kill the process with the given PID
  5. getpid(), return current process’s PID
  6. sleep(n), sleep for n seconds
  7. exec(filename, *argv), load a file and execute it with the given arguments
  8. sbrk(n), increase process memory by n bytes
  9. open(filename, flags), open a file with read or write flags
  10. read(fd, buf, n), read n bytes from an open file into a buffer buf
  11. write(fd, buf, n), write n bytes from a buffer buf into an open file
  12. close(fd), release open file fd
  13. dup(fd), duplicate fd
  14. pipe(p), create a pipe and return fd’s in p
  15. chdir(dirname), change the current directory
  16. mkdir(dirname), create a new directory
  17. mknod(name, major, minor), create a device file
  18. fstat(fd), return info about an open file
  19. link(f1, f2), create another name (f2) for the file f1
  20. unlink(filename), remove a file

The shell uses each of the system calls to do its work. It’s just a run-of-the-mill program like anything else.

The xv6 textbook provides the following mini-example of a program using fork to do its work. I want to give it a shot running on my own machine. Here’s how the program is written in the textbook:

int pid;

pid = fork();

if(pid > 0){
	printf("parent: child=%d\n", pid);
	pid = wait();
	printf("child %d is done\n", pid);
} else if(pid == 0){
	printf("child: exiting\n");
	exit();
} else {
	printf("fork error\n");
}

And here’s how I had to tweak it to get it running on my Mac:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv) {
	int pid;

	pid = fork();
	if(pid > 0) {
		printf("parent: child=%d\n", pid);
		pid = wait(0);
		printf("child %d is done\n", pid);
	} else if(pid == 0) {
		printf("child: exiting\n");
		exit(0);
	} else {
		printf("fork error\n");
	}
	return 0;
}

The stdio.h library makes sense, because that’s what contains the printf() function for dumping things out to the console. I received an error when I tried to use fork() without unistd.h, so I’m assuming fork() sits there, and I received more errors when I tried to use wait() and exit() without stdlib.h. The forked child starts with a copy of the parent process’s memory - in the parent process, fork() returns the child’s PID, but in the child, it returns 0 - so we see one if branch trigger for the parent and another for the child.

exec(), by contrast, doesn’t return to the parent process. It replaces the calling process with a new process stored somewhere in the file system. Let’s take a look at the textbook’s pseudocode:

char *argv[3];

argv[0] = "echo";
argv[1] = "hello";
argv[2] = 0;
exec("/bin/echo", argv);
printf("exec error\n");

Try as I might, I couldn’t get this program into a usable shape where it would run. After two nights of banging my head against it, I decided to scrap it - mostly because I get the gist of what it’s going for. Here’s where I finished up:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    char args[3];
    args[0] = "echo";
    args[1] = "hello";
    args[2] = "0";
    exec("/bin/echo", args);
    printf("exec error\n");
}

I think the problem may ultimately have something to do with the differing implementations on Mac OS (where I’m testing this) versus a true Linux platform. At any rate, this helps me understand what my shell is doing! It’s doing a combo of these two programs to execute any program I type into the shell. It takes my command from the command line, forks the shell process, calls my command using exec in the child, then uses wait() to pause until the command finishes before returning control to me. And that’s why certain processes (things like vi, for example) don’t immediately jump back over to shell control: the forked process hasn’t given control back until I close vi.

Ooh, and this also helps systemd make more sense. I suspect that systemd takes control of PID 1 and then immediately loops through anything in the /etc/systemd/system/ folder to start launching services based on the configuration files there, forking processes as it goes.

Next up on our list of functionality is I/O, where we start with file descriptors. According to the text, a file descriptor is an integer tied to some kernel-managed file object. User-level programs deal with the file object through the read() and write() system calls. The read(fd, buf, n) call takes in a file descriptor integer and reads n bytes into buffer buf. Subsequent calls to read() start where the previous one stopped. If we called read(12, buf, 512), we would read 512 bytes from fd 12 into buffer buf on the first read. If we called it again, we would start at byte 512 and read another 512 bytes (bytes 512 through 1023). write(fd, buf, n) follows a similar pattern, writing n bytes from buffer buf to file descriptor fd.

Here is the pseudocode given for a simple cat-like program:

char buf[512];
int n;

for(;;) {
	// read up to 512 bytes from file descriptor 0 (stdin)
	n = read(0, buf, sizeof buf);

	if(n == 0) {
		// we've reached the end of the input
		break;
	}
	if(n < 0) {
		// n less than 0 means a read error
		fprintf(2, "read error\n");
		exit();
	}
	if(write(1, buf, n) != n) {
		// write to output object 1, log if error
		fprintf(2, "write error\n");
		exit();
	}
}

So I read from fd 0 into a buffer, then write from that buffer to an output object. That could be terminal output:

cat myfile.txt

or piped to a command:

cat myfile.txt | awk --something something--

or written to another file:

cat myfile.txt > myfile2.txt

cat doesn’t have to care; it reads from and writes to whatever descriptors it’s given.

Rounding out the file descriptor system calls is dup(). dup() takes in a file descriptor integer and returns another file descriptor that points to the same underlying file object.

The file descriptor mechanism is way more powerful than I originally gave it credit for. It’s a simple little trick, but it’s brilliant - because things can write to a “file” even if it’s not a file at all, and user processes can treat them all the same way.

Okay, I’m going to put a stop to things there before moving on to pipes!

I woke up this morning to yet another fun little situation from my application. As opposed to the prior set of errors, where response times were creeping up into the uncomfortably high range, this is exactly the opposite: the application stops responding to anything. Here’s what that graph looks like in New Relic:

At 7:11AM, my application simply dies, according to New Relic. Into the breach!

I think I likely only need the last hundred thousand lines of my logs to get back to the timeframe in question, so let’s cull those out and locate anything happening in the 7:10-7:19 window:

$ tail -100000 django-www.log > outage.log
$ grep -m 1 -n "07:1" django-www.log

2825:{address space usage: 3589083136 bytes/3422MB} {rss usage: 318144512 bytes/303MB} [pid: 8510|app: 0|req: 4874/239423] [IP ADDRESS] () {50 vars in 1552 bytes} [Mon Oct 19 13:07:10 2015] GET /petro/s/90050/hillary-clinton-won-wont-always-be-this-way => generated 42453 bytes in 302 msecs (HTTP/1.1 200) 3 headers in 250 bytes (2 switches on core 17)

Whoops, too far. Let me slice this up a little bit more and back it up a few minutes.

$ tail -30000 outage.log | grep -m 1 -n "Oct 20 07:10"

13411:{address space usage: 3518570496 bytes/3355MB} {rss usage: 126271488 bytes/120MB} [pid: 11089|app: 0|req: 4972/319841] [IP ADDRESS] () {44 vars in 751 bytes} [Tue Oct 20 07:05:01 2015] GET /2012/04/the-land-of-hope-and-dreams.php => generated 58612 bytes in 41 msecs (HTTP/1.1 404) 2 headers in 95 bytes (2 switches on core 230)

Okay, so I’m going to slice out the last 16,589 lines, which will start me close to the beginning of the outage:

$ tail -16589 django-www.log > outage.log
$ sed -n 1,3500p outage.log > temp.log && mv temp.log outage.log

Let’s get started. I’m going to look for the point where my app started throwing 500 errors.

$ grep -m 20 -n "HTTP/1.1 500" outage.log

That prints out the first 20 lines where my application threw a 500 error, and a lot of them are on things that I know shouldn’t 500. I’m going to dig into my Sentry error logging to see what the problem is, and…crap.

At some point, I or one of my teammates accidentally removed the Sentry and Raven configuration from our Django app, so we haven’t actually captured any of these errors. Awesome. This might be a dead end for now. I’m going to add the configuration lines back into the app:

import raven

INSTALLED_APPS = (
    ... other apps ...
    'raven.contrib.django.raven_compat',
)

RAVEN_CONFIG = {
    'dsn': 'RAVEN KEY HERE',
}

That’s a frustrating dead end, but it’s something we should have caught. I’m adding a simple test case into our test suite to make sure it doesn’t happen again.

import sys

from django.test import TestCase
from django.core.management import call_command

class TestRavenUp(TestCase):

	def test_raven_up(self):
		stdout_backup = sys.stdout
		sys.stdout = open('/tmp/raven_up', 'w')
		call_command('raven', 'test')
		sys.stdout.close()
		sys.stdout = stdout_backup

		f = open('/tmp/raven_up', 'r')
		config = f.read()
		f.close()

		self.assertTrue('[MUH SERVER NAME]' in config)
		self.assertTrue('[MUH APPLICATION KEY]' in config)

		print('Inspect [MUH SENTRY SERVER] to note that the event has been logged.')


That will spit out a test event to Sentry if we have everything configured appropriately. Hopefully I’ll have an update with error logs the next time it happens (but I hope it doesn’t happen again).