Saturday, September 18, 2010

New city, new bike

A few facts. In May, I moved to Seattle. Since May, I have used my bicycle twice. As of this fall, I will be using a bike pretty much every day. Read on for an explanation regarding the last two facts.

Why didn't I ever use my bike during the sunny season in Seattle? Well, I had to commute to Microsoft from Capitol Hill, which is a very inconvenient trip regardless of the weather, for a number of reasons. There is no direct cycling-only option to get from the north half of Seattle to the SR-520 corridor. The options were as follows:
  1. Bike to the Montlake Flyer Stop, the last west-side stop for congested eastbound bus routes such as the ST 545. From there, ride the bus across the bridge, disembark at the first stop, and ride 20 minutes to Microsoft along the SR-520 corridor multiuse trail. This is a huge pain because you must wait for a bus with an empty bike rack, which could take half an hour or more after 8am.
  2. Bike all the way up the Burke-Gilman trail and then on roads through Kirkland to Redmond. This adds at least 45 minutes to the commute.
  3. Bike south to I-90 and use the bike path attached to the interstate. This is the nicest way to cross the lake, but it is quite out of the way unless you live in or south of the Central District. You also have to deal with Bellevue's traffic and its general abundance of jerks with cars while going north to Redmond. This detour adds a minimum of 30 minutes to the commute each way.
All of these were way too much hassle for me, especially considering my tendency to keep long hours at my internship. As I've been too tired/busy to go out on long rides on the weekend, the racing bike has sat on the porch most of the summer. My daily commute was a 20-45 minute one-way exercise in sitting my rear on a nominally soft Sound Transit coach seat.

Now that the internship is done, I feel at liberty to realistically consider commuting by bike to UW* every day, including rainy and dreary days. The main considerations are the different weather conditions (rain, mist, fog, slog, and Seattle's other precipitation variants), a significant hill climb when going back home, and suitability for commuting.

The aforementioned "racing bike" is a 2008 Felt Z80 (54cm, a tad too big for me). I inherited it from Steph when she started her romance with her fixed-gear bike. Since it is explicitly a "racing" bike, it does not have clearance for tires much larger than 700x24, and fitting a rack is out of the question. The frame geometry is a bit compact, which isn't great for touring or commuting. It does well on hills with three chainrings (50/39/30) and a Shimano 9-speed cassette (12-25T), but the Shimano Tiagra STI shifters are really not my style. The derailleurs/cogs frequently get confused for no apparent reason, and perform very poorly under chain tension (shifting down while going uphill is somewhat risky).

On advice from several other bikers, I started my search for a commuter bike with the Surly Cross Check. It's a steel-frame bike designed for maximum flexibility, and commonly employed as a commuter, cyclocross, touring, and "whatever" bike. I was initially skeptical that it would be much different from the Felt bike, but good geometry and steel can make a world of difference.

Over the past two weeks, I tested a 50cm Cross Check (at REI Seattle), a 52cm Surly Long Haul Trucker, a 50cm Aurora, and finally a 52cm Cross Check. I'm planning to go back and buy it on Monday from Counterbalance in U Village, and then take it to FreeRange Cycles in Fremont for some adjustments. The most important adjustment will be the addition of a third chainring, which will let me get up hills, something that is not really possible with the two chainrings (36/48) and 9-speed cassette (12-25T) that come standard on a built-up Cross Check.

The next part of getting commute-ready will be to shell out for fenders, a rack, and panniers. I'm not completely settled on the brand of panniers that I want, but at the least they have to be waterproof and able to fit a 15" MacBook Pro inside a case :) I will update with pictures once the goods are purchased come Monday.

Tuesday, June 8, 2010

PLDI 2010: Day 0 and 1

As I write this, I'm at the Fairmont Royal York hotel in downtown Toronto, the location of the PLDI 2010 conference. Our paper from Purdue ("An analysis of the dynamic behavior of JavaScript programs") was the first paper of the opening session of the conference. Though I am not a presenter this year, I felt that it was important to come to this conference for networking and to see what's going on elsewhere.

So far, Toronto has been good. I found some very cheap lodging at the University of Toronto. Apparently they operate their dorms as short- or long-term hostels during the summer, and I was able to stay for $37 CAD a night. Compared to the conference hotel (a four-star hotel), it's a great steal and has allowed me to rationalize the exorbitant price of dining in Toronto. Transit in Toronto is much more established than in Seattle: the subways are often packed in the morning, and streetcars run between the most important subway stations with free transfers to the subway. Though I have gotten lost a few times in the subway, it is much more convenient than being lost on a bus system or on foot.

I arrived here on Sunday via a non-stop Air Canada flight from Seattle. Transport from the Toronto Pearson airport is a simple connector bus and 15 subway stops, for a grand total of $3 CAD. The price of food has a great deal to do with proximity to four-star hotels; a half-dozen blocks from the conference hotel, I was able to get decent coffee for $1.50. Within the hotel, coffee is over $3 and a simple Heineken at the hotel bar will run you a sweet $8 CAD.

Downtown Toronto also has a sprawling network of underground shopping plazas and walkways. According to the marketing copy, it is the largest underground network in the world. To me, the city feels like Montreal, but much more British than French (the opposite of Montreal). Montreal also has a substantial covered/underground tunnel system, but I never used the subway there. The French influence in Montreal is almost like the Chinese and Vietnamese influence present outside of the downtown core of Toronto. On Sunday I walked through Chinatown, and it is at least 30 blocks in size. It makes the International District in Seattle seem quaint and cozy by comparison.

Later I'll write about the actual research presentation program. Now, I will be off to lunch.

Tuesday, April 27, 2010

Covered bicycle parking at Purdue University's West Lafayette campus

Purdue has very little biking culture. Thus, the poor state of affairs when it comes to bike lanes and bike racks is not surprising. Bike lanes deserve their own post entirely, because there are so many ways that they could be improved.

It seems that the vast, vast majority of bikes on campus were bought during freshman year by mommy and daddy at Walmart, and have never been serviced since. Rusted-out bikes are common, and the relevant groundskeeping/police people only remove abandoned bikes twice per year (at the end of the Autumn and Spring semesters). Also, there does not seem to be any coherent approach to bike racks: some are long racks, some are upside-down U's cemented into the ground, and there are a few truly eccentric bike racks in the older parts of campus.

By far the biggest gripe that I had was the lack of covered bike parking, especially outside of residence halls. For those poor souls in Owen, Tarkington, and other dorms without elevators, it is not even possible to bring your bike into your tiny room. Even in halls with elevators, there is no place to securely store your bike besides your room. Without covered parking, bikes both cheap and expensive will quickly deteriorate and become unusable without a new chain and other parts. Since replacing such parts on a Walmart bike is not usually possible, these bikes are abandoned, taking valuable space in heavily used locations (like dining courts, lecture halls, and dorms).

In my four years at Purdue, I have only come to discover a few places where one can reliably park their bike at a bike rack with shelter from the elements. If anyone knows of more, please let me know and I'll add it to this list. I do not spend much time in dorm-land or Engineering parts of campus, so it is likely that I have forgotten a few places.

  1. Beneath the elevated building spanning Wetherill and Brown. There are at least four bike racks, but they can be crowded at times. As in the Math building breezeway, rain and snow can fairly easily blow through and still get bikes wet.
  2. In the Hawkins Hall underground parking ramp. Just to the right after going down the entrance, there are two bike racks that are a decent distance from the outside of the garage. This place seems popular with veteran commuters (I saw lots of bikes with 2+ panniers).
  3. There is some marginal covered parking for bikes in front of Krannert. There is an overhang about 8-10 feet up in the air and some load-bearing pillars, and among these are some bike racks. A bike parked there would probably get wet with much wind, though. Another minus is its proximity to the campus bars; left for too many nights, it would be a likely victim of drunken destruction.
It would be an easy fix to add more covered parking near campus: simply remove a few parking spaces in each of the parking garages, and add modern bike racks. While this would remove about a thousand dollars a year in "A" parking revenue, I'm sure the cost of removing and disposing of hundreds of bike frames and kicked-in wheels more than offsets it. Simple awnings are inexpensive, and can be used at several existing large bike rack areas without making the landscape substantially uglier. I'd much rather see awnings than rusted, abandoned bikes.

Wednesday, April 21, 2010

Slowly improving my ergonomic-ness

One of my coworkers in the lab recently decided to start back up on his piano-playing. Naturally, I was jealous (why did I never learn to play piano?). However, within a week he came into the lab bearing wrist braces and a grimace. Between learning Chopin and coding for 8+ hours a day on a 13" MacBook, all of the finger work finally did him in. He was unable to bend his wrists at all while wearing the braces, so he had to buy an ergonomic keyboard that doesn't require bending of the wrists.

I have always been slightly curious about exotic keyboard layouts and ergonomic ways of working, starting with learning the Dvorak layout last year and continuing when I saw several people at UW with fancy Kinesis keyboards. Since that visit I have been debating whether I need to worry about ergonomics, and if so, what to do about it.

For starters, I have tried various different ways of using my MacBook. Despite it being the midsize model (15"), using the cramped keyboard for extended periods of time is very hard on the wrists. This is compounded by the abundance of chairs and tables on Purdue's campus that make good posture difficult or impossible. The first experiment was to try sitting up straight. This is only comfortable in a small number of chairs on campus, so I tried an entirely different approach to typing posture: standing up while typing. This is also hard on campus because most tables are quite short. It has only worked at home, where my dinner table is designed for tall chairs.

The next step, which I am currently on, is experimenting with keyboard and monitor height. My other main gripe about laptops (besides cramped keyboards) is that they force you to look downwards at the screen. This makes it very difficult to maintain good posture while typing, since your neck is bent forward and your head points downward. My first attempt at fixing this was to buy a somewhat cheap ergonomic keyboard (the bog-standard, entry-level Microsoft ergo keyboard). With this, I can adjust the height of the keyboard and the laptop screen independently. It will take me a few days to get used to using an external keyboard again, as I never use a mouse these days.

I'm wondering what the long-term, optimal configuration will be. This is important to consider before I spend a lot of money on other equipment, and before I start graduate school and set up my student office. Does the split keyboard/screen necessitate having a desktop for long-term work and a laptop for short-term work? Is it worth it to have ergonomic setups at both work and home? It's fully possible to drive two external monitors with my MacBook Pro's video card, but all of the setup and takedown is a barrier to starting work easily. At the same time, I don't know if I'll be able to get a nice monitor and a decent Mac Pro or iMac in the graduate student offices.

I suppose time will tell. In the meantime, I'm going to continue improving my standing-while-typing posture, and hope that someday down the road I will not have to deal with the occupational hazards of the hacker/programmer (as my poor coworker must now).

Thursday, April 15, 2010

A new way to handle email

For the past four years, every day has brought a stream of emails to my inbox. Some days, this stream resembles a trickle (such as on break, or on the weekend). On weekdays, it often approximates a torrent. My strategies for diverting and handling this torrent of obligations, requests, and information have changed every so often. My goals in these incremental changes of process are to 1) minimize the time needed to find past emails, 2) minimize the mental overhead of keeping track of the status of emails, and 3) minimize the time needed to maintain the system that makes 1) and 2) possible.

In the past, I had a low volume of email to deal with, so I had a boring method of handling it: upon new mail arriving, I either replied to it or left it alone. After a few days or weeks, I would get tired of scrolling through my inbox. Some of the emails could be easily deleted or archived in a folder, but there were always some emails that were not yet "done". Perhaps they merited a long and thoughtful response, and anything dashed off immediately would be short and ill-conceived; perhaps they gave details of an upcoming event. The easiest thing to do was to use the inbox as a holding pen until these messages became 'resolved'. This strategy worked well until senior year, when I became wrapped up in so many different events, ideas, and mailing lists that my inbox would still hold 30 messages after being "cleaned".

I've decided to try a new approach, called the Trusted Trio.

Basically, you segment your email into three categories which explicitly model the lifecycle of an email. I'll call these categories "TODO", "In Progress", and "Archive". The first category is for emails that can't be responded to in a minute or two and need more time to be dealt with. This includes emails that require long responses, require some action outside the mailbox, or otherwise require me to do something.

The second category, In Progress, contains messages that require a later follow-up, pertain to a future event, or are no longer TODO but not quite dead yet. If you were to have an exchange with someone to set up a lunch, and were awaiting a reply with their preferred times, the entire thread would go into "In Progress".

The final category is for emails that are done, dead, or most likely no longer alive. Depending on the email client, you can organize this however you like: with a tagging-based email client (e.g., Gmail), adding subject tags is sufficient organization. I use Mail.app and MobileMe right now, so I use folders based on what part of my life the email pertains to. The current subfolders include Research, Personal, Shopping, and Class.
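
To make the lifecycle concrete, here is a minimal sketch of the three-state flow as code (in JavaScript, since that's what I spend my days in). The folder names and the message fields are my own invention for illustration; no mail client exposes this exact API.

```javascript
// A toy model of the Trusted Trio lifecycle. The message shape and the
// helper functions are hypothetical, not any mail client's real API.
var TODO = "TODO", IN_PROGRESS = "In Progress", ARCHIVE = "Archive";

// New mail: answer immediately if it's quick, otherwise park it in TODO.
function triage(message) {
  if (message.minutesToHandle <= 2) {
    reply(message);
    return ARCHIVE;
  }
  return TODO;
}

// Later passes move a message along its lifecycle.
function update(message, folder) {
  if (folder === TODO && message.awaitingReply) return IN_PROGRESS;
  if (folder === IN_PROGRESS && message.resolved) return ARCHIVE;
  return folder; // no state change yet
}

function reply(message) {
  console.log("replying to: " + message.subject);
}

// Example: the lunch-scheduling thread described above.
var lunch = { subject: "lunch?", minutesToHandle: 10,
              awaitingReply: true, resolved: false };
var folder = triage(lunch);     // "TODO": needs a real response
folder = update(lunch, folder); // "In Progress": waiting on their times
lunch.resolved = true;
folder = update(lunch, folder); // "Archive": done and dead
console.log(folder);
```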

My main problem with this new approach is that it gets messy with several different email accounts. Right now I have a half-dozen email accounts and many more redirection addresses (such as @mac.com, @acm.org, and so on). It is difficult to funnel all of these into a single IMAP account, especially when I'm not in front of my laptop. Hopefully when I start at UW, most of my email will slowly converge on just one or two accounts.

Tuesday, April 13, 2010

Concert Review: Miguel Zenón's Esta Plena

NOTE: this review was written as part of MUS 378 Jazz History, taught by Don Seybold.

Miguel Zenón’s Esta Plena. To me, it sounds like the name of a Spanish ballet. I have listened to music from Cuba, Mexico, and South America, but plena is none of these: it is the traditional music of Puerto Rico. Still, knowing just this did not meaningfully inform my expectations of the concert. I could tell it was something unfamiliar and different, on account of the unusual number of young people packed into the ground floor of Loeb Playhouse. My seat was towards the rear, and behind me were perhaps a half-dozen young Puerto Ricans. To my left and right were more college students. There was a buzz in the air. It induced students to text and tweet at a furious pace, which added yet more energy and tension with every character pecked.

After the obligatory Todd Wetzel introduction, the band quickly deployed to their instruments and began playing without hesitation or preamble. Hans Glawischnig, the bass player, set up a boisterous Latin beat that channeled the frenetic energy buzzing in the hall. The six seated behind me possessed the voices of a dozen, whooping, yelling, and otherwise emulating the soundtrack of a dance party in Spanish. As the theme of the first song was repeated, the younger subset of the audience met the beat with a steady clap. This was my favorite style played during the night: raucous, fast, and a distinctly “concert” sound (as opposed to upper-case Concert).

The concert’s theme was an exploration of plena – a music influenced by Spanish and African musical traditions. The main instigators of this sound were the three plenera (hand drums) of various sizes and their players: Héctor Matos (requinto, the smallest drum), Obanilú Allende (vocals and segundo, the middle drum), and Juan Gutiérrez (seguidor, the largest drum). The ensemble providing a jazz counterpoint to this trio consisted of Miguel Zenón (Alto Saxophone), Hans (bass), Luis Perdomo (piano), and Henry Cole (drums).

Merging plena and jazz is a nice idea in theory (on this basis Zenón was awarded Guggenheim and MacArthur grants), but in practice the fusion is ephemeral and fleeting. On some of the numbers, the two styles were intermixed; others were more theme-oriented, with some themes played by the plenera and some by the piano or saxophone. The height of the concert was the long drum solo that traded attacks with the plenera, yet mostly sidestepped tedious drum-solo clichés. The main problem I sensed was that the drum rhythms are fixed, and the folkloric quality of the music is much more structured than the floating-in-space harmonic aesthetic I often imagine while listening to modern jazz.

Regardless of the style, all performers were drenched in energy, whether improvising a solo or beating the living crap out of their hand drums in hard-to-imitate polyrhythms. Not since I saw Thom Yorke of Radiohead live have I seen a band’s frontman dance so wildly and without restraint while singing and playing. Miguel’s solos are plain as day to understand: just watch his body wobble about the stage, and match the motions and emotions to the movement in the music. The band’s energy was infectious, and throughout the concert it provoked yelling, clapping, and other concert-worthy (lowercase-c concert) forms of participation.

Unfortunately, only a small percentage of the audience was interested in “experiencing” the groove, so most members just sat calmly (as if watching a YouTube video, or sitting in a master’s clinic). Part of this is Purdue’s concert culture: when most of the concerts are sponsored by the local retirement home megacomplex, you shouldn’t expect many people to dance in the aisles. I would have much rather seen Zenón’s septet in a club with a dance floor, as plena music (and Latin-sounding beats in general) is undeniably designed to induce dancing.

To balance out the plena, several more typical jazz songs were also presented. The most pleasing segments of these were the improvisations of the piano and saxophone. None of these songs were terribly memorable for me, and I felt that they bored the P.R. audience as much as the plenera-wielding musicians, who didn’t have a single note to play in some pieces. One exception was a ballad, which was quite haunting. It began with a simple riff by Hans on bass, and slowly added complexity. With each new chorus, Miguel dug just a bit deeper into the theme, and by the climax he was dancing passionately with his horn. It reminded me instantly of Bolero, but translated into the context of a jazz ballad. Rarely have I felt more compelled to stand up and clap after the final bars.

The concert was a blast. I’m looking forward to new works by Zenón, and am especially interested to see if he can further integrate plena tradition into the improvisations of jazz.

Monday, April 12, 2010

Quick tip: disabling iPhoto sync

I've always wondered why iPhoto opens every time I plug in my iPhone. It seems counterintuitive, considering Apple's obsession with minimizing options and optimizing for the common case. Chances are high that I don't want to import any photos from my phone when the battery is low, there are no new pictures, or it is way past my bedtime. If I wanted to import photos, I would open iPhoto myself! This has led to constant frustration and interruption of my flow.

It turns out that there is a setting which induces this behavior. It's tucked away in a small application called Image Capture.app. This application is ostensibly some sort of image-importing utility, but I'd never seen it before. Among other things, it allows you to change the action to be performed when a "camera" device is plugged in. The setting is per-device, and you can opt to use any application (or no application). In theory you could even write your own program to decide when to import, but for me the options iPhoto.app and "No Application" are sufficient.

(Note: this is on Snow Leopard. If you don't have Snow Leopard, YMMV)

Sunday, April 4, 2010

Facebook Vacation

As has become customary this year, I'm going on a Facebook vacation for a month or so. This will last until the school year is done.

The point of this is twofold: increased productivity, and a test of willpower. While many people argue to the contrary, Facebook, Twitter, video games, and television are all addictive to some extent, depending on the person. As a test, I stopped using Facebook for a month last semester (November-December) to see if I was a Facebook-addict. Turns out, it was a lot harder to not use it than I thought, but I also became a lot more productive than I had ever imagined (we even submitted a paper that got published!). I concluded that I had been addicted, and since that month of going completely without Facebook, I realized that I was just as happy and had a lot more time. Since then I have made it a goal to limit my use of the service. Unfortunately, that goal has slowly slipped away; traveling for a whole month, while exciting, does horrible things to your productivity and focus.

This time around, I have just as many, if not more, demands on my time. It is logistically harder to go to the library after dinner because I live in Lafayette. I have class all day Tuesday/Thursday, with two 90-minute breaks interspersed. While it is tempting to waste those breaks on Facebook or Google Reader, I can no longer afford to throw away that time. In a little over a month, I will have my final exam, and my Programming Languages project (which is already behind) will be due. Not to mention, I have dozens of other things to worry about (research, moving, summer jobs, and weekend trips).

I will continue to blog sporadically, perhaps at the same rate as in March. Surprisingly, writing a blog post and creating new content is much more interesting than reading about a friend's awesome night out drinking, or seeing the latest dumb YouTube clip. It doesn't take as much time, either.

UPDATE: I've decided to keep on using Twitter, because it is much less likely to consume large amounts of my time. That said, I'm trying out a new approach to reading news and Twitter: work first, then relax. I'll only use Twitter after I've already done some work, and refrain from sending tweets during my working periods. I already try to use a similar strategy for email and IM, and it seems to work pretty well. At least, when I don't get urgent emails :).

Tuesday, March 30, 2010

Visit to University of Washington

Over spring break, Steph and I were in Seattle doing a variety of fun things to fill our time. Most importantly, we attended the official graduate school visit days for UW Computer Science (from now on, UW refers to the University of Washington, not any strange institutions in the Midwest).

I haven't really written much about the other visit days because I haven't had the time to come up with something coherent to say. All of the visits are of the same form: on the first day, spend your time alternating between short (20-30 minute) meetings with graduate students and potential advisors, and sitting through fire-hose presentations that are not terribly useful if you've read up about the department and school beforehand. The template for the second day (if there is one) is to do more "fun" tourist-like things in the city, and get to know the current graduate students or professors in a less formal context. Some of the things done to this end have included frisbee golf, drinking, snowshoeing, drinking, eating, and drinking.

So, when one gets to the third such visit, everything falls into a familiar pattern, and it can be at times difficult to put on a mask of sheer excitement while watching the 20th PowerPoint presentation in as many days. Another familiar pattern was the faces and archetypes of other prospective students: by the third visit, I had already seen quite a few of them at other visit weekends. One such student is Shaddi Hasan, with whom I went drinking on two consecutive weekends in two different cities (Boulder, Seattle). After three weekends, you can pick up on the differences between east-coast and west-coast students, who's trying to show off, and who is absolutely mortified to interact with strangers.

Day 1

On the first day, I met with lots and lots of people. Of these people, two were professors (Prof. Ernst and Prof. Notkin) and the rest were graduate students. Interspersed with these half-hour meetings were various PowerPoint presentations that showed the current research of a few professors. For lunch, I went with a group to a nice Salvadoran restaurant off The Ave. In the afternoon were some more meetings, and then finally, after all this, there was a fancy dinner and graduate student party in the HUB (the Husky Union Building).

The dinner and reception were probably the best parts of the night; Steph was with me, and was able to socialize with some of the graduate students and get a better feel for what the department is like in a social sense. We both sat near Dan Grossman for dinner and listened to his stories of hiking for an entire summer, among other interesting bits. One of the funnier parts of the evening was the CSE Band, which performed alternate lyrics to pop and oldies songs live at the grad student party. I don't remember many other details from the party, but it was a terribly long day and I had been through a bit of beer by that point.

Day 2

The second day, as explained above, tends to be less hectic and more personal. At CSE, I actually spent most of the morning in yet more meetings with people. This wasn't so bad, since I was still running internally on Eastern Standard Time (hence, a 9am meeting was actually a noon meeting for me, and all was good). It was at this time that I met with Luis Ceze and Dan Grossman formally. Doing all of these formal meetings can sometimes be awkward, especially if you do not have a bunch of questions in your clip to use as ammunition. At this point, I had already talked to both professors the previous night, and of course had read every scrap of available information on the website.

After these last few meetings, Steph and I went to Hank Levy's house with the Security/Systems group for a nice lunch. His house is at Sand Point, between the Burke-Gilman trail and the water. I was able to speak with Alexei Czeskis, a Purdue CS graduate and one of the star undergraduates that my cohort looked up to as little freshmen and sophomores. After eating, I hung out on the dock and talked to some people about my JavaScript work, and about the challenges of JavaScript security. Later, I returned to the house, where Steph was talking to none other than Mark Zbikowski, employee #55 of Microsoft and architect of NTFS, Cairo, the MS-DOS executable format, and other things. Apparently he's returning to graduate school ^_^ He had some interesting advice and stories to relate, and it is interesting to hear advice for a new Microsoft hire from someone who has been there since nearly the beginning.

After this lunch, we went back to the Paul Allen Center, and then headed out with the PL/SE groups (Notkin, Grossman, Ernst) for some [indoor] beach volleyball. This was way more fun than I thought it would be: professors diving face-first to make a save, trading jokes with potential advisors, and getting some exercise at the same time. While neither Steph nor I are much good at volleyball, we had a good time getting to know the personalities of the group a bit better. Afterwards, we killed some time in the park throwing frisbees around, and then headed home.

We were way too tired to go out after another full day, so we grabbed some quick food at Thai 65, tried out the hot tub at the hotel, and went to bed quite early.

--

Overall the visit went very well. I learned a lot about the department, the people I may be working with, and what is currently going on research-wise. I was also able to get a sense for which projects will be available next fall, and possibly even in the summer if an internship never comes through. I still have a while before I'll feel as comfortable talking to new professors who don't know me as well, but I'm confident I'll fit right in with the people in Seattle.

In the next post, I'll talk about why Chicago was especially green, the pros and cons of using Amtrak vs. flying, and the rest of our spring break in Seattle.

Friday, March 26, 2010

MSR Video: Research Perspectives on JavaScript

Just today, Channel 9 (the MSDN channel that covers Microsoft Research) posted a video about "Research Perspectives on JavaScript", featuring Erik Meijer as interviewer and the Bens (Ben Livshits and Ben Zorn) as interviewees. This video is particularly interesting to me, because their JSMeter project is closely related to our PLDI paper this June (we even get several mentions in the video). I'll summarize the main points of conversation, but will omit some details, as the video is quite long (50 mins). My comments and opinions are interspersed in bold.

Erik begins by asking about the names. How do they always come up with funny names like Gatekeeper, JSMeter, and so on? Ben Zorn explains that it is important to pick a good name, because names tend to stick in people's minds better than paper titles (unless authored by Phil Wadler). I agree. That said, I would much rather have a boring paper title than a ridiculous backronym project name, the likes of which are way too common on large projects in the sciences.

Next, the JSMeter project is discussed. The project goal is to find out what exactly JavaScript code in the wild is doing, and how it compares to C, Java, or other languages. They instrument Internet Explorer's interpreter, and aim to measure the behavior of *real* applications that end-users visit every day.

As far as their methodology goes (as in, what data is actually recorded), it is very similar to what we did in our research. There are some differences: they measure physical heap usage of the DOM vs. JavaScript, whereas we only measure the objects on the JavaScript heap (without respect to the physical size of objects), and they measure callbacks and events explicitly. As far as analysis of the data goes, our approaches diverge according to our goals, but cover many of the same statistics.

The first point Ben Livshits talks about (and their main conclusion in the paper) is their observation that SunSpider and other JavaScript benchmarks do not have much in common with real-world applications such as GMail, Facebook, Bing Maps, etc. The second observation is that function callbacks are typically very short on sites that need high responsiveness. SunSpider is called out for having a few event handlers with very long execution times, which biases against interpreters that handle small functions well (such interpreter behavior is desirable in the real world).

I'm not sure exactly what "short" means here, but we did see in our work that function size was fairly consistent with respect to static code size and dynamic bytecode stream length. Unfortunately, we could not distinguish which function invocations were callbacks. I also wonder if opportunities for method-level JITing are overrepresented in the SunSpider and (especially) V8 benchmarks.

There was some discussion of whether JavaScript as a language will evolve to be better, and also the tricky questions of what would one add or remove to the language. Ben Zorn points out that JavaScript, unlike Java or C, is usually part of a complex ecosystem in the browser. This means that ultimately, it may evolve, but only slowly and in lockstep with other complementary languages. He also calls it a "glue" language as opposed to a general purpose language, one that mainly deals with making strings and gluing together DOM and other technologies.

I agree that it is bounded by other technologies, but it can also be a general-purpose scripting language (see, for instance, its use in writing large parts of Firefox, and as an embedded scripting language for many environments and games). I think the issue of poorly designed semantics (the root of all trickiness in efficient implementation) is orthogonal to the issue of whether it's a generally expressive and useful language. PHP is another language in this vein (apparently useful, but horrible semantics).

Erik asks about the use of dynamic features of JavaScript by developers. Ben Livshits immediately concedes that many people use eval, and while some of its uses are easy to constrain/replace with safe behavior (JSON parsing, for example), some are "more difficult". But, he does not see this as a very big problem because a lot of contemporary JavaScript code is written by code generators. Ben Zorn explains that with results from JSMeter and our work, researchers and implementors can gauge the impact of certain restrictions (such as saying "no eval" or "no with").
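
As a concrete illustration of the "easy to replace" case mentioned above: a common idiom of this era is to eval a JSON response, which can be swapped directly for JSON.parse. The payload below is made up for illustration.

```javascript
// The unsafe idiom: eval will happily execute arbitrary code if the
// "JSON" payload turns out to be malicious.
var payload = '{"user": "brian", "unread": 3}';
var dataUnsafe = eval("(" + payload + ")");

// The safe, constrained replacement: a real parser, no code execution.
var dataSafe = JSON.parse(payload);
console.log(dataSafe.user, dataSafe.unread); // brian 3
```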

We actually approached it from the other end in an effort to investigate the implications of assumptions already made in JavaScript research. Since our sources and data are freely available on the project webpage, it's possible to go in the other direction as well by tinkering with our framework and replaying executions on an interpreter simulator with different semantics.

Our conclusions are a bit different in this area, as well. You can read the paper for more details, but in short, we think static typing and static analyses for JavaScript will either be too brittle, too expensive (due to code size), or too permissive to make any useful guarantees. That said, we see lots of room for heuristic-based optimizations, which have already made inroads into the implementations of Chrome and Firefox.

We learn that there is a dichotomy between sites that use frameworks and libraries, and sites that use handwritten code. We saw that about half of the top 100 sites used an identifiable library. Script sizes are also seen to be very large. Erik asks about the functional nature of JavaScript: do scripts often use higher-order functions? They defer to our study for quantitative numbers (thanks for the mention) and say that it is usually frameworks and translators that use HOFs (for example, jQuery's extensive use of map/reduce patterns and chaining). Of course, callbacks and event handlers are one ubiquitous use of closures. Ben Livshits talks a bit about library clashes (i.e., different libraries may change built-in objects in incompatible ways), which they did some work to detect statically in other research. I know that Benjamin Lerner at UW has done some work in this space, in the context of Firefox plugins and how to make them play nicely together. He makes an anonymous jab at some news sites that sidestep such incompatibilities by loading separate widgets in their own iframes (at the expense of horrible page load times).
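
For readers who haven't bumped into the term, here is a toy illustration of higher-order functions and closures in the style in which frameworks use them; the prices example is my own invention, not from the video or either paper.

```javascript
// Higher-order functions take other functions as arguments; the callbacks
// below are closures, capturing `total` and `tax` from the enclosing scope.
var prices = [3, 8, 1.5];

var total = 0;
prices.forEach(function (p) { total += p; }); // closure over `total`

var tax = 1.07;
var withTax = prices.map(function (p) { return p * tax; }); // closure over `tax`

console.log(total);   // 12.5
console.log(withTax); // approximately [3.21, 8.56, 1.61]
```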

Erik returns to the issue of language design: what would you like to remove from or add to the language? Ben Livshits talks about subsets of JavaScript, and their usefulness for writing safe and limited code (such as in ads). This approach has been used in several papers, but does not yet seem to have much traction with browser vendors. It would be nice, though. In general, it is agreed that JavaScript needs fewer features, not more. I would start by removing the 'with' statement. Ben Zorn says that nothing needs to be added, because things like classes or other language features can be built on top of prototypes. That said, he is not convinced either way as to the usefulness of a prototype-based vs. a class-based object system. Yeah, me neither. He then explains prototype-based objects and ways to optimize programs in this paradigm, such as V8's "hidden classes".
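
To make that last point concrete, here is a small sketch of class-like structure built on top of prototypes, with comments on where a hidden-class optimization applies. The Point example is mine, not from the video.

```javascript
// A "class" on top of prototypes: a constructor function plus shared methods.
function Point(x, y) {
  // Initializing fields in the same order for every instance lets an engine
  // like V8 assign all Points the same hidden class, so field accesses can
  // compile down to fixed offsets instead of dictionary lookups.
  this.x = x;
  this.y = y;
}

Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

var p = new Point(3, 4);
console.log(p.norm()); // 5

// Adding a field to just one instance forks it onto a new hidden class;
// uniform object shapes are what make the optimization pay off.
p.label = "home";
```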

Ben Livshits says that the big strength and weakness of the Web are the same thing: being able to load code and data from disparate servers, and combine them fairly arbitrarily via very late binding. Predictably, the two Bens are at odds over whether this is a good thing or bad thing for the web in the large. On the one hand, having no rules makes it easy for a 12 year old (or a developer with the same skill level) to hack something together because the platform is so tolerant to broken code. On the other hand, this flexibility invites a lot of security problems and ties the hands of those more proficient developers who want more invariants and guarantees about their code. This lack of discipline and control is probably what drives companies to translate large programs into JavaScript from some other language like C# or Java.

One lesson of JSMeter that Ben Livshits talks about is the possible benefit of more integration between the language and the platform. Many times, browsers load the same page over and over, but do not learn anything about how that page actually behaved. Ben's example is that if code only runs for a few seconds, then it is not useful to run the garbage collector (as opposed to other methods, such as mass freeing by killing a process or using arena/slab allocation). Right now, browsers are utterly amnesic about what happened the last time (or 10, or 1000 times) they loaded a page, and only cache the source text in the browser (as opposed to the parsed AST). This is something that jumped out at me as well. Sounds like an interesting thing to look at. They talk about this again near the end.

Erik asks whether the parallel but separate specifications and implementations of JavaScript and complementary technologies like the DOM are necessary. Why not just make one specification to rule them all? Both Bens say that the border is fairly arbitrary, and increasingly applications pay a large price when they cross that boundary frequently. Ben Livshits also says that going forward, it is a bad idea to ignore these other technologies when thinking about analyses and optimizations. They did not suggest any specific methods for such cross-border optimizations, though.

This is a huge problem for tracing JITs like TraceMonkey, because they have to end a trace whenever it leaves JavaScript for native DOM methods (usually implemented in C++). V8 (Google Chrome's JavaScript engine) tries to minimize the number of such exits by implementing most of the standard library of JavaScript in JavaScript, and using only a small number of stubs. Another approach may be to compile the browser with the same compiler that does the JITing (say, with LLVM), so that there is less penalty for crossing the DOM/JavaScript execution boundary in machine code.
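
As a concrete (and hypothetical) illustration of the boundary problem, consider a hot loop whose body calls into the native DOM on every iteration; each setAttribute call is a JS-to-C++ transition where a tracing JIT must end its trace. This snippet assumes a browser environment.

```javascript
// Each iteration crosses from JavaScript into native (C++) DOM code, so a
// tracing JIT like TraceMonkey cannot keep the whole loop on one trace.
var nodes = document.getElementsByTagName("div");
for (var i = 0; i < nodes.length; i++) {
  nodes[i].setAttribute("data-seen", "true"); // JS -> native boundary
}
```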

Ben Zorn goes as far as to claim that JITs can only go so far to improve JavaScript performance, and that DOM interactions are the lower-hanging fruit right now. He bases this on the fact that most scripts are not computation-heavy, either because they are interactive and wait on the user to create events, or because they spend most of their CPU time inside native DOM methods. Ben Livshits thinks that one of the biggest challenges is that JavaScript (and web applications in general) are network-bound, rather than CPU- or memory-bound. Essentially, download time and network latency dominate any other sources of delay.

I agree on the 'being interactive' part, but disagree on the compute-heavy part. Especially with things like games, the canvas element, and animations in JavaScript, numerical computation is starting to become significant. Furthermore, as JavaScript becomes more and more the 'assembly of the web', I would guess that the CPU time will tilt towards general-purpose computation, and away from DOM calls (which most significantly are used for the V in MVC).

--

It's great to hear at length from some of the other folks doing research work around JavaScript. I'm looking forward to seeing the final version of the JSMeter paper at USENIX Webapps, and also am looking forward to our paper being presented at PLDI in June. Every time our work is presented, we get lots of new and diverse feedback, which raises ideas we have not yet considered and forces us to dig deeper into our understanding and data.

Thursday, March 25, 2010

A tale of three bikes

Spring is (nearly) here, and in Indiana that means it's time to dust off the bike saddle and begin the time-consuming process of tuning, upgrading, and repairing bicycles. Wait, bicycles is plural, right? Let me review the three bicycles that Steph and I share:

  • The fixie: Steph rides a red fixed-gear bike mostly around town. I confess to not knowing its particular details, as I'm not a fixed-gear enthusiast. All I know is that it would not serve me well on the Chauncey and 9th St hills.
  • The frankenbike: My father had an old Trek bike (circa early 90's) hanging up in the garage, so I was allowed to borrow this bike for the school year. I've done some good rides on it (20-40 miles) in the past semester, but it lay largely unused during the winter months due to Purdue's propensity for poor road maintenance.
  • The Felt: Steph bought a Felt Z80 two years ago on a whim after her just-purchased-that-day bike was totalled in a car-bike accident. Unfortunately for her, she didn't know much about bikes and picked one that she no longer enjoys riding. So, I may end up using it as my main road bike.
All three bikes have required some service prior to use this season:
  • The fixie had a flat tire, which needed to be patched. Thankfully, Steph is very good at patching tubes, so this was not a big problem. The chain was also cleaned a bit. It is remarkable how little maintenance this bike needs compared to the other two geared bikes!
  • The frankenbike needed a lot of cleaning: aside from the chain, I doubt that any part of the frame or the components had been cleaned since Bill Clinton was trying to pass health care reform. Of course the chain needed cleaning too, but that is because I rode several hundred miles on it in the fall semester with minimal cleaning. Beyond last semester's slight upgrades (new handlebar tape and front/back lights), I bought two new tires and had both wheels trued at Hodson's Bay.
  • The Felt bike has not really been used for substantial road biking, so I have decided to try it out on longer rides to see if I like it. Aside from the dirty chain that I had to clean, there was little to be done to make the bike rideable.
For Christmas this year, I received some more biking gear as a gift from Dad: a pair of biking shoes, and some very nice clipless pedals. At first, I was planning to put these on the frankenbike, but realized that I might as well put them onto the Felt bike if I was going to only use the Felt bike for long rides. (For those who don't know, clipless pedals and shoes work similarly to ski bindings. The main purpose is to keep your foot attached to the pedals for better efficiency.)

It took me the better part of 3 hours to assemble my shoes, remove the pedals from the Felt bike and frankenbike, and install the new pedals. For now, I put the cheapo plastic $3 pedals on the frankenbike, but I might change back to the old, dirty metal pedals that used to be on it. Perhaps next time I want to take off old dirty pedals, I should buy a pedal wrench, because it took an unbelievable amount of force to remove the damn pedals with a US 5/8" wrench (not the metric 15mm they're supposed to take).

Once assembled, I went out for a short (16mi) ride up S. River Road in West Lafayette, taking the huge hill on N 500 W (11% grade for about 1200 ft), and returning by Lindberg Rd/Salisbury. Overall, the Felt bike is much more fun than the frankenbike: it is much lighter, has a larger cassette and three chainrings instead of two, and has Shimano STI shifters (indexed shifting), whereas the frankenbike has old-school knobs (friction shifting) that you have to move by hand while hoping the derailleur ends up in the correct position.

If you don't know what these terms mean, imagine the difference between fretted and unfretted string instruments. With a violin, you must know exactly where to place your fingers on the string to make the correct note; this is how shifting on the frankenbike works. On the Felt bike, the distance between gears is fixed (like the distance between frets on a guitar).

My goal for the rest of the semester is to do some sort of exercise at least three days a week: Monday, Wednesday, and Friday. If the weather permits, I'd like to do short (20-30mi) rides. If it does not, I can go to the pool for a while instead. Eventually, I'd like to be able to do a 50 or 100 mile ride; most of those are in May or later, so I have a lot of time to train and work up my mileage. I never got much beyond 35 miles in one ride last semester, but I have a lot more time in my schedule for long rides now. Or so I hope.

Tuesday, March 23, 2010

Our WebKit instrumentation and PLDI dataset has been released!

In preparation for the PLDI camera-ready deadline later this week, Gregor and I have finally gotten around to packaging and uploading the files we used in our experiments.

The first set of files is the raw sources of the instrumented WebKit branch, the trace analyzer and static analyzer, as well as the database generation and graphing infrastructure.

The second and third downloads are the traces that we used as our raw data set in the PLDI paper, as well as the resulting database when these traces are analyzed and inserted into a sqlite3 database. These files are fairly large, but we feel that it is important to allow our experiments to be recreated by third parties. One frustration of working in the programming languages research field is that all too often, when a technique or analysis is tested by an implementation, the corresponding sources and datasets are not made publicly available. This makes it impossible to verify results, look for possible improvements, or find inaccuracies in papers. We want to do our part to combat this trend by providing everything that we used in developing our work's conclusions.

The files are hosted at Gregor's Purdue website: http://www.cs.purdue.edu/homes/gkrichar/js/

For those keenly interested in building and running the project themselves, there is a somewhat useful README in the top-level directory of the source tarball. Happy hacking!

Friday, March 12, 2010

Code Bubbles (aka super-Eclipse)

This post begins a hopefully-weekly series about research that I have recently read or found out about. The posts may be written sporadically, but I hope to publish them on a regular schedule.

Over in software engineering-land, a new paper at ICSE 2010 (May 2-8, Cape Town, South Africa) unveils the idea of Code Bubbles.

If you go to the site, there is a 1080p YouTube demo of the idea. Code Bubbles is an IDE where related bits of context can all be viewed together spatially, regardless of source file or class. These independent code snippets are displayed in separate "bubbles", which can be grouped together according to task. Additionally, things like emails, notes, Javadocs, and other assorted content can be loaded into a "bubble" and grouped with related content. The whole system seems to take the concept of "working sets" to its logical conclusion.

One common task in Java IDEs such as Eclipse is to read some code, recursively look up declarations of types or methods, and then write some new code that incorporates all of these different types and methods. A similar process happens when a developer tries to find and fix a bug: starting from a stack trace or error message, one must trace through the execution of the program, possibly through many disparate classes, methods, and so on. In Eclipse and similar IDEs, this task involves opening many files as tabs in the same workspace; after trying one particular program trace, the windows must be closed individually, and if one finds a good combination of views that exposes the bug, there is no way to serialize it so that others can also use the view.

Code Bubbles could potentially make the above situation much nicer: it has debugger support, which can automatically open relevant code snippets from the call stack at any point during execution. One of the best features in my mind is the ability to import and export working sets (groups of bubbles) by email. Coupled with some design explanations, a bunch of pre-determined snippets of code could go a long way towards documenting the architecture and important parts of a complex program. Other cool features I noticed include
  • Traditional class/method browser, from which any method can be pulled out into its own bubble.
  • User queries: I didn't get a good idea of how this works, but it seemed to be a full-text search on method, variable, class, and type names (similar to the Awesome Bar in Firefox).
  • Labeling of bubble groups and workspace areas
  • Zoom in/out for rearranging groups

Just judging from the video, I'm quite impressed by the amount of polish present (for a research work). Their framework is built on top of Eclipse, and uses its backends for the dirty work (parsing/syntax highlighting, static analysis, editor UI). I'm interested in the details as well; they will post a PDF after the conference is over in May.

• • •

I think there are plenty of exciting directions to go from this work. For one, it would be useful to find out through practical experience when this sort of interface is useful and when it is not. My guess is that this will be great for discovering a new codebase or hunting for bugs, but not as useful for writing new code. It also seems to require a 24"+ monitor to really shine (which is fine, because there is no other practical use for such large desktop monitors).

One idea that struck me is that since the workspace is so large, what would happen if multiple people could simultaneously share the workspace and see each other's changes in real time? I don't know if such a scheme would work as well in an Eclipse-based application (cf. web-based collaborative editing such as CodeCollab and collabedit, or systems like SubEthaEdit built from the ground up to support distributed/collaborative editing). It would also pose interesting (read: nontrivial) questions about the integration of version control into such a system, especially if there is no notion of who "owns" a certain version of a file.

The last point I want to address is the interesting yet puzzling response to the project on the rest of the Internet. On a Slashdot story about the project, the overall responses were neutral to negative. On a similar LTU story, almost all the responses were positive. This is surprising to me, because it is usually the reverse: LTU'ers complain about IDE's reinventing Smalltalk, while Slashdotters coo over pretty graphics. I guess the one thing that helps to explain the discrepancy is the use of Eclipse and Java, which (somehow) has a better reception at a programming languages research blog.

Tuesday, February 16, 2010

Graduate School Accepts and Visits

UPDATE: Accepted by Maryland today. Haven't decided whether to go on the visit (weekend after spring break ends)

In the past few weeks, I have heard back from all but two graduate schools: I'm accepted at University of Washington, University of Texas-Austin, University of Colorado-Boulder, and UCLA. I have yet to hear back from University of Maryland and Cambridge (to which I applied for the 1 year master's program in conjunction with the Churchill and/or other fellowships).

This is a huge relief; no longer do I need to worry about what my choices are, just which to choose. To that end, I will be visiting the first three schools mentioned above in the next month.

UT-Austin: February 26-28
Boulder: March 4-7
UW: March 13-21*

*Okay, that last date may look a bit strange. In truth, I will be spending all of spring break in Seattle with Steph, and the visit days happen to fall over Purdue's spring break. We'll be taking Amtrak out to Seattle, so we'll actually only be in town from Monday to Saturday.

Unfortunately, UCLA's visit day falls at the same time as Boulder's, and Boulder has a 3-day visit (vs. 1 day at UCLA). I still haven't heard back from Maryland for whatever reason (still snowed in?), so I can't really commit either way to visiting their school. And Cambridge... I don't think they even have a visit day.

During/after each trip, I will write up a trip summary and present some information about the graduate schools I visit. Bon voyage!

Saturday, February 13, 2010

Project website, and MobileMe testing

Today I decided to try out MobileMe, Apple's in-the-cloud syncing and webspace hosting service. I get two months for free, and will probably subscribe at the end of that time. This is mostly out of convenience: perhaps days after my graduation from Purdue, my CS account (and my email accounts at Purdue) will be zapped. I figure it is better to start transferring now to a university-neutral host for my website and non-school-related emails.

As part of this, I moved my iWeb-backed research webpage over to www.brrian.net. Over the next month I'll slowly start moving files off of the CS account's public folder to my public iDisk (aka cloud-based storage).

I've also finally gotten around to making a website for my CS 565 project, now located at www.brrian.net/js/. I will be keeping all news related to that project on a mini-blog specific to the project. I don't anticipate making more than a dozen entries over the semester, so it is not worth the work to set up a new Blogger blog, or to write such posts here and link each of them individually.

Finally, you can access this blog via the shortcut blog.brrian.net. I plan to use this brrian alias more often, since I do not know what my username will be at my graduate school.

Monday, February 8, 2010

I'm published!

Last week, I found out that my paper was accepted at PLDI 2010. This was joint work with several others at Purdue last semester, including Jan Vitek, Gregor Richards, and Sylvain Lebresne (who has since returned to France and found employment). The paper itself was submitted almost 3 months ago, in mid-November. We received our first round of reviews in the second week of this semester, and now things are wrapped up. The conference itself is June 5-10 in Toronto, which is dangerously close to the beginning of internships; hopefully I'll find a way to fly out there for a week.

PLDI was extremely rough terrain this year: out of 200 papers submitted, only 40 were accepted. Though 40 is a relatively high number of papers, that still works out to a 20% acceptance rate for the conference. Two papers from Purdue were accepted, a rate significantly above the overall one, but many submissions did not make the cut. That said, I'm really excited about some of the papers that have been accepted this year. I'm especially excited to see some of the new papers on verified compilers (Jean Yang, Zach Tatlock) and the profiler analysis work from Amer Diwan's group.

This is a major milestone for me: my first publication! I'm relieved that I actually finished a project, after a year of having no focus or research direction in Japan. Besides the ins and outs of research, I learned a lot about writing papers and moving fast, and gleaned at least some insight into the tricky problem of finding (and answering) the interesting questions of research. Unfortunately, most of the fellowship foundations and graduate schools have already reviewed and decided on my applications, so they will not see my updated CV. At least I can update my website after applying... :)

Just as a preview, our paper "An analysis of the dynamic behavior of JavaScript programs" is somewhat of a meta-analysis. Over the past few years, many people have published papers about JavaScript. These generally are add-on type systems or static/dynamic analyses that try to make JavaScript a safer (or at least more predictable) language. JavaScript shares a number of similarities with object-oriented and C-family languages, but it also has a number of crucial differences. These include prototype-based inheritance (like Self), closures, and objects with flexible sets of fields. Many of the published analyses for JavaScript assume its behavior is similar to other languages with class-based inheritance; similarly, other papers assumed that language features such as `eval` and field deletion were rarely used. By tracing the execution of real-world JavaScript applications (Gmail, Facebook, etc), we show that these assumptions are often violated. We also produce some data that may be useful for JavaScript implementors.
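
To make those "crucial differences" concrete, here is a tiny sketch of the dynamic behaviors we traced; the snippet is illustrative and not taken from the paper.

```javascript
// Objects have flexible sets of fields: they can grow and shrink at runtime.
var user = { name: "brian" };
user.school = "Purdue"; // add a field after construction
delete user.school;     // field deletion, which many analyses assume is rare

// Prototype-based inheritance (like Self): objects delegate to other objects.
var base = { greet: function () { return "hi, " + this.name; } };
var derived = Object.create(base);
derived.name = "steph";
console.log(derived.greet()); // "hi, steph" via the prototype chain

// eval can conjure new code (and even new variables) at runtime,
// defeating most static reasoning about the program.
eval("var conjured = 41;");
console.log(conjured + 1); // 42
```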

Over the next few weeks we have some minor editing to do, and I'll be busy with some other projects. Not to mention interviews for internships, grad school visits, and classes!

Monday, January 18, 2010

Engagement, and my dislike of Facebook

A month or two ago, Stephanie and I decided to get engaged. I am not a fan of tradition for its own sake, so there was no grand gesture on one knee by either of us; instead we came to the consensus over time that we were both expecting to follow the other indefinitely. I was not sure at first that this was wise because of the many unknowns involved in applying to graduate school; once we discovered that the other was willing to compromise regardless of admissions outcomes, the deal was sealed. We are getting married. To my surprise, that has been the easiest decision yet.

The particulars of when, where, and how a wedding happens can be the subject of much debate; even the step of announcing an engagement can be done in many different ways. I tend towards being quiet and unscripted about such attention-grabbing announcements, while Steph has many more ideas of what should and shouldn’t happen. Initially we planned a surprise announcement; this was axed on the grounds that Steph’s roommates were able to easily figure out something was up. Her best friends mostly knew, but we were able to keep it off Facebook (lest the whole world know before our relatives, or worse, that relatives learn of the engagement via the impersonal HTTP protocol). So, rings patiently waiting in their leather boxes, we tried to come up with a more timely and practical way of "breaking the news".

Next was telling friends and family; we planned to do this at Steph’s graduation reception, but weren’t sure whether my family could make the trip around the lake so close to finals week at university. At the same time, we didn’t want to tell my family at Thanksgiving in person and face driving to Wisconsin for the sole purpose of telling Steph’s parents, or worse, telling one set of parents well before the other. Such a lag would be sure to stir resentment or feelings of favoritism. Steph informed her parents by herself. She outright told her father while driving, and delivered a rose to her mother at work. Once Steph’s parents (mother, in particular) knew, we had limited time until it was tweeted, facebooked, and blogged (okay, maybe not) around the world. With this in mind, we had a Skype battle royale featuring Brian+Steph vs Brian’s Parents. They were very happy for us; over the remainder of the semester we began telling nearly everyone about our engagement.

Except a certain Mr. Facebook. Until about five years ago, one did not have to worry about an engagement being disseminated over the internet equally to relatives and distant acquaintances (unless you are of substantial celebrity, which does not apply). In the present, one must be extremely careful to mitigate the embarrassing situation where your uncle, cousins, former BFF’s and others learn of an engagement in the [re-contextualized with extra thumbs, blue boxes, and inane comments] setting of Facebook. While it is somewhat nervewracking for me to tell people about being engaged, at least I have some control over the message. Some people scarcely remember whom I was dating, the gender of my partner, or that I was dating at all. Others are intimate with our love story and inquire regularly. It’s just better to have control of (or at least blunt the impact of) your message, especially when it’s one you will [hopefully] only deliver once. Recall the last time you heard about an old friend’s death over Facebook. That guilt of lost communication and of being a lame friend is something that I want to escape. Perhaps the only solution is to exit Facebook, and I have considered it several times.

--

Epilogue.

Steph is now working this semester at Purdue, and we have moved into a new apartment together. Planning for the wedding is mostly deferred for later, because I don’t yet know what university (or country) I’ll be at in a year’s time. In the meantime, I am finishing my final semester at Purdue, and we are slowly merging our formerly separate ways of life into a single household. It’s harder than it would seem, but nonetheless enjoyable.