On role models and hypocrisy

I don’t know where it started nor why it persists so strongly. I could believe an argument that it makes a great headline and in this age of 24/7 breaking news, such stories are easy money. I could also believe an argument that we are horribly insecure about our faults and when faced with someone that inspires us, we are secretly relieved to find reason to let ourselves off the hook.

 
Of course I’m talking about when the role model falls. 
 
What is odd about the scenario is how we got to a place where we expect our sports heroes to be examples of great moral fiber or our political leaders to be without past transgressions. There is a moment when humanity catches up with everyone, but for some reason, our role models are expected to ascend above this.
 
We have lost the literal sense of the phrase role model and exchanged it for person model or life model where any noted imperfection is a disqualifying mark. 
 
I can’t say I don’t agree that those in highly influential positions should be held to a higher standard, and I am pleased when the role model does the right thing and disappointed when they inevitably do the wrong thing. That said, I do not think that mistakes – no matter how grave – change the reality of the role that was once admired. I don’t know why you would celebrate a tech leader’s failed project as the sign they are a fraud, a religious leader’s moral failures, a brilliant mind’s ignorance, or an overachiever’s regression to the mean.
 
It seems to me to be both more useful and more human to approach these situations with more heartbreak and humility for the fallen. That we should not mourn the death of our heroes but share in the pain of their stumble and root for them to pick themselves up again.
Posted in Uncategorized

Weddings

In June of 2007, my wife and I were married. The year or so before that, then, we were planning for the wedding. For those who have not been through this process – or to remind the ones who have – creating such an event is a ton of work and a series of difficult trade-offs. The first and most painful trade-off, after the date and location, is the guest list.

We wanted our wedding to be a celebration of all the people in our lives that had made us who we were. We also had a budget we could spend. My mother wanted to be able to share the day with her old college roommate. I had a pretty large group of friends from high school. We also had a good-sized list of college friends. In the end, as you may remember, there were a few friends – several, actually – who were not invited. I still feel really bad about this, as I know how much it would have meant to them to be invited. And I know how much it hurt them not to be.

But I felt I should honor my mom by letting her friend come, along with the closest members of our extended family. This event was to celebrate the path that brought us together, and our families were a huge part of that. The choices were hard, but they were right. And if I had to make them a hundred times, I would expect the same result each time.

This came to mind today as I saw a tweet about conference organizers actively seeking out women speakers. I don’t organize a conference, and every event has different goals, but it seems to me this is very similar. Each conference wants to show off the advances in the field, or put forth a diversity of challenging ideas, or simply blow attendees’ minds. And all have a budget of time slots and money. It seems very natural to me that an organizer would want to ensure there was proper representation from each group that makes up the community. The choices are painful, and it sucks knowing that there are good friends who will have their feelings hurt, but it’s the right choice.

Posted in Uncategorized

permissions and hypermedia and discoverable apis

Toward the end of the day today, Alex and I were discussing permissions, usability, and keeping the API DRY. The problem is that you have permissions that are part of your app (duh), and these permissions are not the same for every user (duh). Usability dictates that we should hide unusable actions and unviewable pages (duh). But we want a nicely organized API that RESTfully describes resources. We also want to use that API code for our web tier, and we don’t want to make a bunch of /classes and /permissions HTTP requests for every web page we render.

Then I finally realized the connection between hypermedia and discoverable RESTful APIs. All the times I’ve read descriptions of discoverable URLs in APIs, all I can hear is “SOAP! WSDL! Auto-generated code!” and then I lose interest because that’s not my style. What I missed, though, is that HTML pages have this capability built into them. This is great as it allows links between pages, forming the whole web thing, but it also gives a pretty intuitive declaration of your permissions model. So including (or specifically excluding) the proper API endpoints provides all that discoverability stuff, but it also lets you express your permissions model in as intuitive a fashion as you can in your HTML code.
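As a sketch of what that could look like on the server side (all names here – representClass, user.permissions, the link rels – are hypothetical illustrations, not our actual API):

```javascript
// Build a resource representation whose links reflect the current user's
// permissions. A client that renders only the links it receives never
// shows an action the user isn't allowed to take.
function representClass(klass, user) {
  var links = [{ rel: "self", href: "/classes/" + klass.id }];
  if (user.permissions.indexOf("edit") !== -1) {
    links.push({ rel: "edit", href: "/classes/" + klass.id + "/edit" });
  }
  if (user.permissions.indexOf("delete") !== -1) {
    links.push({ rel: "delete", href: "/classes/" + klass.id });
  }
  return { name: klass.name, links: links };
}
```

The permissions check happens once, where the representation is built, and both the API client and the web tier get the same answer for free.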

Posted in Uncategorized

Font face and photos

One of the interesting problems we ran into when using the masonry layout on remember was the effect it had on perceived page load time. The first version naively waited for all images to load before laying out the “bricks” which was pretty unacceptable with the number of user-contributed images we have. If you are unfamiliar with the masonry layout, the engine starts with a block and then places it in the least-tall column. Placing the block increments the column height by the block’s height, so if you don’t know the real height at placement time, you’ll end up with overlapped blocks.
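That placement step can be sketched in a few lines of plain JavaScript (an illustration of the algorithm described above, not the plugin’s actual source):

```javascript
// Each brick goes into the currently shortest column; placing it grows
// that column by the brick's height. If a height is wrong at placement
// time, every later brick in that column lands in the wrong spot.
function layout(brickHeights, columnCount) {
  var columns = [];
  for (var i = 0; i < columnCount; i++) columns.push(0);
  return brickHeights.map(function (height) {
    var shortest = columns.indexOf(Math.min.apply(null, columns));
    var top = columns[shortest];
    columns[shortest] += height;
    return { column: shortest, top: top };
  });
}
```

With heights [10, 20, 10] and two columns, the third brick stacks under the first at top 10 – but only if 10 was the brick’s real height.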

Given this knowledge, the simplest way to overcome this is to know the width and height of the images before loading. If you know how tall an image will be, you can set its dimensions prior to the image loading, which adequately avoids our overlapping problem and lets us lay out immediately on domready. Implementing this was great, but there were times we were still getting overlapping boxes.
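The height math behind that is simple; a hedged sketch, assuming the server stores each photo’s intrinsic size (the column width and caption height here are made-up illustration values, not our real ones):

```javascript
// The image scales to the fixed column width, preserving aspect ratio;
// adding the (known) caption height gives the brick's final height.
function brickHeight(columnWidth, naturalWidth, naturalHeight, captionHeight) {
  return Math.round(columnWidth * naturalHeight / naturalWidth) + captionHeight;
}
```

Writing this height (and the scaled width) onto the img element up front means the brick never changes size when the pixels finally arrive.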

I was able to faithfully reproduce the issue by turning off the cache in my browser, so then it was a matter of logging and breakpoints to determine the cause: font-face. While a remote font is loading, the text does not render, and the empty area that serves as a placeholder for the text isn’t the eventual correct height. The first instinct is to point at line-height, but variable character width/kerning can easily change the number of lines the text will use, and that information is only available once the font has loaded. In my experimenting in Firefox, I wasn’t able to manipulate the placeholder area by changing the font stack. I was hoping it was rendering the text with the fallback font and just hiding it from view. That would have let me choose a fallback that closely matches the spacing characteristics, but simply changing the stack didn’t seem to have much effect on the size of the placeholder white space.

So we’re back at the masonry mechanics, where we can safely lay out the bricks when we know the height of all of them. Unlike the images, waiting on a single font doesn’t seem so bad. We can just use Typekit to load the font, which lets us subscribe to a loaded event where we can trigger layout. What I found, though, is that font-face is prioritized differently than other assets. Font-face, as a presentational element, gets put at the end of the queue of assets to load over the network. This means it comes after all those user-submitted images we so elegantly sidestepped earlier. I did read that browsers will not download the font until they encounter a CSS rule that uses it, but even trying to force it using hacks from Paul Irish’s excellent @font-face write-up didn’t change the ordering. [todo when not on my iPod: take screenshots of waterfalls]
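The pattern we were after looks roughly like this: fire layout on the font loader’s callback, with a timeout as a safety valve so a slow font CDN can’t hold the page hostage. The callback shape is an assumption (Typekit and the Web Font Loader expose an “active” event along these lines), and the loader and timer are passed in here just to keep the sketch self-contained:

```javascript
// Run layout exactly once: either when the font reports ready or when
// the fallback timer fires, whichever happens first.
function layoutWhenFontReady(subscribeToFont, doLayout, timeoutMs, setTimer) {
  var done = false;
  function once() {
    if (!done) {
      done = true;
      doLayout();
    }
  }
  subscribeToFont(once);     // e.g. WebFont.load({ active: once, ... })
  setTimer(once, timeoutMs); // safety valve if the font never arrives
}
```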

So in the end, we had to ensure our fonts in the masonry layout were local/synchronously loaded or bypass the load order by only loading images after the font had loaded. The latter strategy did sound like a fun experiment but I’m afraid it’s too brittle for too small a gain, so we chose new fonts. Not the happiest of endings but we were able to avoid waiting for all the images to load prior to showing the page.

Posted in dev

Observations on a car ride

During jsconf2012, Brian Ford urged everyone to go read Thinking, Fast and Slow. I had a 6-hour car ride ahead of me, so I downloaded the audiobook and listened to about the first third (it’s about 20 hours). So far it’s been incredibly interesting, especially since I have a thing for the way we think, but the most fun I’ve had so far has been discussing it with Kathryn the next day in the car.

The basics of the book detail the operations of two styles of thought, how they interact, and when they step in to solve problems. Go read the book to learn about it, but for this discussion’s purposes what you need to know is that “system 1” is intuitive thinking and can be kind of a jerk. “Go to hell” is an emotional outburst that comes from system 1. System 1 is also the system we spend most of our time in, as we drive a car, solve simple math problems, and carry on basic conversation. This is because system 2 is responsible for complex/rational thought that requires extra energy (yes, you literally pay attention with glucose) and thus is as lazy as it can be. Finally, system 2 will step in to override system 1 at times, but it can’t step in when it’s already busy doing something else.

So that’s the setup; now for the observations. One of the first observations Kathryn made was that this explains why so many fights occur in the car. When having discussions, you often engage system 2, but when traffic conditions worsen, priority shifts over to that task, leaving system 1 free rein over the conversation without proper self-control.

The second observation is the explanation of a phenomenon we have described as “this sucks – you’re my enemy.” What we mean is that when some crisis occurs, you are much more likely to turn on the other person as if the crisis is their fault. Note that most of the time, the turning isn’t directly over the crisis but instead over something petty like tone of voice or some super-minor inconvenience. This now makes much more sense: in a moment of crisis, your rational self is more likely to be concentrating on solving the crisis than on stopping system 1 from being a jerk.

My lone observation was more of a connection and a question, but I am really curious to know. At some point in the past year, we were listening to the excellent radio program Radiolab, and they had an episode themed around loops (http://www.radiolab.org/2011/oct/04/). During this episode, they talked with a woman who at one point went into a pattern where she would ask what day it was, note her disappointment at missing her birthday, and proceed to hold the exact same conversation until she basically reset and asked what day it was again. The segment is particularly eerie because, as long as her daughter gave her the same answers, the conversation continued identically, as if underneath it all we are but simple machines responding to inputs. What I was curious about is this: knowing system 1 is oftentimes in control of simple conversation, if for some reason system 2 were inhibited, would we all go into a loop like this? Is it merely system 2 stepping in to say “we’ve already asked this question”?

Posted in Uncategorized

jsconf followup

I am consistently delayed in writing the things I want to write, so it should come as no surprise when I give my post-jsconf reflection over a month after the event itself. That said, I really wanted to get these thoughts down, so I have decided to opt for better late than never.

jsconf.us 2011 was held in the Portland Art Museum in Portland, OR. The venue itself was simply amazing. The food was probably the best I’ve had at a conference, and the after parties were a blast as well. My wife accompanied me to the conference thanks to the allure of Portland and the promise of a fully-sponsored significant-others track (thanks to Jupiter consulting), and she absolutely loved the whole thing. I’ve been trying to get her to understand what about the JavaScript community resonates with me so clearly (the unicorn and pirate allusions apparently were unconvincing), but I think she gets it now. JavaScript has a community that is as irreverent and fun-loving as it is technically excellent and engaged, and that is special and worth celebrating.

These are points that I fully understood going into the conference, but it was actually at the conference, at Andrew Dupont’s “Everything is permitted” talk, where I had my epiphany and the last piece fell into place explaining why I love JavaScript. I realized that community is somewhat of a language construct.

For context: when I graduated college, I immediately jumped into .NET programming on a WinForms application. I’ve been working the same job ever since, though the work has shifted for me from installed applications toward the web. As a developer on the Windows stack, I have enjoyed an amazing IDE in Visual Studio, where I can rename functions/variables throughout my application without batting an eye. I have benefited from an amazing and passionate local .NET community in Indianapolis that has inspired, mentored, and pushed me in my development. Still, in C#/VB, the code I wrote in my WinForms app could at worst make my little module worse, so if I wrote bad code, that was on me and me alone.

JavaScript, on the other hand, is a dynamic language allowing extension of the building blocks of our code in a way that can completely change everyone’s code in your application in addition to your module.  Because of this flexibility and power, the community has stepped up to provide clear best practices and rules to guide developers towards a more successful experience.  This community involvement, though, is new to me.  In the statically-typed world, the compiler tells me I’m not allowed to rewrite String, that I must disambiguate if I’m importing classes with the same name, and if I’m using my own version of some system-provided classes, the linker will associate my module with my special classes and other modules with the system classes.
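A tiny illustration of that power (and danger), using a deliberately silly patch of a built-in:

```javascript
// Patching String.prototype changes behavior for every module on the
// page, not just the one doing the patching.
var originalTrim = String.prototype.trim;
String.prototype.trim = function () { return "surprise"; };

// Some other module, far away in the codebase, now sees the patch:
var elsewhere = "  hello  ".trim();

String.prototype.trim = originalTrim; // be polite: put it back
```

In C#, the compiler simply refuses to let you rewrite String; in JavaScript, only community norms stand between you and everyone else’s code.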

Normally, when you see that kind of compiler talk in the same post as “c#” and “js,” it’s in a rant, but I love it. I think it’s beautiful that the flexibility of the language requires you to be mindful of those you will affect. This is where Zakas’s famous “don’t touch what you don’t own” rule came into contention, as Andrew argued we should frame the community mindset in terms of socially polite and impolite actions instead of morally right and wrong actions. These objects are shared, and social norms have proven to be effective ways to manage shared property.

So there I sat, mind blown by this context switch when Mary Rose Cook got up to talk.  Her talk began as a deep dive into the world of a game developer and ended as a beautiful story about code as art.  Normally, when we engineers take on a project, we start by researching similar implementations and best practices, but she wanted to approach this personal project through a more intimate process of discovery.  She even likened it to lovers discovering how to make love- a metaphor I found remarkably beautiful.  From that point, she shared an anecdote where the interaction between her code and her life moved her to write a song.  She also shared how she solved a hard problem in her game while partaking in this particular drink she loves and how now that drink is forever tied to that piece of her code and vice versa.  It was a story of totally putting yourself into your work and discovering your work in yourself.  Ahh, it was just amazing and you should watch out for the video when it’s posted.

The next talk I deemed important enough to live-tweet was Rebecca Murphey’s Modern JavaScript talk. This talk was about the modern JavaScript community and how we need to proceed to ensure a brighter tomorrow. I think the biggest aha moment for me was when she noted that there is more social reward for building something new than there is for improving what’s already there. This was further emphasized by the fact that her previous point was about the need for community outreach to show everyone the way. No one likes to write documentation, so there are many challenges facing the community. It was interesting, also, to see rewardjs be announced not long after this talk.

The final talk that I really loved was Brendan Eich and Jeremy Ashkenas on js.next, CoffeeScript, and prototyping language features. I’ve often thought that CoffeeScript was a language written so that Ruby people wouldn’t feel like they were writing JavaScript, but this talk helped me understand that’s not the intention at all (I do know a good deal of Rubyists who feel that way, but this is beside the point). CoffeeScript didn’t just take JavaScript and remove the {}s and ;s but actually created a powerful language that builds on the idioms of JavaScript. Personally, I had never heard about expression assignment in CoffeeScript, which is a really elegant extension of the “everything is an object” simplicity in js. But the talk was even more interesting as it turned from “look what I did” to “look what we should do.”

JavaScript is a very flexible language, but its growth has been stunted by the need to maintain syntax compatibility with older browsers. CoffeeScript found a very interesting way around the syntax problem by “transpiling” into JavaScript, and this gets really exciting when you realize it means there is a path forward for gaining new, more powerful syntax without limiting the reach of your website. In fact, it was immediately after this talk that Google announced their own transpiler, Traceur, and explained its usage. Couple this with Mozilla’s Narcissus and Ashkenas’s call to build your own JavaScript, and the potential for cowpath paving seems extraordinary. I really enjoyed the comment that “JavaScript is too important to be left to the experts.”

And then the closing party came. We were told we were supposed to try Voodoo doughnuts while we were in Portland, so I was amused to learn they were being featured at the party. We had to get up before 5am the next morning to catch our flight, so we didn’t shut the party down. Still, when Laura was walking around and asked Kathryn and me to get the dancing going, we delivered, as we always do. So that’s the quick wrap-up. There were many other interesting frameworks released, tools discussed, and people met as well. It’s such a great conference, and I am so glad I was able to go this year.

Posted in dev, javascript, jsconf

How to use the new jQuery.sub to protect yourself from monkey-patching plugins

You may have heard that jQuery 1.5 has a rewrite of the ajax module, making it easier to extend the built-in functionality. You may also have heard that the change caused an issue when using the jQuery.validate plugin. The issue is that, when using the validate plugin, many ajax requests stop working properly (they try to go over JSONP). Ironically enough, this is because the validate plugin needed to extend the functionality of jQuery’s ajax module, but it did so by monkey-patching the ajax method, making several assumptions about the API along the way. On down the line (jQuery 1.5, for example), those assumptions prove incorrect, something changes, and your code is broken. It’s a pretty good case study for those in the “Don’t modify objects you don’t own” camp.

If you want to use both 1.5 and validate, fear not: jaubourg (author of the ajax rewrite) has forked the validate plugin, adding the minor change needed to not break ajax. This fork uses feature detection, so it works against 1.5 as well as 1.4, and you can just use it as your master plugin. In my opinion, getting the corrected source is the best way to handle this conflict. That said, there’s another feature in jQuery 1.5 called “sub” that I thought would be interesting to explore, and in this post I will describe how to use it to work around the existing broken jQuery validate plugin.

jQuery.sub is a method designed to allow naming flexibility in your APIs without risking naming collisions and to provide a means to safely monkey-patch jQuery. Now, you may be thinking I could rewrite validate to use this new method, minimizing the changes to validate’s code. I find that less interesting, so I didn’t do it. Let’s say, though, that you have a strict requirement that you must use the jQuery.validate plugin off the Microsoft CDN and there’s absolutely nothing you can do about that requirement. You must use the old code. Well, you can actually do this safely using jQuery.sub and a little fancy footwork.

What you do is use sub to create an evil twin for jQuery and pass that twin to jQuery validate.

<script type="text/javascript">
(function () {
   // swap the global jQuery for a sub()'d twin before validate loads;
   // note we must go through window, or a local var would shadow the global
   var old = window.jQuery;
   window.jQuery = old.sub();
   window.jQuery.oldjQuery = old;
 }());
 </script>
 <script src="http://ajax.aspnetcdn.com/ajax/jquery.validate/1.7/jquery.validate.js" type="text/javascript" ></script>
<script type="text/javascript">
 //move validate over and switch back
 jQuery.oldjQuery.fn.validate = jQuery.fn.validate;
 jQuery = jQuery.oldjQuery;
</script>

Then, after validate has been added, we steal the validate method off our twin (in the code, there’s actually a superclass property that would do all the oldjQuery storing for me, but it’s undocumented) and switch in the real jQuery. Essentially, this is the same thing as rewearing a dirty t-shirt by flipping it inside out. It’s “okay” as long as the dirty outside of the shirt doesn’t touch your clean body (Did I mention jaubourg’s fork?). Validate was only able to monkey-patch our evil twin, so our non-validate code is in the clear. Additionally, validate will always access the evil twin via a closure, so it will continue to work as expected.

Validate, like pretty much every jQuery plugin, passes in a reference to jQuery and references that object in the plugin.

 (function ($) {
    //internals go here...
 }(jQuery));

This value of “jQuery” is frozen in time because it is a local value passed into the method.  If the author had instead chosen to access the global jQuery, this hack would not have worked.   Because it is a method parameter, though, all the internal code will point to that value of jQuery via the various closures in the validate code.
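The same freezing effect in miniature (generic objects standing in for jQuery and its twin):

```javascript
// The parameter $ captures whatever value the outer variable held when
// the IIFE ran; reassigning the outer variable later changes nothing
// inside the closure.
var jQueryLike = { name: "twin" };
var plugin = (function ($) {
  return function () { return $.name; };
}(jQueryLike));

jQueryLike = { name: "real" }; // switch the outer variable back...
```

...and plugin() still answers “twin”, just as validate keeps talking to the evil twin after we restore the real jQuery.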

Now, there is still one remaining issue in the code as remote ajax validation (such as “username must be unique”) is still broken.  The problem is that validate assumes jQuery.ajaxSettings contains all the default values for jQuery.ajax.  This isn’t the case, though, so it ends up accidentally passing in information that indicates it wants to use JSONP.  To work around this, we can just change the jQuery.ajaxSettings of our evil twin so that it satisfies the expectations of validate.

<script type="text/javascript">
 (function () {
   // snapshot the real defaults and strip the jsonp hints validate trips on
   var old = window.jQuery,
       ajaxDefaults = old.extend({}, old.ajaxSettings);
   delete ajaxDefaults.jsonp;
   delete ajaxDefaults.jsonpCallback;
   window.jQuery = old.sub();
   window.jQuery.oldjQuery = old;
   // set up defaults with no jsonp
   window.jQuery.ajaxSettings = ajaxDefaults;
 }());
 </script>
/*...*/

The important thing to remember is that jQuery.sub uses mixins to create its subclasses instead of using prototypal inheritance. This means that validate is passed the same ajax method that exists in real jQuery. To put it another way, both jQuery.hasOwnProperty("ajax") and evilTwinjQuery.hasOwnProperty("ajax") are true, and jQuery.ajax === evilTwinjQuery.ajax. It also means that if validate were to add extensibility points, such as CSS hooks, it would add those points to the real jQuery as well (so maybe the evil twin metaphor isn’t so apt). What happens after the monkey patching, though, is that evilTwinjQuery has a new ajax method that then calls the real jQuery.ajax method. It also has a different ajaxSettings that it uses when calling the real jQuery.ajax.
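A simplified sketch of the mixin behavior (not jQuery’s actual sub implementation, which also wires up fn and init; this just shows the ownership consequence):

```javascript
// Mixin-style subclassing copies property references onto the new object,
// so parent and child each *own* the same function, unlike prototypal
// inheritance where the child merely delegates lookups to the parent.
function mixinSub(parent) {
  var child = {};
  for (var key in parent) {
    if (parent.hasOwnProperty(key)) {
      child[key] = parent[key];
    }
  }
  return child;
}

var base = { ajax: function () { return "real ajax"; } };
var twin = mixinSub(base);
// twin.hasOwnProperty("ajax") is true, and twin.ajax === base.ajax
```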

Finally, in the case of validate, it’s not a terribly big deal to use this hack, as validate returns a non-chainable object specific to validate’s API. Had it returned the jQuery object from the method, you’d need to be more aware of when you were expecting to talk to jQuery or evilTwinjQuery. Either way, this thought experiment gave me a better idea of how jQuery.sub works and what it can and cannot do to insulate you from plugins. The final script is below, and you can also see the code with my test “services” in this gist:

<script type="text/javascript">
 (function () {
   // snapshot the real defaults and strip the jsonp hints validate trips on
   var old = window.jQuery,
       ajaxDefaults = old.extend({}, old.ajaxSettings);
   delete ajaxDefaults.jsonp;
   delete ajaxDefaults.jsonpCallback;
   window.jQuery = old.sub();
   window.jQuery.oldjQuery = old;
   // set up defaults with no jsonp
   window.jQuery.ajaxSettings = ajaxDefaults;
 }());
 </script>
 <script src="http://ajax.aspnetcdn.com/ajax/jquery.validate/1.7/jquery.validate.js" type="text/javascript" ></script>
 <script type="text/javascript">
 //move validate over and switch back
 jQuery.oldjQuery.fn.validate = jQuery.fn.validate;
 jQuery = jQuery.oldjQuery;
 </script>
Posted in dev, jQuery