Look, I’ve been a faithful Mac user for over a decade, having been a PC/Windows user leading up to that point. No, I didn’t switch because I hated Windows. I’m not one to attach religious zeal to the computers I use. A company I joined happened to be a Mac shop, so I started using Macs.

Admittedly, Macs at the time were much easier to use than Windows. Windows was easy for me because I knew all the shortcuts. BUT the big issue with Windows at the time was that many things required manual intervention. Take software installation, for example. With Windows, it was a 50-50 proposition that the right driver would be installed automatically; the rest of the time you’d end up having to download the correct driver from the manufacturer’s site. With the Mac, when you’d install software, it just installed. Any required driver was downloaded behind the scenes!

To say I was relieved not to have to think about drivers and the underlying system was an understatement. Some people might argue that if there are problems with the Mac, many things aren’t as accessible. Hogwash. You can open up Terminal and type in some commands, or you can reboot with certain key combinations, and many times the problems would be resolved. It was much like troubleshooting from the Command Prompt in Windows.

Usability-wise, at the time, the Mac definitely had an edge. Things were just so much easier to get at, and the UI gave you the flexibility to put things where you wanted them and arrange them the way you wanted. It took me a little time to adjust because I was so used to the hierarchical file system of Windows.

But Windows started to evolve… Over the years, I began to see a convergence – especially with respect to usability – between the two systems. It wasn’t enough to make me switch back to Windows, but I had to admit that I was pleasantly surprised by Microsoft’s movement toward better ease of use. Enter Windows 10 and the Surface Pro…

In my job, I work exclusively in the Web UI, building applications meant to be displayed in the browser. A few months ago, the question came up about our application’s performance in Windows, and I rather sheepishly said that we didn’t test on Windows as we didn’t have any Windows machines in-house. So I was given permission to get a Windows notebook, and after doing a bit of research, decided upon the new Surface Pro 4. I got the mid-range model with 8GB RAM and 128GB storage. For what I do, that’s plenty, and I wasn’t going to fill up the machine with pictures, as this was to be a work machine.

To make a long story short, after using the machine for the last few months, I think I’m going to switch back over to Windows. In the ways most meaningful to me, Windows is now just as easy to use as the Mac. I have the exact same desktop tools on Windows that I have on the Mac, and since all the services I use, such as JIRA, GitHub and JSFiddle, are Web-based, there’s no difference.

Now, all things being equal, if I were doing a head-to-head comparison between OSX and Windows 10, I’d probably not even consider making the switch. But with Windows, Microsoft has finally created an ecosystem of both hardware and software where the two fit together almost seamlessly, much like we see with the Mac. Of course, it’s still Windows sitting on top of a machine, BUT it feels much more like a marriage as opposed to a simple pairing.

Still, that seamlessness wasn’t enough to compel me to make the move, because again, it just meant parity with the Mac. What pushed me over the top was the touchscreen. Being able to click on links or buttons and scroll windows by dragging directly on the screen are HUGE improvements in usability. While browsing, swiping back and forth between web pages makes the experience so much more enjoyable – you can’t even do that on the iPad! And the screen resolution simply rocks the house! Even my brother, who is a Mac addict, commented on how good my screen looked with the pictures he had taken on a recent weekend get-together.

With the stylus and detachable keyboard, the conversion to a tablet is incredible, with the added advantage of that tablet being a full-blown computing device, with all the performance that you’d expect. Mind you, I love my iPad 2, but to me it’s more of an entertainment device that I keep by my bed, as opposed to a serious computing device.

And though it may seem like a small feature, Cortana rocks! Yeah, yeah, Apple folks will mention Siri – but where is Siri on the Mac? I use Cortana – A LOT – and she works great! I have her open apps, search for things – much like one would use Siri. It’s clear that Cortana needs to mature a bit more, but to have a speech interface on my computer is a real boon to my productivity.

And lest I forget about a HUGE feature: the weight of my Surface Pro 4, or should I say the lack thereof, is so nice. It’s hard to believe that I’m carrying a full-blown computing device that weighs as little as a tablet.

This brings me to the question I asked in the title of this article: How Did Apple Miss This? It’s clear that Microsoft saw that there was an opportunity to converge the notebook and the tablet and even features found on smart phones. Admittedly, that process of convergence wasn’t as smooth as it could have been, according to many reviews I read leading up to my purchase. But for Microsoft – who has traditionally been likened to a sloth with respect to innovation – to have executed on this innovation ahead of Apple is simply amazing.

It does seem apparent that the absence of Steve Jobs may have a lot to do with this lack of innovation agility. BUT, one would think that Jony Ive would’ve seen this coming. Maybe he did… who knows? But the plain fact of the matter is that if and when Apple does come out with a new notebook that has features comparable to the Surface Pro, for the first time in decades, they will be the ones who are late to the party.

In a way, I feel that it kind of serves them right. Since the passing of Steve Jobs, Apple’s well-known arrogance about its history of innovation seems to have let it rest on its laurels, and let’s face it: Apple hasn’t come out with anything groundbreaking in the last few years. Though they’ve continued to produce new models of the iPhone, iPad, and the upcoming MacBook, and have made improvements to OSX, these changes have been much more evolutionary and, frankly, pedestrian.

So kudos to Microsoft for recognizing the opportunity to converge computing platforms, and more importantly, executing on it in such an elegant way!

One of the most frustrating things I ran across when I first started using Backbone.js was the way in which model updates were sent to the server. The documentation includes the following:

“The attributes hash (as in set) should contain the attributes you’d like to change — keys that aren’t mentioned won’t be altered — but, a complete representation of the resource will be sent to the server.”

When I read that, my heart sank, because POST or PUT operations to our API only allowed certain fields to be passed, as many attributes were immutable. I discovered that I wasn’t alone in this particular quandary, as a search on doing selective updates with Backbone.js revealed many people wanting to know how to do the same thing.

I tried several things, but most had to do with changing the Backbone.js source code, something that I really didn’t want to do. But taking a deep dive into Backbone’s source, I discovered a couple of interesting blocks of code in Backbone.sync that ignited an idea. The first block was simply this:

    if (!options.data && model && (method == 'create' || method == 'update')) {
      params.contentType = 'application/json';
      params.data = JSON.stringify(model.toJSON());
    }

Basically, the conditional checks for the existence of options.data. If it exists, the code within the conditional does not execute, meaning that the model is not copied to params.data. Then I got really excited when I saw the last line of code in Backbone.sync:

    return $.ajax(_.extend(params, options));

In that line, options is mixed into params! That meant that I could define my options.data outside of Backbone.sync and pass in only the fields that I wanted to post!
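Just to illustrate the idea – this assumes an existing model instance and a made-up name attribute – you could pass your own data through the options hash without any override at all:

    //Because options is mixed into params, options.data wins and
    //Backbone.sync never stringifies the entire model.
    model.save(null, {
        data        : JSON.stringify({ name : 'New Name' }),
        contentType : 'application/json'
    });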

I won’t go into all the niggling details behind coming up with a solution, but suffice it to say that I found that the best thing to do was to make a descendant of Backbone.Model and override the save method. The following override will allow you to save only the fields you want to save. It will also handle the case where you do a model.set({fields}) and then just call a raw model.save(), as the parent save is invoked via a call at the end of the method. Here’s the override:

    save : function(key, value, options) {

        var attributes = {}, opts = {};

        //Need to use the same conditional that Backbone uses in its
        //default save so that attributes and options are properly
        //passed on to the prototype. opts defaults to an empty hash
        //so a bare save(attrs) call still gets the selective treatment.
        if (_.isObject(key) || key == null) {
            attributes = key;
            opts = value || {};
        } else {
            attributes = {};
            attributes[key] = value;
            opts = options || {};
        }

        //Since Backbone will post all the fields at once, we need a
        //way to post only the fields we want. We can do this by passing
        //a hash of attributes in the "key" position of the args. That
        //hash is stringified into opts.data. Backbone.sync evaluates
        //options.data and, if it exists, uses it instead of the entire
        //model JSON.
        if (attributes) {
            opts.data = JSON.stringify(attributes);
            opts.contentType = "application/json";
        }

        //Finally, make a call to the default save now that we've
        //got all the details worked out.
        return Backbone.Model.prototype.save.call(this, attributes, opts);
    }

The beauty of this is that it doesn’t require the alteration of any of the Backbone.js source code.
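For illustration, here’s a rough usage sketch – SelectiveModel, Account, the urlRoot, and the attribute names are all hypothetical:

    //A hypothetical descendant that carries the save override above
    //(the override body is elided here for brevity).
    var SelectiveModel = Backbone.Model.extend({
        save : function(key, value, options) {
            //...the override shown above goes here...
        }
    });

    var Account = SelectiveModel.extend({ urlRoot : '/api/accounts' });

    var account = new Account({ id : 42, name : 'Old Name', createdBy : 'system' });

    //Only {"name":"New Name"} goes over the wire; immutable attributes
    //such as createdBy never appear in the request body.
    account.save({ name : 'New Name' });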

Whenever I get involved in conversations revolving around classical inheritance in JavaScript, I usually have a couple of comments:

  1. Geez! Man up and learn the f-in’ language!
  2. JavaScript is a class-less language, why would you want to do classical inheritance?!

Comment 2 is usually followed by a variant of Comment 1.

Earlier this year, I wrote a short quip on not buying into the Parasitic Inheritance trend. Prior to landing my current gig, I had interviewed with several companies that were employing it in their applications. The things they were doing were very cool, but the reason almost all of them stated for using it was that they didn’t have to use “new.” I brought up the issue that in order to create a descendant, you had to build an entire copy of the ancestor and then tack on the new methods, whereas natively it’s a simple graft of new methods onto the ancestor’s prototype. It fell on deaf ears as they justified what they were doing by citing Crockford’s work on classical inheritance in JavaScript.
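To make the copy-versus-graft point concrete, here’s a rough sketch of the two styles – the animal/dog example is made up, and the parasitic version shown is just one common variant of that pattern:

    //Parasitic style: every descendant builds a whole new ancestor
    //object and tacks its own methods onto that copy.
    function animal(name) {
        var that = {};
        that.getName = function() { return name; };
        return that;
    }

    function dog(name) {
        var that = animal(name);
        that.bark = function() { return name + ' says woof'; };
        return that;
    }

    //Native prototypal style: new methods are simply grafted onto the
    //prototype chain; nothing is re-created per instance.
    function Animal(name) { this.name = name; }
    Animal.prototype.getName = function() { return this.name; };

    function Dog(name) { Animal.call(this, name); }
    Dog.prototype = Object.create(Animal.prototype);
    Dog.prototype.bark = function() { return this.name + ' says woof'; };

    var rex = new Dog('Rex');
    rex.getName();   //'Rex' - resolved via the prototype chain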

But I don’t think they read the very end of the article, in which he states – in a bordered box no less – that his attempts to support classical inheritance in JavaScript were a mistake. Here’s the text:

I have been writing JavaScript for 8 years now, and I have never once found need to use an uber function. The super idea is fairly important in the classical pattern, but it appears to be unnecessary in the prototypal and functional patterns. I now see my early attempts to support the classical model in JavaScript as a mistake.

The challenge with JavaScript is that it is a wide open language; you can do just about anything with it. But just because you can do something doesn’t mean that you should…

Yeah, I’ve heard all the complaints over the years – coming especially from former Java programmers. But I just keep on going back to the two comments I made above, especially the first comment. My belief is that if you’re programming in a particular language, learn the f-in’ language. Don’t try to place a different model or pattern on the language just because you’re used to programming in another language. To me, doing that is the ultimate in bullshit hubris. JavaScript is a powerful language, though admittedly it does have its shortcomings, but if you take the time to learn the mechanics of the language, there’s not much you can’t do with it.

I will admit that I don’t have enough natural talent at coding that I can just sit down and code an entire application off the top of my head. So I use UML (and in the past the Booch method) to engineer my applications before I write one line of code. Design has always helped me get most of my issues worked out before I code, so that when I’m ready to develop, it’s all about execution; rarely do I have to stop to second-guess what I’m doing because I’ve already worked it all out in my head and, more importantly, created a map for myself to help guide my development. This practice is something I just learned on my own after failing so many times. Call me a little anal-retentive about going through this process, but I’ve had nothing but success developing applications in this fashion.

It used to bother me that for the most part, I’d be the only one designing my software before I actually built it. Hell! Everyone around me took their specs or mockups, sat down and churned out code! It used to make me uncomfortable. But it no longer makes me uncomfortable because those very same people are the ones who spend lots of time paying off technical debt. Now I think that if they just invested even a little time working out their design before they code, they wouldn’t have to spend so much time grinding their apps or components into submission. But that’s how it is with most of the software development world.

Getting to the crux of this article, almost every engineer I’ve spoken with regarding the virtues of design agrees that it’s valuable – and I’ve spoken with hundreds on this subject. But only a handful have actually adopted the process of designing before you build. That doesn’t hurt my feelings though, because I figure if I can reach just a few, they’ll hopefully teach others. And those whom I have taken under my wing to teach the process have gone on to be some of the finest software engineers in the industry, sought after by many companies. The software they produce is bullet-proof, and it all started with design.

Despite the agreement that software design is a valuable step in the development process, I’ve found that that agreement is ultimately lip service. I think many developers see it as a time sink that will take away valuable development time. I can’t tell you how many times I’ve heard something like this: “I have a deadline and can’t take the time to learn this.” But I always contend that if you start simple, the time investment is minimal. For instance, I always instruct people to start with a simple class diagram. That way they can identify the objects they’ll have to build. Doing that one thing can solve so many issues that stem from not knowing who the actors in your play are. Then, if they’re ambitious, they can move on to drawing sequence diagrams for the critical object interactions. And for the most part, unless you need use-case or state diagrams, you really only need class and sequence diagrams. In the end, it’s not much of an investment in time.

Admittedly, as with any new thing to learn, velocity will be slower. But as you get better at it, you’ll be faster. And I have found, as have all who have adopted this practice, that not only does design get faster, but development gets faster, and even more importantly, the time spent paying off technical debt is greatly reduced. For instance, I know of a guy who has worked and re-worked the same damn component for 6 months! If he had only taken the time to sit down and work out a design, he could’ve been done in two weeks – maybe even sooner. But he’d release his work, find that it was missing some key features, then rework his code. Talk about wasting time!

I think what shocks a lot of developers I speak with is when I tell them the proportion of time I spend on various tasks when I’m developing. I spend about 5% on requirements review, 80% on design, 10% on development, and 5% on debugging and technical debt. Those are fairly loose averages, but I’ve measured them over time. Before I became a practitioner of design, those numbers were more like 5% on requirements review, 10% on design, 50% on development, and 35% on debugging and paying off technical debt. What’s the implication here? Look at the debugging and technical debt numbers. With a good design, you just don’t make that many mistakes. Most of my bug tickets tend to be cosmetic in nature. I don’t get too many functional error tickets; again, that’s due to having done a thorough design. Also, with the change in proportion, my overall development time, including all the steps, has been reduced by 30-40%. What used to take me several days to finish now only takes a day or two, or even a few hours! But despite sharing those numbers, and people getting fired up about doing design, most simply don’t execute. It’s really a shame.

Eventually people will learn. But in the meantime, there’s going to be a lot of crappy code out there…

Seems to me that there are lots of developers out there who have taken up the MVC banner and charged forth into battle, silently crying out, “MVC! MVC! All apps should be MVC!” Or maybe it’s just me that did that… :) Regardless, over the last few years, lots of people have adopted the MVC design pattern as their pattern of choice. But from the code I’ve seen, especially in the JavaScript world, most of it is plain shit as far as following the pattern is concerned.

The most egregious gaffes involve confusing object roles in an MVC system. I don’t know how many times I’ve seen objects that cross the Controller-Model or Model-View or View-Controller boundaries, intermingling roles within a single object. Other things I’ve seen are people defining objects to fill a particular role, then doing all communication via two-way direct references between objects. Or worse yet, I recently saw some code where the developer was having a controller make his view trigger an event to which the same view was the only listener! Yikes!

One could just call it stupidity on the part of the developers, but I’ll be more forgiving and say that it all boils down to simple ignorance of how the objects in an MVC system – and more importantly, how they communicate – should work, or just plain ignorance of what MVC is all about.

The Model-View-Controller paradigm at its most pure is quite simple: it’s all about separation of concerns, consigning “concerns” to three specific classes of objects – Model, View and Controller objects. Each class has a specific purpose and role. In this installment, I’m going to talk about each of these classes and how they should be used to build successful MVC applications.

Model

A Model is an application’s representation of the underlying data set; in other words, it’s data. It has accessor methods in the form of getters and setters to manipulate the data. It does not – or should not – contain business logic. Unfortunately, some third-party libraries such as Backbone.js muddy the waters a bit by adding things like the “validate” method to allow data validation within the context of the model. I’ve always found this to be just plain wrong. While you certainly can put logic in the model, by virtue of JavaScript not having any barriers to doing so, muddying the model’s role in this respect does, I believe, more harm than good.

To me validation is really the job of the controller, which should be the one that “owns” the business logic. However, I’m also in favor of placing validation logic in the view to encapsulate it and remove some of the burden from the controller, since validation is really view-specific.

A model’s role is simply to store runtime data. Consumers of the model can get or set attributes on the model. But when that data changes, the model is only responsible to notify listeners that it has changed. To me, that’s it.
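Here’s a minimal sketch of that idea – a hypothetical, bare-bones model deliberately not tied to any particular library:

    //A bare-bones model: it stores data, exposes getters/setters,
    //and notifies listeners when data changes. No business logic.
    function Model(data) {
        this.attributes = data || {};
        this.listeners = [];
    }

    Model.prototype.get = function(key) {
        return this.attributes[key];
    };

    Model.prototype.set = function(key, value) {
        this.attributes[key] = value;
        //The model's only other job: tell listeners it changed.
        this.listeners.forEach(function(listener) {
            listener(key, value);
        });
    };

    Model.prototype.onChange = function(listener) {
        this.listeners.push(listener);
    };

Whether you hand-roll it like this or lean on a library, the point is the same: the model stores data and announces changes; everything else lives elsewhere.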

View

To me, there are two types of Views: Smart Views and Dumb Views. Smart Views have a bit of logic in them, namely input and output validation logic that makes them “smart” per se, but as the endpoint objects – that is, the objects that clients actually “see” – they should never contain core business logic. Actually, Smart Views are simply Dumb Views with some validation. Some have argued that a Smart View could also be one that includes Model functionality, but I don’t subscribe to that at all. I think object roles should be distinct, and Views are responsible for presentation of the data contained in their associated model. Period. As for Dumb Views, they simply exist to display model data (and update the model if they’re used for input) and to update themselves when the model changes. Pretty straightforward.

But with any view, I’ve found that one rule of thumb has saved me countless hours of anguish, and it is simply this: a View knows about its DOM and its DOM only. It knows about no other View’s DOM. This is extremely important to consider, especially if you’re using Backbone.js, whose views are normally attached to existing HTML elements rather than blitting out their own HTML. When you create a Backbone view and assign its “el,” you have to make sure that no other view – as you can potentially have several views on a page – can manipulate the DOM represented by that “el.”

Admittedly, when I was new to Backbone, coming from a system I helped create whose views all contained their own HTML, I broke this rule because hey! jQuery lets you get the reference to any HTML element so long as it’s in the DOM tree. But I ran into a bit of trouble when I had multiple views on a page accessing the same region of the page and using the same el. It was a nightmare to try to maintain. I no longer do that, as I’ve learned that lesson, but mark my words: Your view should only be responsible for a distinct “el.”
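As a quick illustration of what respecting that rule looks like in a Backbone view – the class name, selectors, and attribute here are hypothetical – every DOM lookup stays scoped to the view’s own “el”:

    var StatusView = Backbone.View.extend({
        el : '#status-panel',

        initialize : function() {
            //Re-render whenever the model changes.
            this.model.on('change', this.render, this);
        },

        render : function() {
            //this.$() scopes the lookup to this view's own "el";
            //a global $() could reach into another view's DOM.
            this.$('.message').text(this.model.get('message'));
            return this;
        }
    });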

Controller

Especially in JavaScript, a Controller is almost superfluous, as it is mostly used as an instantiation device for models and views. But it can carry “safe” business logic; that is, business logic specific to the operation of the application that doesn’t expose trade secrets. It is also used to control application flow. Circling back to Backbone.js, in earlier versions of Backbone, the “router” object was called controller, but was later renamed to router. I think this was a smart move because a router is simply a history manager wrapper. It doesn’t control application flow. That’s the controller’s job.

For instance, I’m currently building an application where I have a master controller that instantiates several sub-controllers representing sub-modules of the application. Based upon user input (a click of a button or a link on a page), the controller decides which module to instantiate and display. It then tells the router to update the route to reflect the “destination” in the browser’s URL. In this respect, I’m using the router simply as a view: all it does is update the URL or, in reverse, tell the controller to display the right module or section depending upon what was entered in the URL.
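A rough sketch of that arrangement might look something like this – AppController, the sub-controller names, and activate() are all hypothetical, and the router is assumed to expose a Backbone-style navigate():

    //Hypothetical sub-controllers for two sub-modules; each knows how
    //to activate (instantiate and show) its own models and views.
    function AccountsController() {}
    AccountsController.prototype.activate = function() { /* build the accounts module */ };

    function ReportsController() {}
    ReportsController.prototype.activate = function() { /* build the reports module */ };

    //The master controller wires up the sub-controllers, reacts to
    //user input, and asks the router to reflect the current
    //"destination" in the URL. It touches no DOM and no model data.
    function AppController(router) {
        this.router = router;
        this.subControllers = {
            accounts : new AccountsController(),
            reports  : new ReportsController()
        };
    }

    AppController.prototype.showModule = function(name) {
        //Control application flow: activate the requested module...
        this.subControllers[name].activate();
        //...then have the router update the URL (the router acting,
        //in effect, as a view).
        this.router.navigate(name);
    };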

In other words, a controller is responsible for controlling application flow. That’s all it does. It can listen and respond to events in the model and views. It can even trigger events to other controllers. But in no way should it contain data manipulation or view functionality. Leave that to the models and views of the system.

Look, MVC is not rocket science. But it may feel as if it is especially if you’re doing it wrong. And believe me, lots of people are doing it wrong.

I made this chart based upon data cited in a San Jose Mercury News article that I read in today’s Business section. When I read those numbers, it made me think, “What would this look like on a chart?” Reading it in print is one thing, but seeing it on a chart tells a pretty grim story. So what does this chart tell us? That as Netflix stock was plummeting, Reed Hastings’ stock option grants soared. Note that these awards were given in each month listed in the chart. Rank and file employees get annual option grants, by the way.

I know that this chart paints a pretty bad picture. But if you add another data point to the graph, Value, then each grant would be worth approximately $1.25 million, so obviously this was a pre-arranged package. So much for Hastings’ compensation being tied to performance. Oh yeah, the compensation board will argue that the package will be reviewed yearly, and in Netflix’s case, they’ve indicated that Hastings’ comp package will be adjusted for this coming year based upon the performance of last year. That’s such a crock. The guy made some seriously bad moves last year, which resulted in Netflix’s stock plummeting to almost a quarter of its value in just two months. Yet his compensation package guaranteed him a $1.25 million monthly bonus irrespective of the performance of the company.

I wonder what kind of message that sends to Netflix employees?

Ever wonder why uprisings such as “Occupy” happen? You just have to read stories or hear accounts like that of Hastings’ compensation package to find your answer. Look, I can totally buy into the leader of a company getting paid more than the rank and file, and in great times, I can see them getting handed handsome packages for their leadership. But what I can’t abide is a situation like this, where an executive’s compensation package is not adjusted immediately upon a loss in revenue or reduction in membership.

My thought is that if you’re going to tie compensation to performance, then at the very least, review performance more frequently than just annually. And set up thresholds. For instance, at a certain net income level, an executive would get X in cash compensation and option grants. If there’s growth, then they would get more. But if net income falls or, in the case of Netflix, there’s an exodus of the customer base, then the compensation package would be adjusted down.

Makes sense, but the reality of the situation is that practices like this won’t be changing any time soon.

I once got this “emergency” project where I had three weeks to deliver a mobile prototype application that was to be demonstrated at a major user conference. I spent the first week creating a UML design for the app – while also looking for a back-end guy to build the Java APIs for me to call. Then I spent a few days prototyping some assumptions and testing our JavaScript library’s capabilities on various phone-based browsers. Once I proved that out, I had roughly 7 business days – plus a weekend – to deliver the project.

Five days and almost 600 lines of code into implementation, I realized that I was doing a boatload of coding; way too much, writing lots of code to address things that I hadn’t considered in my design. So I stopped coding, opened up my design, ran through the sequence diagrams, and realized that what would’ve helped was having an intermediary object act as a ViewController and manage the various view objects. So I went back to my class diagram, inserted the new object with the methods and properties that I expected it to have, re-worked my sequence diagrams, then went back to my main JavaScript file and…

…completely erased it…

I mean, Select All -> Delete.

When I redid the code, I finished it with less than 50% of the original lines of code and actually checked it in with a day to spare. During testing, only cosmetic flaws were found – no functional errors. I fixed those flaws, and the prototype was released and demoed live at the conference in front of over 1000 people. The point of all this is that once I had the right design, the coding was absolutely simple and straightforward. I wasn’t writing any compensatory code or dealing with deficiencies in my design, because the design was right.

Moreover, erasing all my original work ensured that I wasn’t influenced by my old code. I had to start with a clean slate. But in the end, I still beat my deadline by a day.

Now, this isn’t something I recommend for huge projects, but as a rule of thumb, if you find that you’re writing a lot of code – especially with object-oriented JavaScript – chances are your design is flawed. At that point, stop, re-evaluate your design, and back up to a place in your code where you can adapt to the better design. Yes, sometimes that means getting rid of all of it, but most of the time, you can back up to a reasonable place without erasing all your code. But in either case, don’t be afraid to scrap code; especially if it means that the final product will be superior to what you originally created.

In addition to writing this blog, I also write a fairly popular blog called GuitarGear. To write that blog, I interact a lot on various forums, and meet other guitarists. Especially in the forums, there’s lots of debate about which pedal or amp or guitar – what have you – works in a particular situation. Some of the threads go on for several pages. Invariably though, someone will pipe in and say something to the effect of, “Just shut up and play.”

Sometimes I feel like saying this when I’m in meetings with fellow geeks and a side debate starts on some topic; especially on the merits of one technical direction vs. another. In the best of cases, these discussions/debates yield a good direction; actually, in the end, they almost always yield a direction. But invariably, that direction could’ve been arrived at in a much shorter span of time. I think that part of the problem, especially with well-established teams, is that everyone’s comfortable with each other. That’s both a good thing and a bad thing. It’s good that you can rely on your work mates, but that comfort can dangerously lead to losing “the edge” of urgency to deliver as you belabor points. The plain fact of the matter is that we have product to build; we shouldn’t be wasting time on trivialities. So shut up and develop!

A friend at work often chides me, saying that despite the fact that I’m a Republican, the way I speak about politics makes me a Democrat. I’m just as Republican now as when I first registered to vote at 18, and I still hold to the traditional Republican values of small government, individual freedom, and conservative – as in judicious, not political – financial responsibility. My friend teases me because I have a much more moderate position with respect to my politics, which focuses on the issues and not the ideology, so I suppose it must seem to him that since I don’t speak politics like 95% of the Republicans out there, I must be a Democrat. He’d probably say the same thing about Maine Senator Olympia Snowe, who is one of the few moderate Republicans in Congress today, and who unfortunately is not running for re-election.

I read an article this morning about her frustration with American politics today, and despite the article’s title of “Frustrated Senator Olympia Snowe Give Obama an ‘F,’” the actual meat of the article focused on her general frustration with Congress. Here’s a quick excerpt:

“I think a lot of the frustration frankly in our party, in the Tea Party challenges or even Occupy Wall Street is really a reflection of our failure to solve the major problems in our country,” said Snowe. “It’s become all about the politics, and not the policy. It’s not about governing, it’s about the next election.”

So has this Congress failed the country on those critical questions?

“Absolutely,” Snowe asserted. “You have to sit down and talk to people with whom you disagree,” said Snowe. “And that is not what is transpiring at a time when we desperately need that type of leadership.”

What she said above mirrors EXACTLY what I’ve been talking about with others when discussing politics. Especially with my ultra-conservative friends, I’m often apt to say before going into a political discussion, “I’ll only engage in this discussion if we talk about the issue, not about the ideology. If you want to bitch about how Obama did this or Obama didn’t do that, then let’s talk about how the Sharks are doing instead. Whether you like the guy or not, we have real problems in this country, and discussing political ideology is NOT going to solve them.” We usually end up talking about the Sharks…

I read an article today, published in yesterday’s San Jose Mercury News Business section and written by columnist Chris O’Brien, entitled “Key Job Sector Losing Ground.” It describes how growth in science and engineering jobs over the past decade has remained flat relative to previous decades, and it plays the doomsayer a bit by suggesting that that flatness may have an effect on innovation. He does quote a researcher who said that perhaps the flat growth means a lack of demand for science and engineering jobs. Being in software engineering, I would tend to agree with that assessment. But I disagree that the flatness will constrict innovation.

I think that the flatness is actually a correction of the excesses of the dot-bomb era. Even in 2007, there was a minor uptick in the technology sector, and several companies, including my former company, SuccessFactors, added LOTS of engineers in a very short period of time. Unfortunately, during a boom period, especially in technology, the focus tends to be on putting “butts in seats” quickly as opposed to getting the right butts in the right seats. I saw that at SuccessFactors, where we added lots of really mediocre engineers to our software development team. Most of these “engineers” were the typical, “code-first-think-later” code-monkey types. As a result, in 2008 when the economy soured, the company had to shed that excess and frankly, unneeded baggage.

I’m probably sounding a bit crass and elitist, but honestly, I truly believe that what’s happening with the technology job growth, especially here in Silicon Valley has more to do with companies being careful about choosing the right people to fill their employment needs, and choosing only those whom they feel will take them to the next level.

People talk about jobs being shifted off-shore. To me, it’s natural that they’d go there. Think about the jobs being shifted off-shore. I don’t think I’d be too far off the mark in saying that those are jobs that tend to be more maintenance and production-level types of jobs. The real innovation stays here. Even with my previous company SuccessFactors, despite senior management constantly saying that our engineering group was “global,” and always tried to blur the lines between domestic and offshore development, in reality, all the innovative work took place here in the States; and even new product development offshore followed the models and innovation established domestically. Plus their designs were always subject to approval from the US-based team. So in consideration of that, to me, this “flatness” is not really flatness. I believe it’s shedding the production and maintenance workers, and distilling down to a workforce of innovators here in Silicon Valley.

Call me insensitive, but unlike Mr. O’Brien, I’m in the industry and have experienced the growth and decline of job numbers from behind the lines. Yes, I realize that I’m opining, but it’s not uneducated, and it’s not without experience in the sector.
