More sleeping with the enemy. Next up: Android

March 12th, 2011

As it’s now all about the money, it makes sense to look into developing for the only other viable modern smartphone OS, so I got an e-book on Android development and set out to port two of my simpler iPhone apps to it.

Much like the iPhone originals, rewriting them for Android didn’t take long, and gave me a good chance to assess Android development and compare it to the experience of iPhone development.

What did I find?

Well, first off, I almost hate to say this, because I know that one gets used to what they know, and something unfamiliar can often be perceived as frustrating, poorly designed, garbage, etc…

That said… I really, really am not a fan of Eclipse, which is the IDE that the official Android development pages tout as the de facto Android development IDE.

When the T-Mobile G1 first came out, we dabbled a bit in Android development.

It was hard to get past the fact that there was no visual layout tool for Android that was even in the same ballpark as Interface Builder for iPhone.

It’s been a couple of years. I figured that they MUST have improved the layout tool by now.

Ummm. NO.

Sorry, but it’s absolutely terrible.

On top of that, I think the Android emulator leaves a lot to be desired compared to the iPhone/iPad simulator.

But I could live with those. I actually, at times, found myself enjoying Android development.

Except for one little thing.

Almost every time I wanted to figure out how to do something with my UI, and I searched online to find the answer, most, if not all, of the answers started with something to the effect of: “in the layout XML, you…”

I’m not a fan of Java’s layouts. I’m also not a fan of performing tasks in XML. Lacking a good Interface Builder equivalent, I do most of my UI creation in code. Therefore, I need answers for how to accomplish things IN CODE. I’m amazed at how hard it can be to find such answers for Android.
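For the record, here’s the shape of the answer I kept finding: a state-list drawable resource, as described in the Android docs, that the layout references instead of any code. This is just a sketch; the file and image names (button_states.xml, btn_down, btn_up) are hypothetical:

```xml
<!-- res/drawable/button_states.xml (hypothetical file name) -->
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- shown while the button is pressed -->
    <item android:state_pressed="true"
          android:drawable="@drawable/btn_down" />
    <!-- default (up) state -->
    <item android:drawable="@drawable/btn_up" />
</selector>
```

Which is perfectly reasonable if your UI lives in layout XML. It’s no help at all if it doesn’t.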

But the real problem is that the answers aren’t just hard to find; sometimes they don’t exist. Android is a very nice mobile OS, with some very nice features. However, the SDK seems kind of half-baked, because a lot of those features aren’t available programmatically, or at least are unnecessarily hard to get to programmatically.

Here’s one example:

I have a button. I have two graphics files for that button. One for the up state and one for the down state.

I expect that, for any SDK on any OS that I’m using, I can find a call that lets me set those up and down images in a couple of lines of code, like this:


setImageForState (upImage, BUTTON_STATE_UP);

setImageForState (downImage, BUTTON_STATE_DOWN);



And sure enough, on the iPhone, we have:


[button setImage:upImage forState:UIControlStateNormal];

[button setImage:downImage forState:UIControlStateHighlighted];


So how do we accomplish the same thing in an Android app?

Would you believe:


button.setOnTouchListener(new View.OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        // swap the image by hand on every touch event
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            ((ImageButton) v).setImageBitmap(downImage);
        } else if (event.getAction() == MotionEvent.ACTION_UP) {
            ((ImageButton) v).setImageBitmap(upImage);
        }
        return false;
    }
});
No, I’m not kidding. And it took a while to find this solution; most answers just said, “in the layout XML…”

Sorry, but that’s ridiculous.

So the bottom line is: Android is a perfectly fine mobile operating system, and it may one day be a pleasure to write for, if they actually develop a decent layout design tool for it and give developers simple programmatic access to all of the functionality available through layout XML.


Giftory for Windows Version 1.0 Post-mortem

December 6th, 2010

Now that the Windows version of Giftory sits as a Release Candidate, with its official release imminent, I want to take a few minutes to look back on the last 12 months’ development experience with Windows, C# and .net 3.5.

Right to the point: C# is a very decent programming language and .net is a very decent framework to develop with.

Entity Framework turned out to mostly be a pleasure to work with and greatly contributed to my being able to get this project done this year.

I was quite surprised at how much less code the Windows version needed vs. what I had to write for the Mac version.

Years ago I inherited a Visual Basic 6 project. After working on it for a while, I thought, “this visual development stuff is nice for small projects like this PowerPoint add-in that I’m working on, but I wouldn’t want to try to use it to develop a full-fledged application.”

I was wrong.

Visual development works quite well for full-fledged applications.

For a person like me, who hates Microsoft and everything they have to do with such a passion, it’s surprisingly easy to give them the credit that they’re due for all of this.

But it isn’t ALL rosy in Windows Development World. There are still some great(er) things about developing on the Mac, in Cocoa.

Now, let me lay out in more detail the pros and cons I found for each.

This isn’t going to be a REAL post-mortem. Maybe I’ll do one of those later on.

The things that are better in developing on/for Mac OS X:

  • Core Data has built-in undo/redo support. Entity Framework doesn’t. I wasn’t even gonna think about trying to implement this support manually on .net, so the Windows version doesn’t “do” undo/redo.
  • Table Views: When I first started developing in Cocoa, I wasn’t a fan of Apple’s implementation of tables. “I want to put value ‘a’ in the table at position x,y. Why can’t I just do that?!?” But after getting used to the hows and whys of Cocoa table views, I’ve really come to appreciate them. It gets a little more complicated when you want to do special things, like putting popup boxes or other UI controls in a table cell, but it’s still pretty well supported. And, most importantly, binding Core Data values to table views works pretty well. Windows also has a pretty decent implementation of table views, and it certainly offers more customizability, by way of many more table events that can be intercepted. But greater flexibility isn’t always better. Multiple times I found myself having to experiment with which event was the appropriate one to put code into for a given situation. And sometimes it seemed like, despite ALL of those events, there was something about each one that was problematic for what I was trying to do, or none of them had my objects in the needed state when they fired. Especially when it came to Entity Framework’s integration with tables, I spent a lot of time, more time than I should have had to, trying to get things to work right, mostly when putting popup boxes or other UI controls in a table cell.
  • Sheets: Well-designed Mac apps just look better than well-designed Windows apps. One reason: sheets. If you’re going to have a modal dialog anyway, it looks a lot better to have the “window” attached to its parent window, sliding down out of the title bar, than to put a whole new, floating, movable modal dialog window on the screen.
  • One has never needed to know much about databases to set up a Core Data model. Model your entities and their relationships, and Xcode/Core Data handles most of the setup. This may be similar to Entity Framework in Visual Studio 2010, but not so with Entity Framework in Visual Studio 2008. I don’t DO databases. I shouldn’t have to know much, if anything, about databases or how to set them up. But, to get Entity Framework working, I had to divert my development efforts for a month or so to learn about the various types of databases available on Windows, which one was best for my purposes, how to set it up, how to modify it after integrating it with Entity Framework, etc… That sucked.
  • Built-in UI controls make apps easier to develop, better looking and easier/more pleasant to use. There’s no built-in UI element in .net to denote ongoing processing when you can’t map the progress to a percentage complete (an indeterminate progress indicator, in other words)? Really?!? My app is contacting a server, making a request and waiting for/receiving a response. There’s no built-in UI control to let the user know this is all going on so they can sit tight without wondering whether anything’s happening or if the app has locked up? Would seem like a no-brainer to have in your toolbox.
  • Exceptions: try/catch blocks are ugly. Honestly, I hate seeing them in my code. I only put them in when absolutely necessary. Frankly, they should almost NEVER be absolutely necessary. It is perfectly acceptable to have a call return an error code if it fails (or populate an error object) instead of throwing an exception. Giftory for OS X doesn’t have a single try/catch block in the code. Giftory for Windows has several.
  • Packaging apps in OS X is a thing of beauty. Many (most?) apps don’t need to be anything other than a single executable file. Okay, as developers we know OS X apps are actually folders, but to users they appear to be a single file. Windows would do well to copy this. Having to have a folder – a folder that can be seen as such by users – with files that, if they accidentally move/delete them could ruin the whole app, openly available on the user’s system is just plain bad. Open the “Applications” folder on a Mac. You see a bunch of pretty application icons that you can double-click to launch. Open up the “Program Files” folder on a Windows box. You see a bunch of folders. Ugly. Just. Plain. Ugly.
  • Core Data was introduced in OS X version 10.4. If I want to use Core Data, all I have to know is that the user is running OS X 10.4 or later. Apple does a wonderful job of integrating technologies into the operating system and making it easy for developers to utilize technologies with a minimum amount of effort. Microsoft could learn a HUGE lesson from this. I just spent the last week+ trying to figure out how to deploy an app to end users that is written in .net 3.5, uses Entity Framework, and uses SQL Server Compact Edition. Thankfully, LOTS of other developers have had the same problems I did, so there were always answers to my various questions available online. Unfortunately, often the “answers” were more like suggestions for things that worked for one person but not for another. Nevertheless, utilizing and deploying technologies that are provided by the OS vendor should not be this difficult.
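To illustrate the exceptions point in the list above: the error-code style I prefer keeps call sites flat, while the throwing style forces a try/catch wherever the call appears. Here’s a minimal sketch, in plain Java for neutrality (all names are invented for illustration):

```java
// Contrasting error-code returns with exceptions.
class ErrorStyles {
    static final int OK = 0;
    static final int ERR_NOT_FOUND = 1;

    // Style 1: return an error code; the caller checks it with an if.
    static int loadGift(String name, StringBuilder out) {
        if (name == null || name.isEmpty()) return ERR_NOT_FOUND;
        out.append("gift:").append(name);
        return OK;
    }

    // Style 2: throw on failure; every caller needs a try/catch block.
    static String loadGiftOrThrow(String name) {
        if (name == null || name.isEmpty())
            throw new IllegalArgumentException("no such gift");
        return "gift:" + name;
    }
}
```

With the first style, a caller just writes `if (loadGift(name, out) != OK) { /* handle it */ }`; with the second, the try/catch scaffolding shows up everywhere the call does.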

The things that are better in developing on/for Windows:

  • Sometimes when developing for OS X, you want to know when a certain “event” happens so that you can base an action off of it. You then search the documentation to see if said “event” calls a method in a delegate, or triggers a notification. Often it does, but sometimes it doesn’t. In .net, however, you can almost guarantee that your event is already defined in the framework and you can write code against it. Often, there are two or three similar events that you can use to accomplish the same task. Yes, that may seem to contradict something I said earlier, and it can be unnecessary to have too many events, but it’s still better to have too many than to have an occasion where you need an event but it isn’t there.
  • Data passing between objects: Cocoa has something called an application delegate that I use as sort of a repository for global variables, values that might be needed by disparate parts of my application. I didn’t find that .net had a similar methodology, but, surprisingly, I found that I didn’t need it. Data passing between objects somehow seemed to be either much easier, or mostly unnecessary. I’m still not sure how that worked out, but it did.
  • Creating custom views/controls is a much more pleasant experience in .net than in Cocoa. That’s possibly due to the integration of the form designer into the code editor, something which the next release of Xcode will also have, but it doesn’t yet. So, in .net, no dealing with xib files and no init’ing new controller objects with those xibs.
  • Even though I think Cocoa’s usage of sheets is much better than .net’s usage of dialogs, creating and using dialogs in .net is much easier than creating and using sheets or child windows in Cocoa. The integration of parent windows with their child windows is fantastic.
  • Customizability: This should come as no surprise. Apple is known for “forcing” adherence to their way of doing things. Microsoft is known for giving huge amounts of flexibility in the way things can be done. .net offers a lot more ways to do things the way you want them or to make controls work the way you want them to.
  • I don’t know much about databases, and I don’t want to have to. But I do know a little, and that little bit of knowledge went a long way in dealing with Entity Framework. Querying in Entity Framework turned out to be quite a bit easier than fetching results with Core Data. With Entity Framework you can write a simple query. With Core Data you have to build components that you use to construct a fetch request.
  • To this day I don’t know what most of the options in the Interface Builder data bindings section mean. Data binding in .net, on the other hand, is much simpler. Maybe Cocoa’s implementation has more flexibility and more power, but for what I was doing I generally didn’t need it. I could bind controls to Entity Framework entities or query results with a couple of lines of very simple code.
  • As stated above, Giftory for Windows version 1.0 has most of the same features as Giftory for Mac OS X version 3.5, yet there’s a lot less code in the .net project than there is in the Cocoa project.
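As a footnote to the data-passing point above: the application-delegate-as-global-repository approach I use in Cocoa could be sketched, in plain Java (class and field names invented), as a simple singleton that disparate parts of the app can reach:

```java
// A minimal stand-in for Cocoa's application delegate used as a
// repository for values needed by disparate parts of an app.
// (All names here are invented for illustration.)
class AppGlobals {
    private static final AppGlobals INSTANCE = new AppGlobals();

    private String currentUser;
    private int giftCount;

    private AppGlobals() {}               // no outside construction

    static AppGlobals shared() {          // roughly: [NSApp delegate]
        return INSTANCE;
    }

    String getCurrentUser() { return currentUser; }
    void setCurrentUser(String u) { currentUser = u; }
    int getGiftCount() { return giftCount; }
    void setGiftCount(int n) { giftCount = n; }
}
```

Any object can then call `AppGlobals.shared().setGiftCount(…)` without values being threaded through constructors, which is roughly how I use the app delegate on the Mac.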

I would note that I still spent my fair share of time lamenting, “I hate Windows!”, because of one thing or another that didn’t work the way I thought it should, whereas, in Cocoa, things usually worked the way I thought they should, even if I had to write more code to get to the finished product.

I guess this may not turn out to be my only Windows product in the long run, but I still think I can safely say that I enjoy working in Cocoa enough that a Cocoa version will always come first and always be my favorite.

Shhh!!! Don’t tell anybody!

December 5th, 2010

Giftory 3.5 is out! But nobody knows yet.

I started working on Giftory version… 4.0… I think… a year and a half ago. Didn’t quite get it done in time for Christmas 2009, so I shelved it and started working on Giftory for Windows.

As the year started winding down, I devoted some time back to the Mac version, trimmed down the new feature set, and came up with version 3.5.

Finally got that version done and tested a couple of weeks back.

Then I had to sidetrack things again to get a version built to put on Apple’s Mac App Store.

Finally, last week, I got it packaged up and put on the website.

So it’s been publicly available for a week now… but I haven’t “officially” announced it yet, because that’s waiting for the Windows version to be ready to be announced.

But if all goes well, everything will happen in the next couple of days.

Wow, some people really, REALLY like Microsoft and Windows!

August 29th, 2010

So, over the last two months I changed jobs. Now I work for a shop that does consulting… primarily Windows-based consulting.

I always knew that there were people out there who really did like Microsoft and their technologies. But W-O-W! Being here, I’m absolutely amazed at the degree of Microsoft adoration! I’ll admit that it does turn my stomach just a bit, but I have come to find that, as JavaEE, Apple, Google, Open Office, Facebook and others have become more relevant, Microsoft has become less relevant, and thus I don’t thoroughly despise them as much as I used to.

Besides, this is a really good group of people that I’m working with now. And even better, I get to be an insurgent, having a MacBook Pro on the company network and making iOS development contribute more and more to the company bottom line.

Perhaps my selling out is nearing a point of no return…


Developing applications rapidly: Cocoa vs. .net

August 29th, 2010

So I’ve spent some time now working on Giftory for Windows. Not nearly as much time as I should have… but some time nevertheless.

I must admit, C# is a pretty nice language and .net is a pretty nice framework. It helps that I did some work in VB6 a few years back, so the whole visual development thing is familiar to me.

So the first big issue I dealt with in developing the Windows version was creating the data and linking it to UI elements.

On the Mac, we’ve had Core Data and bindings for years. I figured Windows MUST have something equivalent.

Spent some time looking… bindings was easy enough… but Core Data… not so much.

Eventually I came to understand that the closest thing Windows has to Core Data is the Entity Framework. Now Entity Framework isn’t so bad… except that, unlike Core Data, Entity Framework is intimately tied into using a database. Core Data, on the other hand, is storage mechanism agnostic.

I don’t DO databases.

With Core Data, you design your data objects and their relationships, and Core Data handles the setup for you. While I read that the 2010 version of Visual Studio will do just that, 2010 was still in beta when I started this project. Honestly, I can’t believe that Microsoft was that far behind Apple for an object-graph management and persistence framework. But apparently Steve Jobs was right when he described how revolutionary Core Data was when he first unveiled it years ago.

Now that I’ve spent some time in Entity Framework, I would say it’s a pretty decent system. It was harder than Core Data to just pick up and run with without reading a ton of documentation first, primarily because it required way too much of an understanding of databases. But once your database is set up, it’s pretty usable, and integrates very well with data binding.

I remember with Core Data I just read a little bit, started experimenting, and even though I didn’t fully understand what I was dealing with, when things SEEMED to work, I banged on them a little more to be comfortable that I had it right… and ran with it. With Entity Framework I seem to have more confidence that things are going to work as expected and the data integrity is safe. Not sure why, though.

Too bad Visual Studio 2010 wasn’t available a year ago. Would have saved me a lot of time and trouble.

So now that I’m considerably behind schedule for getting the Windows version done well in time for this holiday season, we’ll see just how rapid this rapid application development system is.

I will say this: getting Giftory for Windows to the same state as Giftory for Mac version 1.0 has taken considerably less code than it did on the Mac. So maybe I’ll actually make this release after all!

What caused a Mac OS X Cocoa developer to defect to Windows and .net?

April 16th, 2010


Ah, I can hear it now, the relentless barbs from people who have known me since the 8 bit days… people whom I’ve leveled the same accusation against many times throughout the years.

I’ve been a non-mainstream, anti-wintel, pro-Motorola CPU fanatic who swore never to write a line of C# code in my life, ever since the days when you could write endless counting loop programs in BASIC on the computers on display on the electronics counter at K-Mart. (Except for the C# part. That didn’t happen till a few years later.)

So what trauma could cause me to betray a whole lifetime of pure, unadulterated Microsoft hatred?

Okay, so I didn’t really defect.

I thought about it… seriously… for a while.

(Well, for a couple of seconds. But that’s an eternity in CPU cycles.)

But in the end, practicality won out.

So what happened? Here’s the story.

Several years back I noticed my (now ex) wife struggling with pencils and erasers and several pages in a notebook with scribbled lists. It was late December and she was trying to make sure that our three kids all had the same number of Christmas gifts bought for them, the same number of relatively big, medium and small gifts, that each had about the same amount of money spent on them, and that each had certain “special” gifts covered, (each year they got an ornament and pajamas on Christmas Eve, for instance.)

I looked at this and said, “ummm… that’s the kind of thing computers are made for. Let me write you a program for that.”

And thus Giftory (Gift Repository) was born.

That next Christmas season I brought her over to the laptop and showed her what I had created for her, a nice little app meant to make her Christmas gift management easier.

She looked at it, played with it for a little bit, and promptly told me, “that’s nice, but I prefer using my pencil and notebook.”

<Spirit crushing moment number 1>

Nevertheless, I thought it was a useful little app, so I released it as freeware, posting it to the major Mac download sites.

Over the next year, I got more feedback from her, got a little feedback from people who had downloaded the app, threw some of my own ideas in, and released a new version during the next holiday season.

Again, she looked at it, played with it for a while, and eventually told me, “that’s a lot better, but I still prefer my pencil and notebook.”

<Spirit crushing moment number 2>

Over the next couple of years, more people downloaded it, and contacted me with suggestions. A couple of times I even implemented their suggestion immediately and sent them a custom build. I’m an old-school freeware/shareware developer from back in the small computing communities of the 80s. We did that kind of stuff back then.

Then, I got divorced. Someone who didn’t appreciate the awesome little app I wrote for her didn’t deserve me.

Just kidding. Seriously.

Nevertheless, I was on my own, with two daughters, (the third… actually the first… kid was hers,) to manage Christmas (and birthdays, and Easter…) for.

So I became the biggest user of my own program. Frankly, I thought it was fantastic!

Eventually, I decided that, while pretty good, Giftory could be much better, and if it was much better, it would be worth paying for.

So I rewrote it from scratch. (A year or so after I wrote it, Apple introduced Cocoa Bindings and, later, Core Data. These technologies were a perfect fit for Giftory, but it didn’t make sense to retrofit the existing code base to utilize them.)

Admittedly, as Christmas 2008 approached, I finished the commercial version of Giftory a little later than I had intended. I didn’t have time to recruit a proper group of testers and put it through a full test cycle, but I did the best I could, and I did get in all of the features that I had planned. I released it on November 12th. I figured that was just enough time for it to get a little traction before Black Friday hit and the start of the Christmas shopping season began.

I wasn’t sure how to best go about promoting an independent commercial Mac app, but I thought a good place to start was a press release to all of the appropriate outlets. prMac took care of that for me. As was suggested to me, I waited a couple of weeks for the impact of the press release to start to take effect.

Two weeks in… only a couple of mentions online, and only a handful of sales.


Then Black Friday hit.

Then the calendar turned over to December.

Still… no traction.

Panic started to set in. Gotta get more aggressive.

I started a couple of Google AdWords campaigns. I participated in an internal Apple promotion to provide a free license to all of their employees. I sent free license codes to all of the major online Mac outlets I could think of, requesting a review or at least some coverage in the few short weeks before Christmas. I wrote and released a free iPhone app to complement the functionality of the desktop app. I offered free licenses to everyone I knew personally, including my local Cocoa developers’ group. I posted about the application to the Mac LinkedIn group.

Of all the outreach I did, only one media outlet wrote a review. And although it wasn’t exactly the most positive review I could have hoped for, I still appreciated the fact that they did one at all.

<Spirit crushing moment number 3 – begin>

At this point, you may be saying, “you came to people in the month of December trying to get a review/coverage done before Christmas, and thought it would actually happen?!?”

So let me provide a little perspective on my thought processes.


I spent the formative years of my computing life on a little 8 bit system called the TRS-80 Color Computer. Back in the 80s, when the Commodore 64 was king, and Apple and Atari computers were Lords, the Color Computer was more like a peasant. Okay, it wasn’t quite THAT bad, but we never enjoyed nearly the level of software support of those other systems. (Think: Atari Jaguar vs. Sega Saturn and Sony Playstation videogame systems of the time.) We had a few magazines, but for most of their existence they would probably best be described as “struggling”. So, in that market, there was something of a bond between CoCo (Color Computer) software companies and the CoCo press. They needed each other. If you were publishing software, the magazines wanted to talk to you, help you, SUPPORT you, even if you were a one-man shop. And as for the users, they were often just happy that you were supporting the platform by developing software for it. In all honesty, back in those days there were users who would buy your product, even when they didn’t need or really even want it, just to support you for supporting the platform.

Fast forward.

I bought my first Mac in 1997. At the time, the Mac market could reasonably be described as “struggling”. The Mac community had a similar feel to what the CoCo community had back in its latter days.

Fast forward.

Traditional magazines and newspapers are struggling. The internet is king. Media outlets struggle to gain and hold traction in cyberspace.

Put all of this together, and, in December of 2008, when I was trying to get some Mac media attention/support, I was in the 1980s Color Computer community mindset. I needed them, but they needed me, too. They wanted to hear from me, to give me coverage, to support my efforts, to help me out.

Ummm… no.

My perspective was delusional. No matter how “down” the Mac market ever was, or how rough the landscape may be for media outlets today, it comes nowhere near the situation of the Color Computer market 25+ years ago.

I can say this calmly and rationally now. At the time, I was pissed.

<Spirit crushing moment number 3 – end>

Back to the story…

Christmas Eve night.

The Christmas shopping season has ended. The primary Giftory sales season has ended. How did we do???

Hmmm… we’ve given away several times as many free licenses to Apple employees as we have sold licenses to customers.

<Spirit crushing moment number 4>

I’m sure there’s lots of different reactions people can have to their spirit being crushed. Mine was…


Screw the Mac media. Screw the Mac community. Screw people for not “getting it” when it came to the usefulness of this app. That’s it. I’m done. I won’t do another thing in independent Mac software development. I’m taking my ball (that none of the other kids want to play with anyway) and going home.

Besides, the Mac lost a lot of its luster for me the moment Apple started putting intel chips inside of them. Ever since then, my intel-based computers have all gotten names with a demonic connotation to them: Hades, Mephisto, etc…

Wait! You know what? Move to Windows! That’s what I’ll do! Those users are less picky, they bitch and moan a lot less about user interfaces that may not be perfect to them. They’re everywhere, so it’s a lot easier to target them in promoting your app. The whole Mac world can go screw off!

Stew on it.

Stew on it.

Stew on it.

Calm down a bit. (Months later)

Okay. Let’s adopt a more realistic perspective.

The review did make some valid points as to features that should be in the app. A lot of time, thought and effort did go into this. I don’t want to throw it all away. Getting exposure is paramount to becoming successful, and getting exposure isn’t easy. I still love this app, still believe in this app, and want it to be the best that it can be. And I REALLY don’t want to ditch the Mac and move to Windows.

But the days of the small computing community where “we’re all in this together” and all support each other for the sake of the platform… those days are over. And they’re not coming back.

So let’s push forward and continue developing the app. But now it’s not going to be for “the love”. It’s not going to be to support the platform. It’s not going to be to support the “community”.

If we’re going to continue with this, from now on it’s going to be about one thing:

The money.

And maybe that’s what it should have been about all along.

So, what’s the new plan of action?

Continue developing Giftory on the Mac. Improve the iPhone supplementary app. Maybe even write a version of the mobile app for Android and other mobiles.

But that’s not all. If this is all about business now, then it makes the most business sense to bring Giftory to the masses. And that means…

A Windows version.

Yeah, that hurts. Reality bites and the truth hurts sometimes.

So I’ve bought my .net 3.5 and C# book, and started learning to develop the Microsoft way. (Well, the “new” Microsoft way, as, over the years, I’ve already done quite a bit of C++ and Visual Basic development in Visual Studio, prior to the existence of .net.)

Now the goals for 2010 include a major new release of Giftory for Mac OS X and a first release of Giftory for Windows.

And much of this blog, for the rest of this year, will be devoted to chronicling my adventures as a Cocoa developer navigating a major .net project as a .net and C# novice.

Ugh. It sickens me to admit this… but I’m actually kind of excited about it.

I feel dirty.

What makes a Software Engineer “Senior”?

April 13th, 2010

After 15 years of undergoing interviews in this business, one question always haunts me when I’m seeking a new position:

“Do I have the right to call myself a ‘Senior’ Software Engineer?”

The answer may seem obvious:

  • 15 years of professional development experience
  • Major if not sole contribution to numerous shipping applications
  • Numerous independently developed applications for multiple platforms
  • Experience leading in-house and offshore development teams
  • Multiple years of experience in C, Objective C, Java, Visual Basic and, to a lesser extent, C++
  • The title of “Senior Software Engineer” at several companies

But wait, there are good reasons why I still ask the question:

  • I don’t consider myself anywhere close to an expert in any language that I know
  • All of the code I write is written simply, at a novice level, so that anyone who has taken one class in the language can understand it. I don’t use concepts or language features introduced in the “Advanced Topics” sections of programming books. Therefore, I don’t understand those concepts/features when I encounter them in other people’s code, and I’m fairly lost until I can get my hands on a book to brush up
  • Practically every time I see a macro defined in code I think, “what in the world is that and what does it do?!?” because I never use macros myself
  • I don’t have a particularly good memory, so I find myself referencing documentation to get the correct syntax of or method signature for language features or methods that I’ve used practically every day for years
  • I’m not particularly skilled at using any IDE. I use them for basic editing, compiling and basic debugging. 90% of what most IDEs can do I’m not familiar with and never use
  • I almost never use profiling tools or any of the other very useful development tools outside of the IDE itself
  • I’ve never been particularly diligent at error checking or handling. I rarely use exceptions and never use assertions

So I can’t give a definitive answer to my question. Maybe it boils down to the larger question:

What makes a software engineer “senior”?

I wish I had some great insight into the answer… but I don’t.

When it comes down to it, I do call myself a “senior software engineer,” for a number of reasons.

Time spent in the industry DOES count for something. It’s a reasonable assumption that the more time you’ve spent working in the field, the more problems you’ve had to solve, the more your problem solving skills have matured, and the better equipped you are to overcome future problems in a reasonable time with reasonable solutions.

In a corporate programming environment, managing the personalities of your coworkers is almost as important as managing code. Software engineers are notoriously, and legitimately, criticized for poor social skills. Surviving in the industry long enough to become “senior” likely means that you’re at least adequate at managing the personalities of people with slight social disorders.

If I’m assigned a task to accomplish, I WILL accomplish it. The code I write may not be the most efficient, or use the most relevant features of the language, but it will WORK. And perhaps more importantly, the next programmer who comes after me, even if they’re fresh out of school, will be able to understand it.

My history as a self-taught programmer, I think, makes me more likely to be able to pick up a new language or framework and run with it when the company needs it. Plus, I’m more likely to be keeping up with new trends and technologies all on my own, because I care about more than just the project I’m working on in my 9 to 5, and the stuff I work on on the side is often more likely to be cutting edge.

So, at the end of the day, a Real Programmer, despite not being a Hotshot Superstar Software Engineer, can nevertheless make a great Senior Software Engineer.

Hopefully the next hiring manager who interviews me will agree, especially if they’ve read this. :-o

“Real Programmer” truisms, rules and maxims

April 12th, 2010

I’ve been programming for over 20 years.

I’ve been a professional software developer for 15 years.

There are certain things, in that time, that I’ve come to understand, both in terms of how things are, and how things should be.

Hotshot Superstar Software Engineers will probably disagree. But this is a blog by and for Real Programmers, so their opinion doesn’t count.

Without further ado, here’s what I “know”:

1) 90% of Real Programmers use only 10% of their IDE’s features

Over the years, IDEs have gotten more powerful, more flexible… and more complicated. Nowadays they can do everything except write the code for you. Oh wait, they can do that, too. Nevertheless, in my experience, most programmers use their IDE to do three things: edit code, compile code and debug code. Yeah, I know, that’s obvious. No, you’re not getting it. Most programmers don’t use their IDE to profile code. They don’t use it to set watchpoints. They don’t use it to attach to rogue processes. They don’t use it to create and reuse their own code snippets. They don’t use it to automatically refactor code. They don’t use editing features beyond cut/copy/paste. Many of them don’t use it to browse documentation. I even know one who still debugs using “printf” instead of using more modern debugging features. Most don’t even modify the default layout, hotkey or syntax coloring settings.

2) Real Programmers only use about a third of the features of a programming language

Object oriented programming languages are very powerful and very flexible. Most have esoteric features that let programmers accomplish great things with minimal coding. Real Programmers don’t use esoteric language features.

Also, anything listed in the “advanced topics” section of a programming book is probably a feature Real Programmers aren’t going to use and aren’t going to recognize or understand when they come across it in someone else’s code.

3) Hotshot Superstar Software Engineers use advanced features of a language just to show off the fact that they can

’Nuff said.

4) Programmers working on projects that will have to be maintained by other programmers SHOULDN’T use advanced language features

Unless you’re working on a personal project, odds are you’ll eventually move on and someone else will inherit your code. That someone else might be a Hotshot Superstar Software Engineer who knows more about the language than you do. But odds are it’s going to be a Real Programmer. Don’t be the jerk that makes life harder on your coworkers when you really didn’t need to.

Code that will have to be maintained by other programmers should be written, as much as possible, so that a novice can understand it.

Caveat: you work at a company that’s prone to random layoffs (even though the company is financially stable) or to shipping your job offshore. Programming complexity may lead to job security. THAT, I’m never going to argue with. If your company isn’t concerned about protecting your future livelihood, then you don’t need to be worried about protecting the future maintainability of their products.

5) Most software engineers have never actually typed “main(int argc, char *argv[])” (or similar) outside of a classroom setting

A lot of programmers out there came up in the era of modern IDEs. When they want to start a new project, the IDE takes care of certain basic stuff for them, like coding “main()”.

Maybe more importantly, many programmers who took up programming as a career path have never written a complete app of their own, or even started a REAL app of their own. Instead, ever since college they’ve taken jobs working at established companies where all they do is maintain or extend applications that were first created before they even started high school.

6) The best Real Programmers started (and usually continue) programming as a hobby

I know a lot of programmers. If I had to staff my own software company, I would do it strictly with people who started programming as a hobby, and it eventually turned into a career for them. Going to college to study programming wouldn’t mean anything unless you had started as a hobbyist first. Then it might be worth a bonus point or two. Maybe half a bonus point.

Hobbyist programmers program because we love it. Programmers who got into the business because they graduated high school, some aptitude test told them they’d make a good software engineer, and software engineering is a lucrative career, are in it for a paycheck.

7) Real Programmers who started as hobbyists in the 80s programmed on an Apple, TRS-80 or Atari 8-bit computer

I started programming in the 80s, as did a lot of the other programmers I know. Of everyone I know who was “into computers” back then, every single one who owned a Commodore 64 or early IBM PC/PCjr DIDN’T end up as a programmer. Most of the people I know who started on a TRS-80 Color Computer or Apple II or such ended up as professional programmers.

8) Being a “software developer” ONLY implies that one knows how to write code

There are several different designations given to programming jobs: computer programmer, software developer, software engineer, etc… (makes job searches really irritating)

But unless there’s a “Senior” in front of the job title or a number higher than ‘2’ after the job title, it doesn’t mean that the person knows how to use an IDE, knows how to use programming/debugging/testing tools, knows the first thing about databases or web technologies, etc…

A “software developer” could be someone who’s done nothing but program command line utilities for Linux in ‘C’ and never even used a modern application framework. And you shouldn’t expect any more than that.

9) When a Real Programmer says they’ve tested their code, all that necessarily means is that they’ve tested for success

A reasonably long piece of code can fail in dozens of ways. Real Programmers program because it’s fun. There’s nothing fun about testing for dozens of potential failure conditions.

However, a good Real Programmer can tell their testers (the people who are actually paid specifically to test for failure) exactly where their code is likely to fail and not recover gracefully.

10) Real Programmers like reinventing the wheel

Solving programming problems is fun. That’s why we do it. Very few, if any, problems we face haven’t been faced and solved 1000 times before we encountered them. The generally accepted answer to any programming problem is usually “out there” and easy to find.

But if you want someone to develop an application by piecing together a bunch of standard code fragments, then hire a trained monkey to do it. There’s no fun in that. There’s also none of the personal development or valuable experience you’d gain by developing your own solution.

Or… wait… is that actually descriptive of Hotshot Superstar Software Engineers?!?

Now I’m confused…

So what qualifies ME to write a developer’s blog?

April 11th, 2010

Frankly, I rarely read any developer’s blogs. But on the rare occasions I somehow get guided to one, it always seems to be from some superstar, hotshot guy who drives an awesome car because of all the bank he’s made in the business, or who has written books and gets quoted by everyone like he’s a prophet, or who seems to know every nuance of some technology like it’s his life’s mission to be THE Subject Matter Expert on it.

I have nothing in common with these people.

I still reference documentation practically every single day for class method names or method signatures that I’ve used repeatedly for years. I’ve been programming professionally for 15 years and as a hobby for over 20, and yet practically every single piece of code I write can be easily understood by someone who’s taken a single semester of programming in the given language. I don’t use “advanced concepts” in any language or framework. I don’t use 90% (or more) of the features that my IDE offers. (Gonna do a whole post on those two.) I know C, and Objective C, and some Java, VB and C++. I don’t know the first thing about PHP, or Perl, or JavaScript, or CSS or really even HTML. Why do other programmers assume we all know web technologies? I write desktop… and iPhone apps.

So I’m doing this to give a voice to programmers like me, REAL programmers. I remember back in the day someone circulated an article called “Real Programmers don’t eat Quiche,” which listed a bunch of things one had to do to be considered a “real” programmer. It was meant to be humorous, but in a way I think it touched a nerve. A lot of programmers really do look down on other programmers based on this or that.

Well, my experiences are different, and by nature I tend to see things differently than most, so my perspective is usually gonna be a bit off-center, and often will be considered “way out there”. Hopefully that makes for interesting reading.

Oh, by the way, the name. “One Lazy Programmer.” No, I don’t REALLY consider myself a lazy programmer. I consider myself a real, normal programmer. But I think “software engineers” like the ones I mentioned earlier, who like to criticize and look down on other programmers, would probably use a word like “lazy” to describe programmers like me. So I’ll beat them to the punch and wear the badge with honor. (Anyone considering me for a job in the future, please note that I said I’m NOT really a lazy programmer.)

So what shall I bang on first…