Sunday, November 10, 2013

What I Like About Agile

As an Old Hand, I've developed software under both Agile and traditional process methodologies. The traditional process enumerated requirements, architected the system, and produced the code as sequential phases. Each phase produced a document or artifact for formal review. Since the skills needed for these three phases are actually somewhat different, the various phases could even be done by specialist teams.

What I liked about the traditional process was that it was thoughtful and disciplined. Reviews at the end of each step ensured that all stakeholders agreed that the step was complete and the project was still relevant before the project was allowed to progress.

Several things were uncomfortable about the traditional process.
  • The great bulk of the work took place during the coding phase. There was not an obvious place to review the relevance of the whole project during coding.
  • Too often, the team only discovered it had missed a requirement or failed to do some critical bit of design deep into the coding phase. The end-of-phase reviews were good at checking for incorrect requirements or design issues, but too often failed to catch missing ones. By the time problems were revealed in coding, the project had spent hundreds of man-hours on design and coding that then had to be discarded and redone.
  • The team could not adapt its practice to improve the project because feedback only arrived at the end of each phase. End-of-phase reviews only helped improve the next project, and only if the team stayed together.
In my humble opinion, the Agile revolution brought just two new thoughts to software design.
  • Performing many small iterations of the requirements-design-code cycle instead of one big one makes feedback available earlier in the project. This gives the team a chance to improve their process to reduce cost, decrease uncertainty, and improve quality.
  • If partially completed software has any functionality, it can be released and start to earn value at once.
A host of specific practices, like feature cards, test-driven development, and pair programming, have arisen and been packaged into name-branded methodologies like Extreme Programming. But specific practices are not what makes a project Agile. Frequent iteration, frequent release, and frequent feedback make a project Agile, in my opinion.

Agile projects do the same three tasks (requirements, design, coding) that traditional projects do, notwithstanding certain agile extremists who say they don't. A sound Agile process adopts practices that ensure that all these activities provide early feedback, and that the team acts upon this feedback.

Monday, November 4, 2013

Agile Practices Reviewed

I was so horrified by the waste I perceived in prescriptive agile methods like XP that I came very late to the agile party. Here are some specific Agile practices about which I have thoughts.
  • Pair Programming: Pair Programming provides great feedback to individual developers on code quality, but it is very expensive. It's good for teaching inexperienced devs to code, but not so helpful when your team is already competent. Code review provides much the same benefit at lower cost.
  • Code Review: There are automated tools for code review that present the user a visual diff of the modified code and allow commenting and approval. While these tools are great for reviewing point changes during maintenance, they discourage thorough review of whole interfaces. It's too easy, sitting at your desk alone, to take a perfunctory glance at the changed lines and say, "Looks good. Ship it." Shops using automated review tools have to keep an eye open to be sure all reviewers are taking a decent amount of time to actually read the code. I have had good results from a heavy-weight design review involving actual meetings. The formality and required prep work for the meetings, and the expectation of face-to-face feedback from peers, induced higher quality even before the review took place.

    Linters and static checkers are tools you can use for code review too, even if you're only reviewing your own code.
  • Unit Test: I love unit tests. A good set of unit tests removes much risk from changing software. Unit tests written alongside (not after) developing the code form a check on the consistency and completeness of newly designed APIs. The needs of unit testing focus attention on separation of concerns and isolation of dependencies, both of which push up the quality of the resulting code (a minimal sketch follows this list). Unit tests are most effective when devs buy into the idea of testing as you go. If writing the unit tests is just an annoying check-box to a developer, they are unlikely to give it the attention required for a really good result.
  • Code as the Only Deliverable: Some agilists suggest that, since the code is the only artifact shipped to customers, no other artifact has any value. Production of requirements lists, design documents, schedules, and other non-executable artifacts should thus be viewed as wasteful.

    This advice has merit from an aspirational standpoint, but has practical weaknesses. Code is too voluminous and precise a language for expressing requirements. Code is too low-level to express architectural decisions. Code cannot express schedules at all. Other kinds of documents may be necessary for internal communication and review, even if they aren't delivered to customers.
  • Schedules: There is a persistent myth that Agile methods don't do scheduling. This is untrue on many levels.

    At the micro level, the effort for individual features must be estimated, and big features broken up into sprint-sized pieces (if you're doing sprints; otherwise you're doing big-bang development. Hisssss).

    At the macro level, Agile projects must do scheduling when stakeholders require it. That happens any time the project interacts with other projects, when long-lead-time hardware and mechanical packaging must be designed and ordered, or when the dev organization must release by a calendar date or forgo important sales (in time for Christmas, for instance). Only a small subset of Agile projects can safely ignore macro scheduling issues, or refuse to predict completion.
  • Refactoring: The world of software development is of two minds on the subject of refactoring. One school believes that any change to released software must be minimal, lest the change introduce bugs. The other school says refactoring is OK if it adds value by making the software more maintainable, more flexible, or better able to support new features.

    In my opinion, both schools are informed by experience. If the initial design and coding were weak, and there are no unit tests, then any change is risky, so change must be minimized. If, on the other hand, the initial design was well motivated and good unit tests are available, refactoring just makes things better and better (see the sketches just below).
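
To make the dependency-isolation point in the Unit Test bullet concrete, here is a minimal sketch in Python. The OrderPricer class and its exchange-rate lookup are hypothetical, invented purely for illustration; the point is that handing the dependency in, instead of calling a live service from inside the class, keeps the test fast, deterministic, and focused on the logic under test.

```python
import unittest


class OrderPricer:
    """Totals an order in USD. The exchange-rate lookup is injected so a
    test can supply a stub instead of calling a live currency service."""

    def __init__(self, get_rate):
        # get_rate: callable mapping a currency code to its USD exchange rate
        self.get_rate = get_rate

    def total_usd(self, amount, currency):
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return round(amount * self.get_rate(currency), 2)


class OrderPricerTest(unittest.TestCase):
    def test_converts_using_injected_rate(self):
        # The stub rate table stands in for the network-bound dependency.
        rates = {"EUR": 1.35, "USD": 1.0}
        pricer = OrderPricer(lambda ccy: rates[ccy])
        self.assertEqual(pricer.total_usd(10.00, "EUR"), 13.50)

    def test_rejects_negative_amounts(self):
        pricer = OrderPricer(lambda ccy: 1.0)
        with self.assertRaises(ValueError):
            pricer.total_usd(-1, "USD")


if __name__ == "__main__":
    unittest.main()
```

Writing the test alongside the class is what forces the design question of where the rate comes from to be answered early; that is the check on the consistency and completeness of a new API mentioned above.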
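
And to put the Refactoring bullet in the same terms: with unit tests like the sketch above in place, an internal restructuring can be made with reasonable confidence. The example below is equally hypothetical; it merely moves the conversion into a private helper without changing observable behavior.

```python
class OrderPricer:
    """Same public behavior as before; the conversion step is factored into
    a helper so a future discount or tax rule has one obvious place to live."""

    def __init__(self, get_rate):
        self.get_rate = get_rate

    def total_usd(self, amount, currency):
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return self._convert(amount, currency)

    def _convert(self, amount, currency):
        # Extracted without changing the arithmetic, so the existing tests
        # still pass and act as the safety net for the change.
        return round(amount * self.get_rate(currency), 2)
```

Had the earlier tests not existed, even a change this trivial would belong to the "any change is risky" school.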

Saturday, November 2, 2013

Why Androids are a Bad Thing

Building android robots must be absolutely the dumbest thing human beings are trying to do. It is dumber than genetic tinkering, worse than warming the climate, crazier even than atomic bombs.

The reason is obvious enough. The purpose of a man-sized, man-shaped, autonomous artificial intelligence, boiled down to its essentials, is to replace man. I have absolutely no problem with robots roving the surface of Mars, where I can't go, or cleaning up radioactive messes that would fry my bacon. But seriously, androids exist to put humans out of work, without replacing our need to eat.

Having replaced telephone operators, drafters, assembly line workers, machinists, and managers (and with our sights set on teachers and university professors), why are engineers beavering away so earnestly trying to replace everybody else? It's not a very smart thing for smart people to do. Haven't they read Frankenstein? Or I, Robot?

I try to be philosophical. Hominids were evolutionarily successful because they could adapt knowledge and social structures faster than DNA could mutate. Androids can evolve their physical structures and processing horsepower faster than DNA too. Maybe the last Neanderthal admired that gracile, tall-walking Homo erectus. Maybe I can manage to be proud of our robot descendants too.

I'm sincerely hoping to die peacefully in my sleep before this particular turd hits the turbine. Good luck to you new hands though.

Tuesday, October 29, 2013

Creating Frankenstein's Monster

In the familiar story of Frankenstein, Dr. F creates a monster, which later destroys him. Conventional criticism of Frankenstein refers to Prometheus, punished for stealing the gods' fire, or speaks of Dr. F's flawed relationships. But I draw a lesson about the unintended consequences of technology.

See, everyone wants to create the monster. In your mind, you see how it will be: new, and big, and so very, very cool. And you will control it. It will do your bidding. So you build the thing, all in a rush of late nights and exciting revelation. It is only when the monster rises from its slab and starts crashing around that you realize your control may be imperfect. Then the monster does something scary and altogether unexpected, and you realize that control was always an illusion. From apps with security holes to drugs with side effects to disruptive technologies that unravel social structures, unintended consequences are the dark side of innovation. When you solve a problem in a new way, you must consider whether your solution enables unintended results forbidden to previous solutions.

RFID tags are one of my favorite examples of Frankenstein technology. An RFID tag works like a paper label, only you can read it instantly, with a radio instead of your eyes. It doesn't matter if the tagged item is upside-down, on a pallet with 99 other items, or behind another object. At first, RFID looks like a very cool technology. It makes inventory or checkout a snap.

But then the monster starts to stir. RFID facilitates locating items in inventory. It also facilitates theft of valuable items without the need to hunt for them in every crate in the warehouse. RFID facilitates instant checkout, replacing human eyes, so it also enables theft by simply removing the tag. RFID provides remote reading. The walls that once kept your stuff apart from temptation suddenly might as well be glass, except that a metal box or plate stays opaque to the radio, hiding tagged items you expected to be visible. RFID tags on credit cards, driving licenses, and passports, and even the tags on ordinary items like subway cards and card-keys, identify individuals, evaporating the anonymity of the crowd. If RFID tags cannot be turned off, they are permanent beacons of identity. If they can be turned off, that function enables a potent denial-of-service attack against any user dependent on the technology.

These risks emerge directly from unintended uses of the technology as designed, in a world with multiple stakeholders. They are quite aside from the risks arising from errors in realizing the technology. Any of these risks might have been mitigated in the original design. Some may still be, but only if all the stakeholders' voices are heeded. Dr. F may care less that the monster terrifies some peasants, and more when it kills his own wife. The central problem is that Dr. F created his monster without even considering the trouble it might get into.

The more rushed the development, the fewer use cases get considered. If you're building a video game, maybe the worst that happens is that the game is unplayable due to griefers. If you're embedding software in a device with a long service lifetime, the chance is far greater that someone will exploit any lack of care in a way that you (or your company) will find painful.

Let the technologist beware, lest your name go down in history as the man who created the monster.