
Showing posts with label Agile Methodologies.

Friday, August 28, 2020

Glimmer Process (Beyond Agile)

Glimmer Process is the lightweight software development process used for building Glimmer libraries and Glimmer apps, which goes beyond Agile, rendering all Agile processes obsolete. Glimmer Process simply consists of 7 guidelines to pick and choose from as necessary until software development needs are satisfied. Not all guidelines need to be incorporated into every project, but it is still important to think through every one of them before ruling any out. Guidelines can be followed in any order.

GPG (Glimmer Process Guidelines):

  1. Requirements Gathering: Spend no more than a few hours writing the initial requirements you know from the business owner, gathering missing requirements, and planning to elicit more requirements from users, customers, and stakeholders. Requirements are not set in stone, but serve as a good starting point in understanding what the project is about. After initial release, only document small tasks going forward.
  2. Software Architecture and Design Diagrams: Perform software architecture and design activities as necessary by analyzing requirements and creating diagrams.
  3. Initial Release Plan: This guideline's motto is "Plans are Nothing. Planning is Everything" (said by Dwight D. Eisenhower) because the initial release plan is not set in stone and might change completely, but is still invaluable in launching a new project forward. Consider including alpha releases (for internal testing) and beta releases (for customer testing).
  4. Small Releases: Develop and release in small increments. Do not release more than 3 weeks' worth of work, preferring releases shorter than a week's worth of work whenever possible. Break releases down. If you need to support multiple platforms, release for one platform at a time. If a feature includes a number of sub-features, release them one by one instead of all at once. If a feature involves multiple options, release the default version first, and then release the extra options later.
  5. Usability Testing: Make sure to observe a user using your software during the first few releases or when releasing critical, brand new, unproven features. Ask them to complete a list of goals using the software, but do not help them complete those goals unless they get very stuck. Write down notes silently while observing them use your software. Once done with usability testing, you may then ask further questions about any of the notes you wrote down, and afterwards add the notes as feature enhancements and bug fixes. Implement them and then do another usability test with the improvements in place. Repeat only as necessary.
  6. Automated Tests: Cover sensitive parts of the code with automated tests at the appropriate level of testing needed. Run tests in a continuous integration server.
  7. Refactoring: Simplify code, eliminate code duplication, improve code organization, extract reusable components, and extract reusable libraries at every opportunity possible. 

See the up-to-date version of the Glimmer Process at the Glimmer (Ruby Desktop Development GUI Library) GitHub Repo:
https://github.com/AndyObtiva/glimmer/blob/master/PROCESS.md

Saturday, July 29, 2017

How To Test-Drive Rails Generators?

Rails generators are a form of meta-programming in Ruby. They're basically programs that write programs. To be more specific, in my open-source project Ultra Light Wizard (also a pattern presented at RailsConf 2014), the Rails generator ultra_light_wizard:scaffold generates models, controllers, views, routes, helpers and a migration automatically when the user wants to build a wizard.

How does one test-drive such a Rails generator though?

I started by writing a standard RSpec test and then quickly realized that if I were to test-drive what each file generated contains, I'd be opening a can of worms and getting lost in so much unimportant detail.

Then, I had an idea. Why not create a full-fledged Rails app as a fixture and place it under the "spec/fixtures" directory?

Next, how about I configure the app exactly as needed to run the generate command from a test, but leave it untouched during tests and copy it on every test run to avoid messing up the reference version?

Well, to explain, you end up with an RSpec test that looks like this:
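(The original post embedded the spec as a screenshot, which is missing here. Below is a hedged reconstruction of the idea in RSpec; the paths and names are illustrative, not the original code.)

require 'fileutils'

describe 'ultra_light_wizard:scaffold generator' do
  let(:fixture_app) { File.expand_path('../fixtures/reference_rails_app', __FILE__) }
  let(:app_copy)    { File.expand_path('../../tmp/rails_app_copy', __FILE__) }

  before do
    # Leave the reference app under spec/fixtures untouched; run against a fresh copy every time.
    FileUtils.rm_rf(app_copy)
    FileUtils.cp_r(fixture_app, app_copy)
  end

  it 'generates a working wizard in a real Rails app' do
    # the generator invocation comes next (see below)
  end
end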


Now, comes the tricky part. How do you test by invoking the generator on the project copy?

Well, I took the easy way out and simply dropped down to the shell from Ruby as follows:
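(Another hedged sketch, not the original code; the generator arguments are made up.)

  it 'generates a working wizard in a real Rails app' do
    Dir.chdir(app_copy) do
      # Run the generator against the copy, then kick off the copy's own test suite.
      system('rails generate ultra_light_wizard:scaffold project') or raise 'generator failed'
      system('bundle exec rake') or raise 'inner spec suite failed'
    end
  end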

In other words, by invoking "rake" from the Rails project copy, I started another RSpec test suite run from within my main test suite. Totally meta, eh!?

Now, here comes the meat of the work, in the form of feature specs written for the Rails project copy. To explain, the tests shown below are written from the user's point of view for the effect of running the generator in a real Rails app (the top part consists of helpers [e.g. fill_in_project_fields] and the bottom part has the test cases as "scenarios" [e.g. scenario 'can start wizard']):
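(A hedged sketch of what those feature specs might have looked like; only the helper name fill_in_project_fields and the scenario name 'can start wizard' come from the post, the rest is illustrative.)

require 'rails_helper'

feature 'Project wizard generated by ultra_light_wizard:scaffold' do
  # helper (illustrative body)
  def fill_in_project_fields
    fill_in 'Name', with: 'Home Remodeling'
  end

  scenario 'can start wizard' do
    visit new_project_path
    fill_in_project_fields
    click_on 'Next'
    expect(page).to have_content('Step 2')
  end

  # ... additional scenarios exercised the rest of the generated wizard ...
end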

And with that... problem solved. Here is an example test run:


Notice how the inside RSpec test-run (from the reference Rails project copy) reported 5 examples, and then the outer RSpec test-run reported 1 example (the one that spawned the rest).

The pros of this approach are that it covers the end result of generation and decouples the tests from the generator's detailed work.

The cons are obviously increased complexity, though that is balanced by the decreased complexity of writing the inner integration specs and the freedom of implementation it affords.

How have you handled Rails generator testing in the past? Care to discuss pros and cons?

Sunday, November 06, 2011

Agile Processes/Technologies in Non-Agile Hands - Part 1 of 4

One of the things I encounter often with folks disenchanted with Agile processes/technologies is that they tried them without guidance or proper training (e.g. XP or Ruby on Rails), failed due to reasons mostly related to lack of skill, and then too quickly blamed the failure on the technology and/or the Agile process. However, what I see when I check their codebase or environment is too often not Agile, but ad hoc software development disguised as Agile and sprinkled with a few superficially applied Agile practices here and there.

To be more specific, here are the things I see in their environment:

  1. Under-engineered code design with too much coupling between all the different pieces.
  2. Difficult-to-maintain test suite.
  3. Lack of long term vision for features being developed.
  4. Extremely under-optimized code performance.

Now, let's address each of these items in its own blog post, including the misconceptions non-Agile folks have about them.

"Under-engineered code design with too much coupling between all the different pieces"

In the Waterfall process, developers are encouraged to spend time designing the code up front before diving into implementation. That prompts developers to think through all the pieces, study coupling and cohesion, and come up with a design flexible enough to handle future requirements. Now, software design is an important skill for any developer, Agile or not, in order to ensure the code is maintainable. But here is where misconceptions come into play with many inexperienced developers. Waterfall developers think that if you design for every possible future requirement, the codebase will be so easy to maintain when those requirements arrive that they will be extremely productive. Thinking of it in black (extreme engineering) and white (no engineering) terms like that results in code so over-engineered that developing to meet today's business requirements slows to a crawl, since the code design tries to address requirements far into the future, maybe even 6 months or a year ahead. In my experience, this results in code so difficult to maintain that it discourages developers from following good design practices in the long term, as they take shortcuts here and there under business pressure whenever adding new features, resulting in a terrible code base.

People new to Agile who are not skilled with it develop a different misconception as a result. They think that Agile is a license to under-architect software for the sake of simplicity, forgetting about Agile's incremental, evolutionary approach to software design. That works well for the first few weeks of a project, but then very quickly deteriorates into software that is extremely under-engineered for the business requirements and performs terribly. That either makes the developers eventually get disenchanted with Agile, or makes developers outside the Agile community snicker (whenever you snicker, know that you are risking loss of insight about something), thinking "we are disciplined, spending time on architecting for performance and future requirements from the start, unlike these folks". Unfortunately, that statement also shows lack of experience with successful, sustainable business software delivery. The reality of a system over-architected from the start is, as I said before, extreme lack of productivity at the beginning of the project, which loses business trust, followed by developers neglecting to keep the code design of high quality because of how over-complex it is, resulting in a code base that keeps getting worse until the project needs a rewrite. Though some developers accept that as a fact of software development, in my opinion and experience it is only true in the sense that they choose to write code that makes their life hell and then convince themselves that they are merely accepting the "reality" that they themselves created.

Now, to best explain how Agile code design flows through a project, here is a diagram that contrasts code design complexity in Agile over time vs code design complexity flow in Waterfall and Adhoc processes (note that the diagram is used strictly for communication. It is not based on statistically collected data):

Code design complexity goes higher as more structure is added to the code. Think of 0 as an entire code blob lumped together in one file and 12 as code organized into many files along the lines of object oriented domain driven design and design patterns.

There are multiple things to note about the diagram:

  1. Complexity in the Adhoc process remains quite low over time as Adhoc developers tend to be averse to advanced design techniques such as object oriented design, inheritance, and design patterns since they consider them over-engineering.
  2. Complexity in the Waterfall process tends to go up in spikes since Waterfall developers tend to do big up-front design of software adding a lot of structure way before the need has presented itself.
  3. Complexity in the Agile process gradually climbs up following day-to-day business requirement complexity instead of preparing for future business requirements far in advance or simply ignoring complex design techniques in the name of simplification. Notice how the Agile graph overlaps the Adhoc graph in the first 2 months before it diverges later to handle increased complexity in business needs.
  4. Complexity in the Agile process tends to have dips that signal how Agile developers occasionally refactor their code to simplify when business requirements have changed and the code has become over-engineered for their current requirements.
  5. The area in the diagram between the Waterfall graph and Agile graph represents lost productivity by Waterfall developers due to having to work around over-engineered code.
  6. The area in the diagram between the Adhoc graph and Agile graph represents productivity lost by Adhoc developers due to having to work too hard with weak or non-existent abstractions (under-engineering), which leaves them dealing with difficult-to-untangle code (lack of separation of concerns).

Now, all of this might be common sense to a lot of developers. But, it takes a lot of practice to get disciplined at refactoring your code design regularly and boldly (with the protection of automated tests) to ensure it is neither over-engineered nor under-engineered per today's business requirements.

Still, there are quite a few other misconceptions that I would like to address, such as:

  • This all sounds good, but in "reality" developers do not have time to refactor their code regularly while still meeting goals for today's business requirements: This is like saying "All the health professionals' talk about food and exercise sounds good, but who has time to eat well and exercise while still earning a buck for their family?" Indeed, this can be a dilemma for many people, and I do not deny it. I am not the most disciplined at exercising, for example. But this is not how I think of it. The way I like to think of it is more along the lines of: "If I were to actually eat well and exercise, would I perform better in other areas of life?" Since the answer is yes, I trained myself to appreciate the taste of healthy food, thus eventually effortlessly consuming such food out of habit. I also figured out alternative ways of exercising, like snowboarding, frequent longboarding, and playing the drums, keeping myself in shape, again effortlessly. Going back to our original point: "If I were to become very disciplined at refactoring code regularly, would I be better able to meet business demands in the long term?" The answer from my experience is a most certain yes. In fact, you end up meeting the goals of today's business requirements much more often in the long term if you have a code base that is neither over-engineered nor under-engineered for today's needs. So, the key thing in the short term is to keep practicing the skills of Agile software development until you effortlessly develop the habit of just-enough design and disciplined refactoring.
  • What you describe as helpful increases in software design complexity are not really helpful to me. I never quite got the point of object oriented design, and I like to break my code into many small methods to handle complexity: While code design is certainly subjective to an extent, when working with a large code base that will be maintained by many future developers, organizing code in an object oriented fashion around the business domain concepts helps, in my experience, free my mind from thinking about blobs of code so I can focus on abstract concepts that relate directly to the business, and thus better manage complexity. It takes quite a bit of skill to think of code as abstract concepts instead of low-level data shuffling with for loops and if statements, bringing us back to the idea that this misconception may be due more to lack of skill in object oriented design than to a problem with the methodology itself. The same way structured procedural design is a step up from being able to code if statements and loops effectively in one procedure, object oriented design is, in my opinion, a step up from structured procedural design for business application development, and it assumes you are quite skilled in structured procedural design as a prerequisite. Of course, there are domains that benefit more from other paradigms, like the functional programming paradigm for mathematical computing and the logic paradigm for boolean algebra, which is why I recommend object oriented programming specifically for business domain model code that has grown in complexity beyond the manageability of structured procedural programming. If you find yourself uncomfortable with object oriented design, I strongly recommend getting more training in it, especially from people who are skilled at applying it in practical enterprise environments as opposed to classroom settings.

My next blog post will focus on "Difficult-to-maintain test suite." Stay tuned for part 2.

Monday, October 17, 2011

Any Good in Learning Big Up Front Design?

Since the beginning of the Agile movement, people have been renouncing big up front design in favor of incremental, emerging design of software. This occurred mostly as a reaction to overly complex designs and application architectures that slowed productivity to a crawl, especially on new projects. Does anyone remember the days of EJB 2.1? You had to configure a handful of XML descriptors and write quite a bit of code following the EJB 2.1 conventions before you got anywhere on a new project. Now, contrast that with the Ruby on Rails architecture and how it enabled developers to get a CRUD web application up and running within minutes. Most people miss the other side of the story though: why EJB's architecture was so complex to begin with. After all, development with a technology like Java's JSP alone was a lot simpler.

EJB came out as a reaction to insecure, badly written Java web applications that did not take advantage of transactions and threading correctly, and that mixed data persistence concerns with web flow concerns. Enterprise JavaBeans' main innovation was aspect orientation of concerns like transactions, security, threading, and caching, enabling developers to get their benefits without coding them directly. All they had to do was configure the XML descriptors to enable them, and voila! Additionally, Entity Beans were one of the earliest forms of ORM (Object Relational Mapping), Session Beans were one of the earliest forms of controllers, and Message Driven Beans were one of the earliest forms of web background workers (think Resque in the Ruby on Rails world). Also, Enterprise JavaBeans allowed nice separation between web flow and data persistence at the time (albeit with non-existent OO inheritance support), providing one of the most primitive versions of Web MVC.

Now, if you look at many of today's web frameworks, like Rails for example, you notice that they got many of the same features that EJBs innovated, such as easy transaction support, ORM, security (authentication via Devise and authorization via CanCan for Rails), threading (automatic spawning of threads for requests in web servers like Phusion Passenger), and caching.

Sure, if you need something much simpler, you can start a Rails app without any of the complexities of caching or other advanced features getting in the way. You may even use Sinatra and avoid moving to Rails until your app grows large enough to warrant the shift in design complexity.

However, if people had not spent time solving the problems plaguing web applications in the early 2000s, doing some big up front design, we might not have had any of the solutions that help an Agile business scale up with Rails today once it has more than a handful of visitors an hour.

How does this idea transfer to software design?

Developers new to object oriented programming often encounter topics such as inheritance vs composition and design patterns. If they bypass such topics with excuses like "inheritance produces overly complex code compared to procedural code reuse" or "design patterns make for overly complex designs", their blaming of the tools is surely a sign that they are not skilled with them and use them like a golden hammer, potentially in the wrong situations. If they avoid learning them, however, and eventually work on a business domain that grows complex enough that switch statements and giant if statements plague the code everywhere (for lack of object oriented polymorphism/inheritance or design patterns), then, having skipped learning the techniques of big up front design, they will unfortunately fail at incremental design as well: they will always stick to the primitive design techniques they started programming with (like structured decomposition of logic into methods) and thus be unable to scale up their code base to handle more complexity while keeping it clear and maintainable for their fellow programmers.

That is why I strongly suggest that developers spend time learning many of the big up front design techniques out there, such as Responsibility Driven Design, Domain Driven Design, Design Patterns, Architectural Patterns, UML Modeling, and even Model Based Development (the foundation behind the use of DSLs), simply for the sake of expanding their toolset for when their small, simple solutions no longer provide high productivity and maintainability given the increased complexity of their project. In other words, practicing big up front design in a safe learning setting enables developers to actually become (Oh My!) more Agile when the time comes to employ the advanced design skills.

I am sure I missed a handful of useful design tools out there, so I would like others to pitch in. What design tools would you recommend to others to learn and employ in their toolset?

Wednesday, November 17, 2010

Pain Driven Development

One of the key things I learned from XP and the Agile movement in general is programming for today's requirements not tomorrow's predictions. And, every time you start writing implementation for requirements that may become valid in the future, the Agile folks would shout "YAGNI" (You Aren't Gonna Need It). Applied to code architecture and design, Ward Cunningham summarizes this philosophy nicely with his famous quote "What's the simplest thing that could possibly work?".

But, what happens when today's implementation no longer fulfills today's requirements? In other words, what happens when tomorrow becomes today and requirements grow or change? One example is when 100,000 more users are added to the system, making performance requirements much greater. Another example is when supporting one state is not enough anymore, and the business is now expanding nationally to cover all 50 states.

That is where awareness of pain comes into play. I wrote a blog post about sensitivity to pain a few years back that talks about pain and pleasure when it comes to writing and maintaining code. Developing that awareness of pain is highly important in detecting when to update today's implementation with a higher level of complexity that addresses today's requirements.

Though people have different levels of tolerance to pain, it is a gift that they can feel it as it is often what pushes them toward action. And, in the case of software development, it can point out when today's implementation no longer serves today's requirements and needs to be revised either with a higher level of complexity, or sometimes with a lower level when some requirements are no longer needed.

When I first heard of the YAGNI principle, I remember shuddering a bit and thinking "Isn't it kind of dumb to write code that I will revise in the future when I have to support more states when I could have added in multiple-state support to begin with?"

Well, unfortunately, my thinking was shallow in certain ways. While the argument is logical at one level since following flexible design practices seems to make it easier to handle some future needs, it is much less trivial if I dig a level deeper and include more variables such as whether these future needs ever materialize in the next 2 years, or how much stepping around I am doing while adding new features, mostly because of complexity in code implemented for predicted needs that are not yet valid.

And experience only confirms the concerns I raised above and shows that keeping the code as simple as possible, only addressing today's known business needs, seems to make it easiest to maintain the code and add more features as more needs come up. That is because the code always remains as simple as possible, yet adjusted in complexity only as pain is felt day-to-day.

One example of this that I recently encountered was writing a web feature that relied on data from a web service. At first, the simplest thing that could possibly work was to have it request data from the service synchronously as users hit the site. Later, as requests for data got more complex and time-consuming for the service to fulfill, the implementation became painful to deal with performance-wise, so background caching of service data was added. That is a very good example of what I like to call "Pain Driven Development" :)
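To make that concrete, here is a minimal Ruby sketch of the same evolution (the service, class names, and data are made up for illustration; this is not the original code):

require 'net/http'
require 'json'

# Version 1: the simplest thing that could possibly work --
# fetch from the web service synchronously on every page hit.
class QuoteService
  ENDPOINT = URI('https://example.com/api/quotes')

  def quotes
    JSON.parse(Net::HTTP.get(ENDPOINT))
  end
end

# Version 2, added only once the synchronous calls became painfully slow:
# serve the last cached result and let a background job refresh it periodically.
class CachedQuoteService
  def initialize
    @cache = []          # last known good data
    @mutex = Mutex.new
  end

  def quotes
    @mutex.synchronize { @cache }
  end

  # Called by a background worker, not during the user's request.
  def refresh!
    fresh = JSON.parse(Net::HTTP.get(QuoteService::ENDPOINT))
    @mutex.synchronize { @cache = fresh }
  end
end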

Monday, November 15, 2010

What Continuous Integration Is Really About

Recently, I have been encountering a number of environments where developers work in multiple branches and do not integrate their code till the end of the iteration. They end up often spending hours fighting to merge the code in correctly, sometimes resulting in bugs or missed features.

When I see that, I cannot help but remember the pains of integration on 6-month-long Waterfall projects. I was a junior developer in an environment where developers spent 6 months implementing features in isolation from each other, integrating only right before the project deadline. As a result, they would run into enormous integration issues and spend 3 additional months fixing all of them before finally delivering.

Now, developers who integrate at the end of the iteration often end up with a similar result. They miss the deadline sometimes by a day or more, and end up with issues bleeding into the next iteration (e.g. missing features due to bad merge).

When I encounter such environments, and hear that developers branch out at the beginning of every iteration before developing their own features, I shudder and point out that they are not following the Agile practice of Continuous Integration. They immediately shoot back with something like "We have Cruise Control set up" or "We do not have the resources to set up a CI server", which only reveals ignorance about what Continuous Integration is really about. What I was actually saying is that they are not integrating continuously into one common branch, and thus not resolving integration conflicts on an hourly or daily basis, but letting them accumulate until the end of the iteration, causing an integration snowball effect.

It is an unfortunate part of human nature to be lazy about acquiring knowledge. You always want the least amount of learning to get you where you want to go, so people often fail to dig deeper than what they hear and miss out on the deepest essence of what they are learning. For example, a lot of developers who learn MVC from frameworks like Struts or earlier editions of Rails know just enough MVC to get by, but never spend time digging into the true essence of MVC from Smalltalk applications (or desktop development in general), and thus fail to apply it correctly. You end up with bloated controllers instead of most of the non-control behavior being split out into models. By the same token, a lot of developers who hear of Continuous Integration from the marketing lingo of CI servers think that is what Continuous Integration is all about.

Here is how Martin Fowler describes Continuous Integration:
Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.


Notice how the primary emphasis is on members integrating their work frequently; at least daily if not multiple times a day. Also, see how having the automated build is secondary and only there to support the primary goal. So, when developers work in their own branches and do not integrate till the end of their iteration, they are not fulfilling the primary goal of resolving conflicts often before they get big and hard to resolve, and having a CI server does not make them a team that is properly doing Continuous Integration. While a CI server certainly helps them when they integrate at the end of the iteration, they still have to deal with bigger integration issues than if they were integrating daily if not hourly.

Now, working in branches certainly has its place. It is useful when doing a spike, building an experimental feature, performing big architectural changes, or even working on a separate release altogether that would not go out until a few months later. Of course, in the case of a separate release, the code would probably not get merged back into master and can be thought of as a separate project (even if it branched off the original project's code base). And in the case of big architectural changes, it is preferable, if possible, to do them in small slices within iterations, relying on a branch only as a last resort.

Local branches in source code control systems like Git and Mercurial have their place too. You can perform work in a local branch every day if you like as long as you integrate it back to the main branch at the end of the day or every few hours. Used that way, it would still be in line with the practice of Continuous Integration.

Takeaway?

Integrate early and often on the same branch (daily/hourly) and you will leverage the benefits of Continuous Integration on your Agile project by delivering more on time and avoiding big merge/conflict issues.

Friday, October 22, 2010

Rails Tutorial Submission at EclipseCon 2011

I submitted this tutorial proposal for EclipseCon 2011. If it catches your interest, please comment on/vote for it by following the link at the bottom.



Title: Behavior Driven Web Development with Rails, Aptana, and Cucumber

Abstract:

If you have been waiting for a good excuse to finally plunge into the Ruby world and learn the wildly popular Rails web framework, this is the tutorial to attend at EclipseCon. Not only does it cover the basics of Ruby and Rails, but also behavior/test driven development with Cucumber/RSpec. The Eclipse-based Aptana RadRails IDE will be the main tool used in this tutorial.

The tutorial will include:

  • Basics of Ruby including some basics in meta-programming
  • Basics of Rails covering MVC and configuration
  • Basics of Behavior Driven Development with Cucumber
  • Basics of Test Driven Development with RSpec
  • Basics of the Eclipse-based Aptana RadRails IDE for Ruby on Rails development
  • An example Ruby on Rails application that walks students through learning the different components of a Rails application

Most of the material of the tutorial will be inspired by the Rails training courses I have given at Obtiva, such as the Rails TDD Bootcamp and the Rails 10-week Evenings Course (self-authored).

Link:
https://www.eclipsecon.org/submissions/2011/view_talk.php?id=1985&search=difficulty%3D%22easiest%22

Sunday, May 30, 2010

Pair Programming in the Wild

Pair-programming is probably one of XP's most controversial practices, and that may have been one of the reasons I initially got attracted to it about 5 years ago. After all, sticking only to mainstream practices will yield mainstream results, whereas studying out-of-the-box practices that may potentially yield a world of difference in productivity and quality is like discovering an O(1) algorithm in comparison to an O(n) one: big difference!

So, what is pair programming about anyways?

In a keynote presentation that I gave at the Agile Comes To You Seminar last week, I defined it as:
Two programmers solve problems together on one machine:

  • The driver (person on the keyboard) focuses on writing code
  • The navigator focuses on strategic direction

Notice the emphasis on how the programmers "solve problems together" as opposed to writing code together. In other words, writing code is not the bottleneck; solving problems is.

If writing code was indeed a bottleneck, then pair programming would have been a very different skill. It would have been about one programmer learning how to type on two keyboards at once instead of two programmers typing on one keyboard. It would have been about dedicating your left brain for one computer monitor and your right brain for another. It would have been about writing code that writes code for you. All of these things would have been interesting skills to master if writing code was the bottleneck.

In reality though, writing code is just a tiny concern in comparison to solving big programming problems for business. And, here are just a few examples of the problems I am talking about:
  • Where do I put the responsibilities for reporting on a collection of objects to make the code as maintainable as possible in the future?
  • What is the most efficient SQL query I can write to have the report run fast?
  • Do I need pagination or is the result set small enough?
  • Is it worth applying the State Pattern to this problem or are the state related actions few enough to warrant not applying the pattern?
  • Do I need a layer of presentation objects between the models and the view or would the code end up simpler without it?

I cannot emphasize enough how often I have spent hours on such problems on my own, only to take a break, talk to another developer, and get an immediate solution from their point of view.

That made me curious about all the scenarios that benefit from pair programming:
  • Decisions related to code aesthetics/API often get resolved quickly when validated against another developer's opinion, resulting in finishing faster and with clearer code.
  • When deciding on one of multiple alternative solutions to a problem, a developer working alone may hesitate quite a bit about picking what is best for the team. Having a second developer present provides more confidence and speeds up the decision process.
  • Synergy is the idea of 1 + 1 > 2. This can help a lot in solving problems that involve creativity. Often developer A has one solution in mind that is not optimal and developer B has another solution that is not optimal. So, leaving one developer to implement his solution alone may yield mediocre results whereas having the two developers discuss their solutions first may yield a new solution that is much better than the two original ones.
  • When solving a problem that requires multiple skills (e.g. OO skills vs SQL querying skills), it is common that no one developer on the team is the best in all of them. So, having two developers work on the problem will increase the chance of addressing all parts of the problem optimally, and at the same time cross pollinate the developer skills. For example, I have learned quite a bit from pairing with a developer who was proficient at SQL, while I helped him learn quite a bit about OO design.
  • When the driver spends too much time focusing on a problem that is of low priority, the navigator who has more of a bird's eye perspective will often notice that quickly and prevent the driver from getting derailed for a few hours unnecessarily.

Under the surface though, there are less apparent under-estimated benefits that improve developer skills and the development team quite a bit in the long term:
  • Having developers socialize while programming on a daily basis increases team bonding and commitment toward the success of the project.
  • It can be quite fun, thus greatly motivational.
  • When developers of different experience levels pair together, they cross-pollinate their knowledge, learning quite a bit from each other and getting stronger in the long term. One example of this is the number of shortcuts I learned while programming with the Eclipse IDE on Java projects. I got to a point where I could do almost anything by keyboard without ever wasting time reaching for the mouse. And whenever I paired with new programmers, they would be surprised by the number of shortcuts I knew, and tell me that it intimidated them to learn that many shortcuts. I had to explain to them that it was like watering a plant: I learned my shortcuts a few at a time over 12 months of pairing with different developers, thus expending minimal yet consistent effort.

Given that I am clearly sold on pair programming, does that mean I do it all the time? Well, there are cases when I avoid it for practical reasons:
  • I get exhausted from pairing for 5 hours straight. Yes, pairing can get exhausting, so it is important for a pair to realize the point at which they need to take a break from pairing.
  • I come to work tired from lack of sleep. I know I would not be effective pairing in that mode.
  • I have boilerplate work that is mind-numbing, such as data setup or the like. In this case, typing would indeed be the bottleneck, which can be a bad sign indicating lack of automation or having the wrong person do the job (a developer doing the job of a data entry clerk).
  • I would like to work with a new technology on my own for a while in order to solidify my learning of it, after having spent some time pairing with someone on learning it.

So to summarize, pair programming is about synergistically solving problems, not just having two developers typing on one machine. As a result, the benefits are:
  • Increased productivity
  • Higher code quality, indirectly contributing to productivity in the long term.
  • Better solutions, indirectly contributing to customer satisfaction.
  • Increased team commitment
  • Continuous improvement to developer skills

Comments are welcome, especially to share personal experiences or ask questions about pair programming in the wild.

Tuesday, November 24, 2009

Conditionals in Unit Tests

One of the questions newcomers to TDD (Test-Driven Development) often ask is: how can I trust test code to be correct?

Well, the reality is that it is not black and white. Not every instance of implementation code is prone to bugs (think getters and setters) and not every instance of test code is perfectly free of bugs. However, as software engineers, we are more concerned with the practical aspects of software development, and experience seems to indicate that if you write your test code in a linear fashion without using conditionals, then it is less prone to having bugs, and thus can serve as a useful tool in driving reliable implementation code as per the requirements specified.

Back to the question: how can I trust test code to be correct?

Test code often follows this structure:
  • Pre-conditions setup
  • Action being tested
  • Post-condition verification
For example (in Ruby):

# pre-conditions
time = Time.new

# action
hours = time.hours_between("9am", "2pm")

# post-conditions (specified with RSpec)
hours.should == ["9am", "10am", "11am", "12pm", "1pm", "2pm"]

Since that code is linear and free of conditionals, if it parses successfully, it generally expresses what it says without much ambiguity and thus has very little chance for error.

Now, let's look at a version of the implementation after a few tests have been written:

def hours_between(start_time, end_time)
  (numeric_time(start_time) .. numeric_time(end_time)).map do |numeric_time|
    textual_time(numeric_time)
  end
end

def numeric_time(time)
  meridian_indicator = time[-2..-1]
  numeric_time = time.delete(meridian_indicator).to_i
  numeric_time = meridian_indicator == "am" ? numeric_time : numeric_time + 12
  numeric_time = 0 if numeric_time == 24
  numeric_time
end

def textual_time(numeric_time)
  meridian_indicator = numeric_time < 12 ? "am" : "pm"
  textual_time = numeric_time < 12 ? numeric_time.to_s : (numeric_time - 12).to_s
  textual_time = "12" if textual_time == "0"
  textual_time + meridian_indicator
end

Notice how much complexity there is in reading statements that involve conditionals. It's doable, but it definitely takes work despite how well factored the code is.

Note that the implemented functionality is still not entirely correct, as it only works when the specified range falls between 1am and 11pm. More tests need to be written to drive the rest of the implementation. However, given that the tests do not have any conditionals, they provide us with an automated way of verifying that our implementation works according to plan.
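For instance, a follow-up test to drive the missing wrap-around behavior can stay just as linear and conditional-free (a hedged sketch, not from the original post):

# pre-conditions
time = Time.new

# action
hours = time.hours_between("11pm", "1am")

# post-conditions (specified with RSpec)
hours.should == ["11pm", "12am", "1am"]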

So, avoid conditionals in unit tests, and they will help you drive more reliable implementation code.

Thursday, March 26, 2009

XP West Michigan and EclipseCon 2009

Tyler Jennings, Jake Scruggs, and I went on a road trip to Grand Rapids, Michigan yesterday and gave two presentations at the XP West Michigan user group. Jake presented "What’s the right level of testing?", and Tyler and I presented "Pairing Parody". Atomic Object hosted us and gave us a tour around their office. They had a nice open environment for pair-programming and a street light that signals the state of project builds (red for failure, green for success, and yellow for build in progress). Overall, it was a very hospitable experience, thanks to Michael Swieton.

Today, I flew to the Silicon Valley area around San Jose, California to present at EclipseCon 2009 tomorrow. I will be giving a short talk about Glimmer and the current state of the project.

If you have any specific questions about the project, feel free to list them here in the comments, and I will try to cover them in the talk.

Monday, March 09, 2009

Agile 2009 Workshop: TDD Ping Pong Match!

By the way, Dave Hoover and I proposed a workshop for Agile 2009 titled: TDD Ping Pong Match!

It's a new and improved version of the same workshop conducted in Toronto last year at Agile 2008.

Abstract:

Attendees will be entered into a competition where they will pair-program on implementing small software application features following the TDD Ping Pong game rules. Each game will last a few minutes, and the programmer with the least time driving (i.e. doing the simplest thing that works and coming up with the most tests) will be declared the winner. This game is a great opportunity to learn TDD and Pair-Programming effectively and pragmatically. Winners will receive prizes, so sharpen your TDD Ping Pong skills and get ready for Andy and Dave’s challenge!

Tuesday, March 03, 2009

Agile 2009 Talk: Pairing Parody

I submitted a proposal for this talk with Tyler Jennings:
http://agile2009.agilealliance.org/node/2370

Comments are welcome.

Abstract

Pair programming requires a certain level of social interaction (Yikes!!!) that quite a few developers are not accustomed to - even with their peers. Learning to work effectively with people of different personalities and skills (or even Jedi powers) can sometimes seem daunting compared to the typical habit of working alone in a dark corner. We’ve seen pairing work wonderfully, we’ve seen it be an abysmal failure that involved some shouting and storming away, and we’re going to bring our best-of moments to you in comedic fashion.
Process/Mechanics

First we’ll cover the basics for anyone who isn’t familiar with the drama that often accompanies pair programming. Afterward, Andy and I will be acting out various skits, demonstrating both effective and dysfunctional pairing scenarios. Discussions (and laughter or cries of terror) are highly encouraged at the end of each skit.

Basic Patterns

* Be Verbose
* Questions Not Demands
* Pair Negotiation
* Ping Pong Programming
* 2 x 2 Pairing
* Independent Research / Spikes

Mentoring Patterns

* Let the student drive
* Let the student fail
* The third wheel
* Test Mentor

Anti-Patterns

* The Prima Donna Programmer
* The Apathetic Programmer
* The Human Compiler
* Worshipping The Hero
* The Professional Driver

Learning outcomes

* Recognizing a dysfunctional pairing session
* Patterns for pairing effectively with various personality types and skill levels

Tuesday, February 03, 2009

Obtiva HackFest and Glimmer Tree Data Binding in Progress

Today, instead of having our weekly GeekFest meeting at Obtiva, Dave organized a HackFest instead.

Colin, Dave, Roy, and Nate worked on Colin's ultimate MIDI controller project. Tom, Leah, and Jake worked on the Metric Fu Aggregator. And Tyler, Turner, and I worked on implementing Tree Data-Binding support for Glimmer.

In one hour, we made as much progress as we could, which amounted to coming up with a design for the data-binding syntax in this unit test:


module Node
  attr_accessor :parent, :children
end

class Person
  include Node
  attr_accessor :name, :age, :adult
end

def test_root_node_only
  adam = Person.new
  adam.name = "Adam"

  @target = shell {
    @tree = tree {
      items bind(adam, :name)
    }
  }

  assert_equal 1, @tree.widget.getItems.size
  assert_equal adam, @tree.widget.getItems.first
end


To bind SWT Tree items to a model node hierarchy, you pass two parameters to the bind command: the root model node and the name of the attribute to be used for displaying text in the tree.

More tests will eventually be written to test data-binding a root model with children.
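Here is a hedged sketch of what such a test might look like (my illustration, not code from the HackFest):

def test_root_node_with_children
  adam = Person.new
  adam.name = "Adam"

  eve = Person.new
  eve.name = "Eve"
  eve.parent = adam
  adam.children = [eve]

  @target = shell {
    @tree = tree {
      items bind(adam, :name)
    }
  }

  root_item = @tree.widget.getItems.first
  assert_equal 1, root_item.getItems.size
end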

In the future, this will probably grow in other directions to handle tree node images and selection.

Test-driven design is a common practice at Obtiva as it gets many birds with one stone: ease of exploration of ideas, test-coverage, clean code design, and higher productivity due to incremental development preventing endless hours of debugging.

Stay tuned for more on Tree Data-Binding support in Glimmer in the future.

Sunday, December 21, 2008

Black Box vs Invisible Box of Professional Practices

When a non-technical client requests software development services from a consulting shop, and they agree on the development of a certain application (the what), is it a good idea to discuss the practices that will be followed in order to get the job done (the how), or would mentioning the practices be irrelevant to a client who is not technical and has no expertise with software development?

For example, a certain consulting shop may choose to build the client's application following XP practices because from experience, developers at that shop have found that they finish developing features faster and with higher quality when following the XP process.

Is it relevant at all to mention these practices to a client who knows nothing about XP without being asked first? Or is it initially better to provide the client with the minimum amount of information needed, such as the number of developers, hourly-rate, estimated development time, etc...?

If the client is curious to know how the application is being developed, the details can always be provided when asked. The reason I am asking the questions above is that mentioning these details too early can often muddy the water and give the client authority over practices they do not have the qualifications to decide on.

For example, mentioning radical practices such as test-driven development and pair-programming sometimes stirs up discomfort and confusion about how these practices work, due to lack of experience with them (like the classic misconception that the two aforementioned practices reduce productivity instead of increasing it). The client may then demand removal or adjustment of the practices when, in fact, the main reason the practices are followed is to serve the client in the most professional and productive way possible. That leads to the often frowned-upon micro-management of practices, thus hindering creativity and productivity.

Without micro-management, on the other hand, when the client is simply sold the work hours of four developers with a certain estimate for the application release, the developers can choose whatever way they desire to accomplish the client's goals. If they decide that it is more productive to pair on certain tasks and always write tests first, that is their decision. Trusting them with that freedom can often result in higher creativity and productivity in the long run, freeing the client from focusing on practices (the how) in order to focus on coming up with the best product to build (the what).

After all, when I request from a company to build me a house, I do not want to be bored with all the technical details about how it is going to be built. I just want a quality home built by a certain date. If I get interested enough to know about the practices followed, I will ask, but I do not want to waste my time hearing about them before the beginning of the project if they do not affect my involvement with it.

In either case, clients still need to hear about the practices that directly affect their involvement with the development of the product. For example, stand-up meetings and iterative planning are two XP practices that need to be agreed on with the client before beginning the project in order to get the client's commitment and effectively apply them.

How would you answer the questions asked above? Are you in favor of having a black box or an invisible box around the professional practices that do not directly affect the client? Keep in mind that in either case, the box is transparent. It's just that with the black box, the client has to look closely to see the practices (like looking through tinted glass), whereas with the invisible box, all the details are visible even before asking for them.

Wednesday, December 17, 2008

Pair Programming Tour

Corey Haines is a journeyman craftsman who's actually on a pair-programming tour.

He visited us at the Obtiva studio in Chicago on Dec 5 and pair-programmed with my colleagues Joseph Leddy and Turner King:


Here is a video of an interview he also conducted with our Chief Craftsman Dave Hoover:


His tour blog is interesting in general as it includes interviews with a number of software craftsmen such as Uncle Bob, Brian Marick, David Chelimsky, and Micah Martin. Check it out here:
http://programmingtour.blogspot.com

Monday, December 15, 2008

Dynamic Languages Replacing Static Ones?

A number of years ago, Uncle Bob (Robert C. Martin) wondered if dynamic languages would replace static languages for mainstream software development.

Here is a summary of an article he wrote in 2003 titled "Are Dynamic Languages Going to Replace Static Languages?":

For many years we've been using statically typed languages for the safety they offer. But now, as we all gradually adopt Test Driven Development, are we going to find that safety redundant? Will we therefore decide that the flexibility of dynamically typed languages is desirable?

The static languages being referred to by the article are typical mainstream static languages such as Java and C++ by the way, not languages like Haskell or Objective Caml.

When writing all code test-first with 100% test coverage of the implementation, one seriously begins to wonder whether static typing is helping or in fact hindering design flexibility in comparison to dynamic typing.

Here is a metaphor that may help us understand this better.

Static typing is like choreographing circus stunts with safety built into the moves. The circus people can only practice moves that are safe and would not result in them crashing down in front of the audience. While such moves will still look good due to the great practice put into them, they are generally conservative due to safety consciousness.

On the other hand, dynamic typing is like choreographing wild circus stunts, attempting such wild moves that if successful completely wow the audience.

Traditionally, performers would get into horrible accidents while practicing such wild moves, which is the reason why static typing became the norm in software development for a while. It was after all, the only way to achieve reliable business software delivery.

However, when the concept of a circus safety harness was introduced, performers were able to practice the wildest moves and still be safe from injury. This is similar to having a unit-test suite act as a safety harness for all the dynamically typed code you write. It lets you go wild with your software design, achieving unmatched productivity and flexibility in your code while remaining safely covered by your tests.
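As a small hedged illustration of that harness (my own example, not from the article), the kind of mistake a Java compiler would flag is caught just as quickly by a plain unit test in Ruby:

require 'test/unit'

class Invoice
  def initialize(amount)
    @amount = amount
  end

  def total_with_tax
    @amount * 1.08
  end
end

class InvoiceTest < Test::Unit::TestCase
  def test_total_with_tax
    # If a change ever passed a String amount into Invoice (the kind of
    # "type error" a static compiler would catch), this test would blow up
    # immediately -- the suite plays the compiler's role.
    assert_in_delta 108.0, Invoice.new(100).total_with_tax, 0.01
  end
end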

So, did test-driven development change the playing ground enough to enable developers to leverage dynamic languages in favor of static ones without the traditional fears that encouraged static typing in the first place?

Friday, December 12, 2008

Writing More Code Is Easier!?!

The statement in the title may sound like an oxymoron, but some developers I met seem to believe it.

A developer came to me one time asking for help with a problem he had struggled with for hours. The first question I asked, as I usually do, was: "Are the tests passing?" His response was that he was under a lot of pressure and had to deliver a feature very quickly for the manager. In other words, no tests were written, and the developer was planning to write them after he had verified through the web browser user-interface that the feature was fully functional.

I took a look at the code and saw that it consisted of more than 50 lines of heavy logic packed into one gigantic method. It was difficult to read and decipher, but after I got a grasp on what the developer was attempting to do, I realized that most of the logic could have been done with one line of code using one of the 3rd party libraries used heavily on the project.

I asked the developer if he thought of checking whether that feature was in the library first before writing it from scratch, and he said he was so much in a hurry, he did not want to spend the time on that kind of research.

Well, I ended up writing a test as if the method never existed (keeping with the test-first philosophy), commented the method code out, ran the test and got it failing, added a few lines of code to pass the test, wrote another test, wrote some more code including the one line needed from the 3rd party library, and voila. A method with no more than 7 lines of code was working perfectly according to the tests that specified the behavior.

So, while it may seem like writing more code is easier than doing the hard work of DRYing it up, applying design patterns, and reusing libraries that can help, more code means more code to maintain and more potential bugs to deal with. In other words, more work in the future that becomes a lot harder to deal with than writing less code that is simpler and more reliable.

Sunday, December 07, 2008

NIH Syndrome or Blowing Reuse Out of Proportion?

I see a lot of developers nowadays favoring re-inventing the wheel over reusing an external library or component, especially in the Ruby and Javascript communities.

The first thing that pops into my mind when I see them doing that is NIH (Not Invented Here syndrome).

From my experience, reuse is not just about getting the features I need. It is also about stability, performance, usability, and other non-functional requirements that have been ironed out by the people who have created the reusable component and the communities that consumed it. Unless it fails in most of these areas, I usually don't see the point of re-inventing the wheel. While it seems simple to do on the surface, it is those hidden edge cases and bugs that creep up now and then that get me when re-writing it. This happens in the code when I have not thought of every possible scenario and in the design when I have not exercised the component in a real-world setting.

What's the flip-side to this argument though?

A lot of people in the agile community place great importance on minimalism. An ideal minimalistic code base does no more than what your application needs. So, if you end up reusing components that do a lot more than needed, the complexity of learning how to configure them for your needs ends up offsetting the productivity gains of using those components. In that case, re-inventing a smaller, simpler wheel becomes a viable option.

The key thing though is determining whether a required feature is complex enough that figuring out all its edge cases and usage scenarios would be worth the time to re-write from scratch vs simply reusing an existing component that gets the job done. That trade-off is the key determining factor.

Does the programming language used play any role in the matter? I'd say yes. For example, I rarely find this option of "re-inventing the wheel" viable in Java since writing components in it is very slow. On the other hand, dynamic languages like Ruby and Javascript make it so quick and easy to build a component from scratch (when writing tests first) that it often is easier to re-write smaller components than to figure out how to use 3rd party ones.

What is your opinion of the matter? Do you have examples where you reused a component vs other examples where you ended up writing your own?

Thursday, December 04, 2008

The Illusion of Delivery

Writing code without following certain professional practices often reminds me of a story I heard in Stephen R. Covey's book, The 7 Habits of Highly Effective People. The story was told to illustrate the difference between a leader and a manager. Here is roughly how it goes:

A group of people are cutting trees in the forest. One of them is a leader and another is a manager. The leader decides to climb a high tree at one point to see if they're making progress in the right direction. To his surprise, he finds out that they're actually cutting the wrong forest, so he yells at the top of his lungs "Wrong forest!!!" What does the manager say in response? "Shut up! We're making progress!"

This is pretty much what happens whenever a professional developer tries to remind a colleague to write tests before writing code, to pair-program, or to do paper-prototyping of user-interfaces before implementing them.

When the colleague resists the suggestion, reasoning that it is more important to deliver than follow one of the suggested practices, would the resulting delivery be real progress? Or would it just be an illusion?

Let's look at test-driven development for an example. If a developer is delivering code without writing tests first, he may be delivering features "now," but if those features have bugs that later take at least double the time to fix that would have been spent writing the tests in the first place, was that real delivery or was it just an illusion?

What about skipping pair-programming and working solo? After a few years of experience with pair-programming, I am completely aware now of how much slower I am at delivering complex features whenever I work on them on my own. Often, such features are scrapped completely and redone after a review with someone. Was that real delivery or just an illusion?

How about delivering user-interfaces without doing any sort of conceptual design or paper-prototyping, let alone user testing? I've seen quite a number of user-interface screens developed and declared done, only to be completely overhauled afterwards because they reflected the developer's mental model as opposed to the user's. Was that real delivery or just an illusion?

Writing tests first guides developers towards better code design, provides a safety harness that enables future refactoring of the code and maintenance by other developers without unexpected bugs popping up in other areas of the application, and keeps progress real and linear as opposed to illusory and unpredictable.

Pair-programming saves quite a bit of time by having developers sell each other on the benefits of certain designs before diving to implement them, results in transfer of knowledge between team members that makes them much stronger, and gets the benefit of unexpected synergistic solutions that often save hours of work. That's often much better delivery than having two programmers working independently all the time.

Finally, a little bit of time spent designing user-interface paper prototypes (10 minutes) and a bit more time spent testing the prototypes with real (or model) users (10 more minutes) saves developers a great deal of time otherwise spent rewriting bad user-interfaces or struggling to arrive at a reasonably usable user-interface for the actual users.

So, the next time you think about skipping one of the recommended professional practices, ask yourself: "Will I be delivering to the customer the most value in the long run if I skip this practice, or will I just be kidding myself into thinking that I'm delivering now when I'm hindering delivery in the long run and keeping myself in an illusion?"

One yardstick I use to evaluate whether my resistance to a suggested practice is legitimate or is just plain human resistance to change is to also ask myself "Am I only resisting it because I don't see how it can help the customer or is there a little bit of feeling of inconvenience to adopt it?" If there is even a nugget of feeling that I do not want to be inconvenienced by change, I give the practice another chance, keeping in mind that often it takes getting good at a certain practice before I can reap benefits from it.

Final Note: there are of course situations where some of the practices mentioned above may not be appropriate. For example, in a situation where a legacy app requires one extra feature and the team is not trained in TDD, it wouldn't make sense to apply it, but it does make sense to think about implementing that practice strategically in the future.