Adam Goucher

I'm Moving [Mar. 20th, 2006|08:42 pm]

Rather than pay both for hosting and for the ability to modify templates etc. on LiveJournal, I've installed WordPress on my new domain. So if you are visiting this site after Monday, March 20, 2006, click through to the new address below.


For those of you who have it bookmarked/linked, the new address is http://adam.goucher.ca. All the content has been moved over, though some of the posts are pretty ugly as the LJ->WP conversion stripped some stuff.

And for those of you who are subscribed (all 10 of you as of this moment), I'll redirect the FeedBurner feed Wednesday, March 22, 2006 at some point in the evening to point to the new site.


Software Security [Mar. 16th, 2006|12:28 pm]

Rather than load up DI.fm upon first arriving in the office today, I listened to a podcast with Gary McGraw, who appears to be making the rounds to plug his new book.

Here are my take-aways:

  • Most security people come from a networking background, not a software one. This gives the bad guys who are attacking your software an unfair advantage, as they are software people.

  • Software people, through a lack of education, often think that security is a feature that can be added in, but security does not come from 'magic crypto fairy dust'.

  • Automated code analysis tools find bugs, not flaws. People are the only things that can find flaws.

  • Seven ways to make software more secure (this list, I believe, makes up the book he is flogging):

    1. Good code reviews; both automated and manual

    2. Perform architectural risk analysis

    3. Do software penetration testing

    4. White-box, risk-based security testing

    5. Abuse cases. If a developer says "A user won't ever do it", do it. Then giggle evilly

    6. Have explicit security requirements for your application

    7. Operational security. This is where the network security people and the software security people put everything together.
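The abuse-case idea in point 5 translates naturally into test code: deliberately do the things "a user won't ever do" and check that the software rejects them. A minimal sketch, where `parse_username` is a hypothetical function under test (my invention for illustration, not anything from the podcast or book):

```python
# Abuse cases: deliberately do what "a user won't ever do".
# parse_username is a hypothetical function under test, with a toy
# implementation standing in for real product code.

def parse_username(raw):
    # Reject empty, overlong, or control-character input.
    if not raw or len(raw) > 64:
        raise ValueError("bad length")
    if any(ord(c) < 32 for c in raw):
        raise ValueError("control characters not allowed")
    return raw.strip()

abuse_inputs = ["", "x" * 10_000, "admin\x00", "a\nb"]

for bad in abuse_inputs:
    try:
        parse_username(bad)
        print(f"FAIL: accepted {bad!r}")  # the evil giggle goes here
    except ValueError:
        print(f"ok: rejected {bad!r}")
```

Each input that slips through unchallenged is a candidate bug report, written before a real attacker writes it for you.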


Schedules [Mar. 15th, 2006|08:29 pm]

It has been my experience that one of the toughest things to manage about the testing phase of development is the creation and maintenance of the schedule. One result of the test->fix->test process that invariably happens is that testers are constantly pulled around the product: test feature A, encounter a bug that prevents forward movement, test feature B, and so on. This pattern can, and often does, occur more than once a day, especially early in the phase. Getting pulled back and forth like this can easily derail the order of a project schedule as people report working "about a day" or "until lunch" as the duration worked. Okay, these are extreme examples, but even "half-day" is pretty vague. Did they work a 10 hour day, or a 6 hour day? Most software shops have embraced the notion of flex hours, where as long as you get your scheduled commitments completed you can work whenever you want, so there is no way to really, REALLY know how long a task took. This knowledge is critical as new projects will use the previous project's actuals as the basis for this project's estimates.

So Adam, what do you suggest to try and rope in a test schedule? Glad you asked...

  • Measure in hours - Stop thinking in terms of "man days". Think instead in terms of hours and then roll things up into days later. Management is not going to care that task x takes 80 hours; what they want to know is how many days it will take (10, or 2 single-person work weeks). Testers then report their progress against tasks in hours. This is a more defined value of time and allows people to work on different tasks during the day while still providing a very clear measurement. Then, for inputting into the project plan, just divide the reported hours by the "standard" work day length. Or if you are reporting via a spreadsheet template, do the calculation right in there. Note: once you start this, do not start watching the total number of hours a person logs in a week and forcing them to meet it. Only use that number if things are ridiculously out of whack one way or the other. This is similar to not measuring your testers against the number of bugs they find in a set period.

  • Task independence - While not always possible, the schedule should be designed in such a way that each testing activity affects exactly one line item in the schedule. If a testing task can be applied to more than one item, how are you supposed to accurately measure the time taken? Apply the same time to each? Then you have just injected a fictitious time period into the mix. Allocate the amount? How do you decide what percentage accurately reflects the time taken? It could be easy to figure out in some cases, but might not be in others.

  • Task granularity - Most testing tasks can be broken into smaller components. These smaller components should be recorded as sub-tasks which can be reported against. This allows for clear trending and task separation. Example: it might take me 5 hours to run the automated framework against server platform x, but what if 3.5 hours of that was environmental work trying to get an instance of the LDAP server I am interested in running correctly? Without a high level of granularity we cannot see that, at the end of the project, I spent 6 days (once we aggregated the hours reported) fighting unrelated software. And if we had a better process in place to handle it, then either we could have pulled the schedule in a bit, or I could have done more exploratory testing.

  • Sufficient detail - Yes, the project schedule is a "living document" and does not live in isolation. It should, however, be able to stand on its own. Each item should have a title descriptive enough to allow the testing activity to commence without having to consult 2 or 3 documents to figure out what the heck the activity is supposed to be. Example:

    Bad - Upgraded Oracle LDAP Support

    Good - Upgraded Oracle LDAP (9.0.4) Support

    By just adding the version number to the task item, the value is increased significantly as you can instantly tell what the upgraded version is. Without it you need to find the version number by combing through the requirements and supported platform matrix, and both documents may be inaccurate as things change. As people are ultimately measured on how they deliver against the commitments in the schedule, it is the schedule that should be considered law.

By doing these things, the numbers at the end of the schedule will be much more realistic. Which in turn helps your estimates for the next project be more realistic. Which means less of a "crunch" at the end where you are working silly long hours. And no one really wants to work silly long hours (now that the myth of making a killing at IPO time has been quashed).
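The hours-to-days rollup from the first bullet is trivial to script or drop into a spreadsheet. A sketch of the arithmetic (the 8-hour "standard" day and the task names are assumptions; use whatever your shop defines):

```python
# Roll reported task hours up into schedule days.
STANDARD_DAY_HOURS = 8  # assumed "standard" work day length; adjust to taste

# Hypothetical hours as testers would report them against tasks.
reported_hours = {
    "LDAP upgrade testing": 14,
    "Automated framework run": 5,
    "Environment wrangling": 3.5,
}

def to_days(hours, day_length=STANDARD_DAY_HOURS):
    """Convert reported hours into 'standard' schedule days."""
    return hours / day_length

total_hours = sum(reported_hours.values())
for task, hrs in reported_hours.items():
    print(f"{task}: {hrs} h = {to_days(hrs):.2f} days")
print(f"Total: {total_hours} h = {to_days(total_hours):.2f} days")
```

The same division can live in a spreadsheet cell; the point is that testers report the precise unit (hours) and the plan still speaks the unit management wants (days).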

Done and DONE-Done [Mar. 13th, 2006|06:28 pm]

This week's feature article on StickyMinds is about completion requirements, which is another thing I feel people in QA should be involved in defining. Here are some checklists to get you started thinking about what done really means.


Requirements Definition

  • Are requirements of sufficient detail and clarity to be developed upon?

  • Are requirements of sufficient detail and clarity to be tested upon?

  • Is the origin of the requirement (potential customer, existing customer, market movement, etc) recorded?

  • Have all outstanding questions from within the organization regarding the requirement set been addressed?

Feature Development

  • Are new feature design documents accurate and checked into source control?

  • Are existing feature design docs that were impacted by this feature updated and in source control?

  • Are the unit tests for this feature in source control?

  • Has the new feature been integrated into the automatic build process?

  • Have the unit tests been run by Test to verify their validity and worth? Unit tests that only check "perfect" data do not really do much to increase the quality of the code base. Likely the "perfect" condition will be one of the few the developer checks during integration testing.

  • Is there a bug associated with all FIXME or related comments in the code? By definition, if a developer puts something like FIXME in their new feature code, it means that there is something to fix in there. Something to fix equals bug.

  • Are there no more TODO or related comments in the code? Again, by definition, if there is something left "TODO" in a feature, then the feature is not 100% complete.
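The FIXME/TODO checks above lend themselves to automation. A minimal sketch that scans source files for marker comments (the marker list and file extensions are my assumptions; extend both for your codebase):

```python
import re
import tempfile
from pathlib import Path

# Markers that, by the argument above, each imply an open bug.
MARKERS = re.compile(r"\b(FIXME|TODO|XXX|HACK)\b")

def find_markers(root, exts=(".c", ".py", ".java")):
    """Return (file, line number, line text) for every marker comment."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore")
            for lineno, line in enumerate(text.splitlines(), 1):
                if MARKERS.search(line):
                    hits.append((str(path), lineno, line.strip()))
    return hits

# Demo against a throwaway directory so the sketch is self-contained.
demo = Path(tempfile.mkdtemp())
(demo / "widget.py").write_text("def f():\n    pass  # TODO: handle errors\n")
for f, n, text in find_markers(demo):
    print(f"{f}:{n}: {text}")  # each hit is a candidate bug to file
```

Run it at "feature complete" time: every hit either gets a bug number or gets deleted.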

Feature Testing

  • Is there a new Feature Test Plan checked into source control?

  • Are test plans of other affected features updated and checked into source control?

  • Were new feature test cases developed and checked into source control?

  • Were existing test cases for other features updated as a result of this new feature?

  • Is there a clear mapping between the feature test cases and the feature requirements?

  • Have final test results been archived in source control?

  • Has the feature been added to the automated testing solution? Or has a plan been recorded in source control regarding how the new feature could/should be integrated into the automated testing solution?

  • Do all modifications to the automated testing solution framework have associated unit tests? Practice what you preach.

  • Have you tested with non-ASCII data? The software market is global these days and not everyone limits their character usage to the standard ASCII set.
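The non-ASCII point is easy to put into practice with a few canned strings pushed through every text-handling code path. A sketch (the encode/decode round trip is just one cheap probe among many you might run):

```python
# A few non-ASCII strings worth pushing through any text-handling code path.
NON_ASCII_SAMPLES = [
    "café",              # Latin accents
    "Größenwahn",        # German umlaut and sharp s
    "日本語テスト",       # CJK
    "Здравствуйте",      # Cyrillic
    "\u200b zero-width", # invisible characters
]

def roundtrip_ok(s, encoding="utf-8"):
    """Encode/decode round trip -- a cheap smoke test for mangling."""
    return s.encode(encoding).decode(encoding) == s

for sample in NON_ASCII_SAMPLES:
    assert roundtrip_ok(sample), f"mangled: {sample!r}"
print("all samples survived the UTF-8 round trip")
```

Feed the same samples into form fields, file names, and database columns; anywhere they come back mangled is a bug ASCII-only test data would never find.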

Current Product Engineering

  • Have unit tests that verify the bug no longer exists been checked into source control?

  • Have both the bug fix and unit tests been brought into the other relevant trees?

  • see Feature Development items

This list is only what popped into my head on the train. There is likely (absolutely) more. Add yours below.

Where are the Heroes of Quality? [Mar. 11th, 2006|05:15 pm]

This post has been bouncing around my head in a number of different formats for a couple days now. The gist of it is this. Who is there out there for people new to QA/Test to look up to and try to emulate? Or more importantly, to draw new blood into the field?

I don't think there is any person, or group of people, with the name recognition to pull someone who is not already on the testing trajectory onto it. Programmers have Bill Gates, Larry Ellison, Linus Torvalds et al. to look up to. Money and power are sexy. Finance people have the likes of Warren Buffett to look towards. Again, super successful and as a result super moneyed. The Internet Bubble minted a number of .com millionaires (or billionaires in the case of Google). Outside of technology, you just have to look at how many kids want to be pro athletes. Money. Fame. Power. So how come there isn't such a person in the field of QA/Test?

  • Money (or lack thereof) - One thing most, if not all, household names have in common is the size of their bank account. In the technology realm, most of the money is made when a company goes public or is acquired. By definition, the people on the ground floor get rewarded disproportionately compared to the rest. But when was the last time a company that went public or was acquired was started by a QA person? Almost by definition they are founded by a (group of) developer(s). So you are never going to look at the annual Forbes list of wealth and see someone with 'QA/Tester' beside their name. When picking a career, what are most people going to choose? The one that tops you out at 200k/year, or one that could net you millions at IPO time? I remember wanting to learn how to write C in grade 3. If I had, who knows how different my finances would look now?
  • Career Path - This is kinda Money, part 2. What career path is someone who chooses QA/Test choosing for themselves? There are really two options: consult/contract and ladder climbing. Consult/contract is a separate bullet, so we'll concentrate on the ladder. How many CEOs of companies of recognizable size started in QA? Not one, I would bet. C* positions tend to be filled from Finance or Operations. So realistically, the ladder tops out for someone climbing it around Quality Czar. Which isn't necessarily a bad thing, but the pay cheque is missing a couple of zeros in comparison.

  • Domain micro-brand - A lot of QA/Test people go out on their own and hop from company to company once they reach a certain level of testing philosophy development. We'll call this the domain micro-brand. These people have visibility only within the testing domain, which does not help attract new people in. If you look at successful people in business, they were not there for 3 months whipping a division into shape, but often years stamping it with their identity. I would guess that there are a number of reasons for reinventing yourself in the domain micro-brand model; Money and Career Path likely factor high on the list.

  • Domain immaturity - Let's face it, QA/Test (heck, programming) is in its infancy as far as domains go. Architects, plumbers, cobblers, coopers etc. all have hundreds of years to draw upon. We have, what, 50? Tops? Until the field itself matures, its visibility will lag behind. So how do we fix that? I'm not sure.

    • Some argue that we should all agree on a certain way of doing things and get a single body to certify people in it. Of course, there are those who argue that such an effort would not really do anything. In networking, the gold standard for a while was the CCIE certification because it was damn hard to get. I'm not sure if that is still the case, but if you wanted someone who knew networking you weeded out all the resumes that did not have it. This is not going to be resolved any time soon.

    • Part of the reason for not having certifications is that there is a lot of disagreement over what knowledge should be certified. I'm guessing though that there is a common subset that could be disseminated around. What we as QA/Testers should do is work on getting that subset developed to the point where it can be taught to students at the post-secondary level. Very few colleges or universities have courses dedicated to software engineering (version control, unit tests, design patterns etc) let alone on pure Testing Fundamentals. If we could get that taught to all 2nd or 3rd year computer science students we might get more to choose this route. More bodies == more visibility and the chance of finding a Testing Hero.

  • Test as Stepping Stone - All too often there is a high churn rate of people in QA/Test as it is often seen/used as a stepping stone into development proper. I can't tell you how many times people have expressed shock when I say that I want to be in QA/Test for the length of my career and not move into development at some point. This is what I enjoy, am good at, and am passionate (hate that term) about. Type. Compile. Swear. Type. Compile. Swear. Thanks, but that's not a work cycle I want to live in.

  • Domain Fragmentation - Another thing working against us is how fragmented/specialized the Test world can be. Automation vs manual. Exploratory vs following a script. Tool A vs Tool B. Functional vs load. The list goes on and on. Specialization creates more domain micro-brands, who start to consult/contract to get more money. See how all these build on each other?

  • Personality - Let's face it, most people who enter technology are introverts. To develop a large enough profile to be a role model/draw for new people to the field will likely take some pretty meticulous and deliberate planning (and execution, of course). That is most likely to come from an extrovert who just happens to be in technology, something the statistics appear to be stacked against.

How do we get someone to that level? I hinted at it a bit earlier.

  • Start them Young - We need to seriously push our institutions of higher learning to include at least one course on testing to find those people who have the knack for it.

  • Certification - Even though there are very valid reasons not to push certification yet, I think we need to pursue it, if for no other reason than that it would give some level of credibility to the profession in the eyes of organizations.

  • Educate employers - If you work with someone with hiring responsibility, make sure that when they want a tester, they do not structure their posting as "a developer who cares about quality". They won't get what they really want, a tester, and will then likely look at people who call themselves testers in a negative light.

  • Educate the greater technology community - It would be nice to see columns on advances in testing methods and methodologies in the major development/technology magazines. With added exposure will come new thoughts and new blood to the profession.

And who is there presently who could do this? I can think of three people who have the in-community clout to possibly get outside interest in testing. Of course, this is biased based upon my exposure to the greater testing community and no doubt is different from someone else's list. Which proves my point to some degree.

Watts Humphrey: Managing the Software Process - Chapter 7 [Mar. 10th, 2006|09:05 am]

Software Configuration Management - Part 1

A smattering of random(ish) thoughts [Mar. 6th, 2006|10:05 pm]

  • From 90 Days of BzzAgent: "A small company can operate with organic and loosely-defined processes, a larger company has to bring more form and consistency to them". The context of this was HR processes, but it is also very true (if not more so) for Quality initiatives. Often these are started too late or not given enough resources to achieve their potential, especially if the company is the first one started by a group of developers (vs. pure business people).

  • SourceForge now offers Subversion access on all projects (well, if the project admin has enabled it, that is). More on Version Control Systems in a later post, but for now, here are the instructions for using it.

  • Let me say it again: the most effective way to shoot yourself in the foot when doing a process assessment is to not communicate the results to all those who were involved within a short time span. Not communicating results will be interpreted by people as meaning that management (who have the results) either does not care about the issues raised or is incompetent. Which of course may or may not be true. As soon as you have the results, publish them. Waiting 4 months after the assessment to let people know what the findings were means that momentum has been lost, and next time an assessment takes place people are going to be less enthusiastic with their participation.

  • Often you see in the various QA/Test forums requests for questions to use in an interview, and usually someone posts a couple. Were I devious and looking to hit the QA interviewing circuit, that would be a great way to get "inside" information on how people interview for QA positions. That aside, I want to start a list of bad questions, and perhaps better ways to get similar, but better, insight into how the candidate thinks. Remember, you are hiring them for how they think, not how they look in a suit. I don't even wear one to interviews for this reason. If they won't hire me because I didn't wear a suit, I don't want to work for them.

    1. Bad: Why did you choose QA/Test?
      Better: What about QA/Test makes you want to come to work every day?
      Why: "Why did you choose X?", be it QA/Test or Programming or Marketing etc., is a canned question and will get canned responses. The better variant will hopefully allow you to detect whether someone is passionate about their chosen field or just doing it for the mortgage payment.

    2. Bad: Our code is written in X; can you program in X?
      Better: Do you have any programming experience? And if so, is it in X? As I mentioned when I gave you the company overview, our product is in X.
      Why: Generally, the first programming language is the hardest to learn, and a lot of the skills are transferable. Having exposure to programming concepts is far more important than knowing the language the product is written in. It is not as if the test group is going to be writing unit tests -- that is of course the domain of the programmers.

    3. Bad: What are 3 strengths and weaknesses? (You get double the badness points if you then make a little T-chart labelled "Good" and "Bad".)
      Better: What would you say are your strong and weak points?
      Why: First off, this is another canned question, so candidates can be tricky and have a canned response to it. If they are really smart they will know how to spin all the negatives into positives. But if you must ask it, not forcing them to come up with a specific number of responses for each should (hopefully) elicit a better response. You should of course get at least one weak point.


Clausewitz and Testing [Mar. 6th, 2006|09:01 pm]

Using (okay, blatantly borrowing -- quick, how smart is that from someone who is likely a lawyer?) Rob Robinson's presentation entitled Clausewitz and eDiscovery....

Clausewitz and Testing

Who is Clausewitz?
Carl Philipp Gottfried von Clausewitz (1780-1831), Prussian soldier and intellectual

Clausewitz came from a middle-class social background, though his family claimed noble origins and these claims eventually received official recognition. He served as a practical field soldier (with extensive combat experience against the armies of Revolutionary France), as a staff officer with political/military responsibilities at the very center of the Prussian state, and as a prominent military educator.

Clausewitz first entered combat as a cadet at the age of 13, rose to the rank of Major-General at 38, married into the high nobility, moved in rarefied intellectual circles in Berlin, and wrote a book which has become the most influential work on military philosophy in the Western world.

That book, On War (in the original German, Vom Kriege), has been translated into virtually every major language and remains a living influence on modern strategists in many fields. On War serves as the basis for many of today's modern principles of war.

What is war?
War is an act of violence (physical force) to compel our opponent to fulfil our will

Testing through the lens of "war"
Testing is the process of analysis, test execution, and documentation of validation and verification defects, and the presentation of results, for a software product, in order to compel it to meet our required Quality levels.

Principles of War and their Application on Testing*

  • Principle of Objective
    War: Direct every military operation toward a clearly defined, decisive and attainable objective.
    Testing: Direct every testing process at a clearly defined, decisive and attainable objective.

  • Principle of Offensive
    War: Seize, retain, and exploit the initiative.
    Testing: Seize, retain and exploit the initiative in the analysis and test execution phases by proactively taking advantage of advanced testing technologies.

  • Principle of Mass
    War: Mass the effects of overwhelming combat power at a decisive place and time.
    Testing: Mass the effects of all available testing capabilities to deliver results at the decisive place and time.

  • Principle of Economy of Force
    War: Employ all combat power available in the most effective way possible; allocate minimum essential combat power for secondary action.
    Testing: Employ all testing resources available in the most effective way possible; allocate minimum essential testing resources for secondary action.

  • Principle of Maneuver
    War: Place the enemy in a position of disadvantage through the flexible application of combat power.
    Testing: Place the product in a position of disadvantage through the flexible application of testing resources.

  • Principle of Unity of Command
    War: For every objective, seek unity of command and unity of effort
    Testing: For every analysis or execution objective, seek unity of command and unity of effort. If at all possible, centralize testing processes under the leadership of one entity and have clear lines of delegation to subordinates.

  • Principle of Security
    War: Never permit the enemy to acquire unexpected advantage
    Testing: Never permit the product to acquire unexpected advantage through feature creep and unmonitored late additions to the codebase

  • Principle of Surprise
    War: Strike the enemy at a time or place or in a manner for which he is unprepared.
    Testing: Strike the product in a manner for which it is unprepared.

  • Principle of Simplicity
    War: Prepare clear, uncomplicated plans and concise orders to ensure thorough understanding.
    Testing: Prepare clear, uncomplicated plans and processes for all testing activities and provide concise guidance to all those involved in test execution to ensure thorough understanding.

Gives new meaning to "Attacking a product".

*as defined by the U.S. Army Field Manual 100-5, 1994 (Unclassified)

Documentation Reviews in distributed teams (or Get your Reviewers a Cool Photocopier) [Mar. 2nd, 2006|08:12 pm]

Reviewing documentation for accuracy and completeness is one of the necessary evils of testing a product. These days, documentation is typically delivered as a PDF rather than on paper (and if print copies are even available they tend to be an extra cost). The problem with electronic formats is how you communicate issues/changes.

  • A bug with a long list of things to change

    • Pro: provides an easy checklist to change and verify against

    • Pro: don't have to read a 200 page document entirely on a computer screen

    • Con: these bugs can be large and ping-pong between Test and Doc as they change bits of the bug but not the whole thing

  • A bug with an electronically marked up version of the doc

    • Pro: provides quick context for all changes

    • Con: every tester must have the correct software to edit the file

    • Con: have to read a 200 page document entirely on a computer screen

  • A bug with a scanned copy of a marked-up version of the doc

    • Pro: provides quick context for all changes

    • Pro: don't have to read a 200 page document entirely on a computer screen

    • Con: need a Cool Scanner*

As the title sorta gives away, I'm currently a fan of the last method. And it works great when teams are geographically dispersed. The list-of-changes format falls on its face as soon as a section is inserted or removed: the page references are all out of kilter and the doc writer cannot come over to your desk to clarify things. Throw time zones into the mix and you have email ping-pong. Marking up the original electronic form requires everyone to have the same version of the editing tool, and in some formats in-line editing might not be feasible or practical. Printing and marking up a paper copy retains the context of the comment as it is on/around the physical area in the document, and by scanning it what you are essentially doing is dropping the marked-up version on the doc writer's desk -- even if their desk is on the other side of the continent or ocean. And you can both have conversations about 'what did you mean by...' a month later, and you, the original submitter, have a much better chance of remembering as you can open the scanned version and be looking at the same thing.

* Our photocopier will scan a document and email it to you as a PDF. This is hella cool. I can print out the sections of a document I care about, scribble, cross out and generally mark the heck out of them (with my red editing marker), then run it all through the feeder and have something to attach to a bug submission. I only wish it could scan duplex through the feeder so I could print things out duplex. (And yes, I recycle the marked-up copy, which mitigates the 'but you are killing the innocent trees' argument to some degree.)

Section 508 [Mar. 1st, 2006|05:37 pm]

As mentioned in the previous posting on Web Design Pitfalls, a lot of those pitfalls could be lumped under the accessibility umbrella. That was a nice segue to this post, which is all about accessibility and why it should be tested for.

First, the why. It used to be that if you designed your product to be accessible, you were just doing so to be a good citizen and to reach out to a larger potential audience (which can be great for creating brand loyalty; I can't find the exact blog entry but I'm pretty sure it was by Seth Godin). Now, however, there is an even larger reason to be accessible. That reason is Section 508 of the United States Rehabilitation Act, which states:

When developing, procuring, maintaining, or using electronic and information technology, each Federal department or agency, including the United States Postal Service, shall ensure, unless an undue burden would be imposed on the department or agency, that the electronic and information technology allows, regardless of the type of medium of the technology...

Given that the US government is the largest procurer of software and software services in the world, this is huge. Naturally, the devil is in the details, and the phrase unless an undue burden would be imposed seems like the window for weaseling out of this, but if your product is in a contract bid against one that is not accessible, it appears that they are required to weigh in your favour. This is an advantage I think should be pursued aggressively.

Just like L10n though, embarking on accessibilizing (yes, I am making up words here) your product is a major undertaking, both in physical work and in (re)training your developers on how to work. Unless every person who commits code and develops requirements has accessibility in mind during their day-to-day work, you will constantly be fighting accessibility fires instead of preventing them entirely. The good news, however, is that the Rehabilitation Act has been around for a while now, so there is lots of information available on how to interpret it and suggested methods for implementing it. The best one I have found is the WebAIM Section 508 Checklist. Here is a direct copy/paste from it to illustrate its comprehensiveness:

(p) When a timed response is required, the user shall be alerted and given sufficient time to indicate more time is required.
    Pass: The user has control over the timing of content changes.
    Fail: The user is required to react quickly, within limited time restraints.

Clear and to the point.
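Checks like those in the 508/WebAIM lists can be partially automated. A sketch using Python's standard-library HTML parser to flag img tags that lack alt text, one of the best-known accessibility checkpoints (the sample page markup is invented for illustration):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect img tags that are missing an alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(dict(attrs).get("src", "<no src>"))

# Hypothetical page fragment: one compliant image, one not.
page = '<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>'
checker = AltTextChecker()
checker.feed(page)
print("images missing alt text:", checker.missing)  # -> ['chart.png']
```

Automated scans like this only catch the mechanical failures; whether the alt text is actually meaningful still needs a human reviewer, just like flaws vs bugs in the security post above.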

Some other Section 508 resources I have tucked away:

Oh, and a sigh of relief for some readers: you might notice that applications dealing with National Security are exempt from the provisions contained in the Act, so certain Authentication and Authorization product families are immune. For now.
