Documenting Requirements — The Tools You Will Need to Be Successful

November 2, 2010

I wrote previously about an approach that I have used to manage requirements.  Subsequently, I have been asked to elaborate on the tools and rationale, so here goes…

As previously discussed, “requirements” consist of a wide variety of inputs from numerous internal and external stakeholders.   Yet all requirements suffer from two problems — (1) it is impossible to address all of your open requirements in a given software release, and (2) things change.

So, that very carefully defined requirement from that key customer that was documented in year 1 may be significantly different from what that same customer needs in year 2 (or year 3).  You obviously don’t want to spend time developing something that doesn’t meet a business need, and you also don’t want to lose any of the knowledge about that requirement that you have collected from this customer (and others).  Lastly, if you are going to reconnect with a customer to review the details of a particular business need, you MUST have a good understanding of what they have said before, or they will dismiss you as a waste of their time, which is not helpful on many fronts.

In other words, for every requirement you need some method to gather, track, and maintain its “definition” and the features that you will need to address it over time.  A document management system (such as Alfresco) is an ideal tool for this purpose.

You may ask — “but why not use a source code control system like Subversion or CVS?”   The simple answer is “they are not the right tools.”   Source code control systems are designed to help multiple people work on source code files whose changes often overlap with one another.   Thus, these systems have sophisticated tools for identifying the differences between two versions of the same source code file and allowing them to be merged together.  In my experience, these tools do not work well with documents — especially Microsoft Word documents — and can even corrupt the formatting and content in some cases.  In addition, source code control systems typically require each user to maintain a local copy of the repository that must be synchronized on a regular basis to ensure that each user is always looking at the most current version of a document — which is counter-intuitive, at best, for most non-Engineering personnel.

By contrast, a document management system is designed for editing, sharing, and managing “documents”.  It uses a simple check-in and check-out model that allows users to view any accessible document but requires all updates to be made through an exclusive-write approach.  A document management system also provides tools for managing the maturity of each document so that preliminary feature ideas can be shared with some people and more mature feature definitions can be shared with others.   (Have you ever had a situation where a very early version of an idea was sold to a new customer by an overly aggressive salesperson?  If so, then you should appreciate the ability to control when requirement and feature information is shared with different audiences.)
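To make the check-in/check-out model concrete, here is a minimal Python sketch of the exclusive-write behavior described above. The class and method names are invented for illustration, not any particular document management system’s API: anyone can read at any time, but only the user holding the check-out can save a new version.

```python
class DocumentLockedError(Exception):
    pass


class ManagedDocument:
    """Toy model of exclusive check-out/check-in (not a real DMS API)."""

    def __init__(self, name, content=""):
        self.name = name
        self.versions = [content]   # version history
        self.checked_out_by = None  # user currently holding the lock

    def read(self):
        # Anyone may view the latest version at any time.
        return self.versions[-1]

    def check_out(self, user):
        # Only one user may hold the document for editing.
        if self.checked_out_by and self.checked_out_by != user:
            raise DocumentLockedError(
                f"{self.name} is checked out by {self.checked_out_by}")
        self.checked_out_by = user

    def check_in(self, user, new_content):
        # Updates are accepted only from the user who checked the document out.
        if self.checked_out_by != user:
            raise DocumentLockedError(
                f"{user} does not hold the check-out on {self.name}")
        self.versions.append(new_content)
        self.checked_out_by = None


doc = ManagedDocument("Requirement-042.doc", "Initial customer need...")
doc.check_out("alice")
print(doc.read())  # anyone can still read while alice edits
doc.check_in("alice", "Revised definition after the year-2 customer visit")
```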

In other words, a document management system is the best tool for “managing documents” whereas a source code control system is the best tool for “managing source code”.  (Apologies if this seems redundant.)

You also need a method to plan a software release that will (hopefully) include numerous features that each address one or more requirements.  You need a method to select which features will be included in each release and a way to obtain estimates from Engineering on what it will cost to develop each feature.  You also need the ability to iterate the definition of each release numerous times — adding or subtracting different features — trying to get the “perfect” balance between features, schedule, and Engineering capacity.

You also need to capture all of the assumptions and dependencies that Engineering identified while creating their estimates, for two very important reasons:  (a) Engineering spent time (and money) creating these estimates, and if a particular requirement isn’t addressed now, it may be later.  You don’t want to pay twice for the same thing. (b) If you know how each estimate was derived, you are much better able to verify previous estimates or anticipate problems with current estimates, based upon their assumptions and dependencies — even if your Engineering personnel change over time.

You could plan each software release using a spreadsheet — but why would you want to?   Do you really want to continually update the spreadsheet, circulate it to numerous internal people, and then consolidate all of their inputs?   And even if you want to operate this way — do you really have the time?  Or are you assuming that you will get each release planned perfectly the first time?

This is where an issue tracking system (such as Jira) can help keep you sane.  You can use a field (such as “Specification”) to include a link to the relevant requirements document within your document management system.  You can use another field (such as “Target Release”) to designate the features that you want Engineering to review, and have them record their estimates and assumptions in other fields — based upon the information on the feature and its underlying requirement(s) maintained in your document management system.  So when you generate a report (or Excel export) for a given software release, you will be able to immediately determine whether you have under- or over-defined the release based upon your Engineering capacity.
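As one illustration of that last step, here is a minimal Python sketch that totals Engineering estimates from a CSV export of your tracker and compares them against capacity. The column names (“Target Release”, “Estimate (days)”), the file name, and the capacity figure are assumptions for the example, not standard Jira fields; adjust them to match your own export.

```python
import csv

CAPACITY_DAYS = 120.0    # assumed Engineering capacity for this release
TARGET_RELEASE = "4.2"   # the release being planned


def release_load(export_path):
    """Sum the estimates of all features tagged for the target release."""
    total = 0.0
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Target Release") == TARGET_RELEASE:
                total += float(row.get("Estimate (days)") or 0)
    return total


if __name__ == "__main__":
    load = release_load("release_export.csv")
    print(f"Planned work: {load:.1f} days of {CAPACITY_DAYS:.1f} available")
    if load > CAPACITY_DAYS:
        print("Release is over-defined: cut features or extend the schedule.")
    else:
        print(f"Headroom remaining: {CAPACITY_DAYS - load:.1f} days")
```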

You are almost done — all you need now is a “plan” for this release.   If you are using the waterfall method, I suggest creating a project plan with Microsoft Project using the feature, dependencies, and estimate information from your issue tracker.   Update it regularly (I suggest weekly) and provide copies to all team members.  (I suggest using a PDF print driver so that you can easily share the project plan with everyone without requiring them to have access to Microsoft Project.)

If you are using Agile, I suggest using the GreenHopper plug-in for Jira — or something similar — to plan each sprint.

That’s all for now.  I hope this has been helpful.   As always, comments or questions are welcome.


Mobile platforms and business — part 2

August 7, 2010

Sometimes you write a blog entry where your point is quickly understood.  And then there are those “others”…

My earlier post on mobile platforms and their use by businesses probably falls into the second category… But I still think the point is worth discussion, so here we go again…

Many companies issue smartphones or provide the back-end infrastructure to enable their employees to use their favorite Blackberry, iPhone, Android, or Windows Mobile device to manage their email, contacts, and schedules.  I think we can all agree on this point.

However, this is just a small percentage of the business opportunities that exist for mobile devices.  Consider two examples:

  • I was recently visiting a friend in a major Boston-area hospital who was scheduled to have some surgery performed.  In the 30×30 foot room there were provisions for 6 patients — and each patient area included 2 laptops.  In addition, there were 3-4 laptops on mobile carts that were wheeled from one patient area to another to record data collected by medical professionals.  And on this floor, there were over 10 additional rooms just like this one.  In other words, a single floor in this hospital had approximately 160 laptops — and each laptop’s sole purpose was to provide access to the back-end servers.  The laptops were nothing more than terminals.  Imagine how many laptops must exist in the entire hospital.
  • Last month I had the “opportunity” to have my car repaired and spent some time chatting with one of the technicians.  He said that they spend several hours each week in e-learning courses which are often very frustrating to watch.  Apparently each video course is on a laptop in a room far away from the cars — so it is difficult to connect the instructions in the video with the relevant parts in the car.  He said it would be much easier if he could view the video while he was actually under the car.

These examples have a few things in common:  (1) Windows laptops are the primary device being used, (2) only a very small portion of the laptop’s capabilities are being used, and (3) the portability of the device is very important.

Thus I ask — What prevents a company from using an iPad (or similar device) instead of the much more expensive Windows laptops?

I think the answer is “infrastructure”.

Companies can only afford to support a certain number of devices, because each new device requires its own set of back-end support software, trained support personnel, spare parts, etc.  Thus, Windows laptops are often used in situations like this because the company already has many years’ experience supporting Windows computers, including laptops.

Thus, as the consumer market for mobile devices continues to get more and more crowded, I believe that device manufacturers will add more capabilities designed to make their devices acceptable in business environments.   In fact, I don’t believe they have any choice.  Consumers are notoriously price-conscious, and manufacturers must achieve a certain profit level to stay in business.   Therefore, manufacturers will work to make their mobile devices acceptable to businesses, but only a few of them will be successful.

The key question then becomes “Which mobile devices will businesses embrace?”

Comments?


Which mobile platforms will businesses embrace?

July 31, 2010

Each day it seems like there are 10 new applications for your iPhone, iPad, Droid-phone, etc.  And as you might expect, Windows-based tablets are due out later this year (2010), with a bigger push early next year, according to Steve Ballmer of Microsoft in a recent CNET article.

Then there is the continued growth of e-readers from a variety of sources such as Amazon and Barnes and Noble, with some people projecting that they will replace paperback books in the near future.

And while each new platform may be attractive to one or more market segments of consumers — the real opportunity may be the level of adoption of a platform by businesses.

If you are a company like Airbus, Boeing, John Deere, Whirlpool or any other large manufacturing company, you have many thousands of employees and suppliers.  You depend on digital devices to communicate product design information and to facilitate the collaboration of your personnel — especially across geographies.   Today, the primary device is a personal computer, usually a Windows-based laptop.

But as we all know, laptops have their limitations and may be overkill in a number of situations.  For example, if you want to know whether a part is positioned correctly inside an airplane fuselage, you may want to refer to a digital image of the drawing (or 3-D model).  Trying to hold your laptop in one hand while you position the part with the other hand is awkward at best, especially if you have a large laptop.

What type of device is best (and most cost-effective) for this type of situation?

A laptop?  An iPad?  An e-reader?  Or perhaps just a Smartphone?   Many people would argue that “any of these might work, depending on the situation”.

However, since sharing digital data within a corporate environment requires secure high-speed communications, I believe the bigger question is “How many of these digital platforms will businesses decide to support?”

And given the cost of deploying internal networks with ever-increasing bandwidth and the costs of distributing and maintaining each platform — I believe that devices that can easily connect to existing internal networks and can quickly download content from existing servers through web-based clients are most likely to be adopted by companies.

In other words, I think that the corporate “hand-held device” market could easily be dominated by Microsoft — IF they are able to produce high-quality, high-performing, and easy-to-use products and get them to market in sufficient quantities in a timely manner.

What do you think?


Would you trust your cloud provider to provide anti-malware protection?

July 4, 2010

The popularity of cloud services continues to grow.  The key players are companies like Google, Amazon, and Microsoft, each of which is spending a ton of money improving and promoting their services.

For companies who do not have the ability (or desire) to build and support their own web-based applications, a cloud-based architecture offers many advantages, including:

  • Scalability
  • Integrated database services
  • User authorization and permission services
  • Pay for what you need

However, there are a number of risks with a cloud-based application, including:

  • Large, visible, popular systems can attract malware attacks from bad guys.
  • Proprietary services make it difficult to change cloud providers.
  • Little visibility into how the internal services actually work and even less control.

And now there are a number of advocates for anti-malware capabilities in the cloud, including Phil Wainewright’s recent column and John Viega’s latest book, “The Myths of Security: What the Computer Security Industry Doesn’t Want You to Know”.   Their argument is pretty simple — the threats are in the cloud, so that is where the threat protection should be.  Frankly, I think their argument makes a lot of sense.

However, if your company has acquired cloud services from one of the key players — THEY decide what services are available to you.  And given the history of these companies and their propensity to build their own capabilities, it is very likely that they would develop their own anti-malware tooling and integrate it into their cloud platforms.

Thus the key question — Would you trust your valuable data and the future potential of your company to a cloud platform provider who has little, if any, history in successfully protecting its users from malware attacks?

I think that this issue has the potential to severely impact the growth of cloud-based computing, but will probably be ignored until the first major breach of a cloud — then the lid will come off and the finger-pointing will begin.

What do you think?


Evolving from service-oriented s/w products to a common product platform

June 16, 2010

A potential client wanted advice on evolving their service-oriented software products to a common product platform. Their plan was to revise their application using a popular cloud computing toolkit, migrate all of their customers to this new product version, and retire their current offerings.  So, with all of the attention that cloud computing is getting these days, I thought I would share some of what we talked about and see what your thoughts are…

First, two quick definitions so that we can start on the same page:

Service-oriented products — You host customized versions of your software for each of your customers, typically using a single instance for each customer.  As a result, you may even have situations where different customers are using different versions of your software.   Since each customer’s data resides on a different system, the risk of co-mingling data between customers is low.

Common product platform — You host a single generic (or common) version of your software that is shared among multiple customers and configured to provide them with individualized capabilities and user interfaces.  You use enhanced security mechanisms to ensure that data from your customers is not co-mingled.  Once a user logs into your system, you use their login credentials to present them with a customer-specific user experience.
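A minimal Python sketch of the common-platform idea, using invented names rather than any specific product: a single shared code base resolves each login to a tenant-specific configuration and scopes every data access by tenant, so customers share one instance without sharing data.

```python
# Toy multi-tenant lookup: one shared instance, per-customer configuration,
# and data access that is always filtered by the caller's tenant.
TENANT_CONFIG = {
    "acme":   {"theme": "blue",  "modules": ["quotes", "orders"]},
    "globex": {"theme": "green", "modules": ["orders", "analytics"]},
}

RECORDS = [
    {"tenant": "acme",   "id": 1, "title": "Pump housing redesign"},
    {"tenant": "globex", "id": 2, "title": "Q3 demand forecast"},
]


def handle_login(username, tenant_id):
    """Use the login credentials to select the customer-specific experience."""
    config = TENANT_CONFIG[tenant_id]
    return {"user": username, "tenant": tenant_id, "ui": config}


def fetch_records(session):
    """Every query is scoped by the session's tenant, never left to trust."""
    return [r for r in RECORDS if r["tenant"] == session["tenant"]]


session = handle_login("jdoe", "acme")
print(session["ui"]["theme"])   # customer-specific UI settings
print(fetch_records(session))   # only acme's records are visible
```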

The key advantages of a service-oriented product approach are that (a) each customer gets exactly what they need, (b) data co-mingling between customers is nearly impossible, and (c) you can roll out additional capabilities to selected customers very easily — which makes it easy to do specialized testing and to write contracts for individualized capabilities.

However, service-oriented products also have (a) increased maintenance, development, and upgrade requirements because everything must be applied to each implementation individually, (b) severe scalability issues as you get more and more customers — each with their own implementation, and (c) escalating cost issues because each new customer is a new installation without any of the economies of scale that normally come with commercial software.

Thus, it’s easy to see why many software companies start by offering their products as a service and then want to evolve to a common product platform over time.

However, this “evolution” also brings a number of key challenges, as follows:

  • Product Capabilities and Complexity — It is pretty easy to create a software product that can be customized.  The Development team removes the standard routine, plugs in the new, highly customized routine, and voilà, the product now does something significantly different.  “Plug-ins” like this have been used for years in a wide variety of applications.  However, creating a framework that allows multiple plug-ins to be executed based upon the user’s profile is much more challenging to architect, develop, test, and deploy.  You also have to identify all of the places in the framework where a plug-in is necessary.   Lastly, since multiple customers will be sharing the same software instance, your framework has to be sufficiently robust to allow a plug-in for one user to fail gracefully without impacting any of the other users on the system.  (A minimal sketch of this idea follows this list.)
  • Company Culture — If your company has been selling highly customized product versions for a while, you have an entire culture devoted to this concept.  So just imagine the conversation when a key sales rep wants “just one more” highly customized product version so that they can bring in a huge order near the end of your fiscal year.  And of course, how will your company handle a number of “just one more” situations?
  • Customer Migration — Even if the capabilities of your common product platform encompass all of the capabilities from all of your customer-specific product versions AND you have migrated all of your customers’ data into the new system, migration will still be an issue for your customers.  At the very least, they will have to retrain all of their users and their system administrators will have to understand how the platform has been configured for them.
  • Product Development Process — Every software product fails at some point in its life.  With a service-oriented product, the failure is normally limited to a single customer implementation.   With a common product platform, a failure will most likely affect multiple customers.  Thus, you may need to re-evaluate a number of aspects of your product development process to ensure that failures of this type are caught during the design, development, and testing of the system.  In some cases, you may need personnel with different skills and experience than your current staff so that you can prevent problems before they occur.
  • Product Architecture — Every “technology stack” that is used to develop and deploy a software application has a certain set of capabilities and a number of limitations.  This is especially true with cloud computing platforms, where many of the architecture elements of the system are provided by others and cannot be changed by you.   Thus, you need to verify that your common product platform can be implemented using the technology stack that you have selected (or that your cloud computing partner provides).  You should also cross-check your planned product roadmap against the technical limitations.  You don’t want to spend a ton of money (and time) building a common product platform using a new technology stack only to find out that you will reach a dead end in 3 years — or just about the time you get all of your current customers happily (you hope) using your new platform.
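Here is the plug-in sketch promised in the first bullet above: a minimal Python illustration (invented names, not a production design) in which the plug-in is chosen from the user’s profile and each call is isolated so a failing customization degrades gracefully for that user without affecting anyone else on the shared instance.

```python
# Toy plug-in framework: plug-ins are selected per user profile, and every
# call is wrapped so one customer's broken customization cannot break others.
PLUGINS = {}


def register(name):
    def wrapper(func):
        PLUGINS[name] = func
        return func
    return wrapper


@register("standard_pricing")
def standard_pricing(order):
    return order["list_price"]


@register("acme_volume_pricing")
def acme_volume_pricing(order):
    discount = 0.10 if order["quantity"] >= 100 else 0.0
    return order["list_price"] * (1 - discount)


def price_order(order, user_profile):
    """Run the plug-in named in the user's profile, falling back on failure."""
    name = user_profile.get("pricing_plugin", "standard_pricing")
    try:
        return PLUGINS[name](order)
    except Exception as exc:  # fail gracefully for this user only
        print(f"plug-in {name!r} failed ({exc}); using standard pricing")
        return standard_pricing(order)


order = {"list_price": 50.0, "quantity": 120}
print(price_order(order, {"pricing_plugin": "acme_volume_pricing"}))  # 45.0
print(price_order(order, {"pricing_plugin": "broken_plugin"}))        # falls back
```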

In summary, evolving from a service-oriented software product to a common product platform can be very challenging.  However, the rewards are worthwhile and you can reduce much of your pain if you (a) take the time to create good quality plans, (b) consistently execute your plans, and (c) clearly communicate among all of the internal participants.

Finally, your customers are your most important asset.  So while you are developing your common product platform, you MUST continue to aggressively support their customized versions and help them see the benefits they will derive from the common product platform once it is available.  You don’t want to EVER announce that you are getting out of the customized software business in favor of a not-yet-available product platform and hope that your customers (and your competitors) will wait patiently for you to execute on your vision — because they won’t.

Just ask all of those customers who used to buy Prime minicomputers how “patient” they were when Prime announced they were going to minimize their hardware efforts in favor of software…


Do we finally have the ability to have true “international” products?

May 24, 2010

Did you see the recent article about Google making its real-time translation tool available for the Android phone?  If not, you can read it here.

While this implementation will undoubtedly be fun for a number of cell phone users, imagine the impact if a number of enterprise software companies incorporate this technology into their products.

We have long talked about “localized” software products that are available in a number of single languages.  In other words, users in France interact with the software in their native language — French.   We also talk about “internationalized” software products that display numbers and other elements using the format that is appropriate for each user — which is important when users from different locales are sharing the same application.
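As a small illustration of the “internationalized” point, here is a sketch using the Babel library (an assumption on my part; any locale-aware formatting library would serve) to present the same number and date to users in different locales:

```python
from datetime import date

from babel.dates import format_date
from babel.numbers import format_decimal

value = 1234567.89
released = date(2010, 5, 24)

# The same data, rendered according to each user's own conventions.
for locale in ("en_US", "fr_FR", "de_DE"):
    print(locale,
          format_decimal(value, locale=locale),
          format_date(released, locale=locale))
```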

However, we have not yet seen applications where free-form text entered by one user in one language can be read and understood by another user in a different language.  Google’s tool offers the promise of this capability.

Granted, any “automatic translation” tool will have problems in accurately conveying the content, context, and subtleties of a language.  And I suspect this problem will be especially difficult when translating between languages that have different origins (for example, English and Japanese).   However, as more applications incorporate this type of technology, more investment (and resulting technology improvements) will follow.
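To show where such a capability would plug in, here is a hedged Python sketch of translating a free-form comment field through an external translation web service. The endpoint, parameters, response shape, and API key shown are assumptions (modeled loosely on the Google Translate REST API of that era); substitute whatever service and credentials you actually use.

```python
import requests

TRANSLATE_URL = "https://www.googleapis.com/language/translate/v2"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                                            # hypothetical key


def translate(text, source_lang, target_lang):
    """Send one free-form field (e.g., a supplier's comment) for translation."""
    resp = requests.get(TRANSLATE_URL, params={
        "key": API_KEY,
        "q": text,
        "source": source_lang,
        "target": target_lang,
    })
    resp.raise_for_status()
    # The response shape is an assumption; adjust it to the service you use.
    return resp.json()["data"]["translations"][0]["translatedText"]


comment_fr = "La tolérance sur ce perçage semble trop serrée."
print(translate(comment_fr, "fr", "en"))
```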

Consider the following situations where this technology would literally “change the rules”:

  • PDM, PLM, ERP Systems — Product descriptions, instructions to suppliers, questions on design intent from part manufacturers, etc.  In today’s global environment where a supply chain may include companies in multiple locales, the ability for each member of your supply chain to communicate in their language could significantly improve your ability to design and produce quality parts.  It also makes it easier for you to find additional suppliers because they no longer have to communicate in your language.
  • Twitter, Blogs, and Other Social Media Tools — You could interact with customers and potential customers in their language.  This would allow you to review a product idea with potential customers in a wide variety of locales and thus avoid problems like giving your new product a name that means “toilet” in another language.  And wouldn’t you like to know whether a product is interesting to a particular locale before you pay to translate the product’s screens, documentation, marketing, and sales materials into that language?  I know I would.
  • CRM and Other Customer Feedback Tools — As with the social media tools, imagine how much your customer service might improve if you could actually exchange information with your current customers in their language.  Today, companies rely on small teams of customer support specialists with multiple language skills.  However, this approach can introduce another layer of “interpretation” between the customer who is describing the problem and the Development engineer who has to fix it — which increases the difficulty (and cost) of providing timely bug fixes.  How much could you improve your service and reduce your costs if reliable and accurate real-time language conversion was available?

There are many other categories of commercial (and even military) applications that could benefit from this technology.  What is your favorite?  Or perhaps this problem is too complicated to be solved by any technology, even one from the gurus at Google.   What do you think?


Is a Product Manager responsible for preventing corporate identity theft?

May 19, 2010

As product managers, we are responsible for identifying and prioritizing the features for our products.  Then we spend time with Development to create new capabilities and with Marketing, Sales, and customers extolling the benefits that these capabilities will provide.

One of our key assumptions is that each new capability is valuable to some (and hopefully, all) of our customers and potential customers.

However, in today’s litigious environment, shouldn’t product managers also be concerned with protecting their own company?

In an enterprise software application, a wide variety of users perform various functions, including many that could expose their company to risks.  For example, new product designs are reviewed and approved in Product Lifecycle Management (PLM) systems, raw materials are acquired in ERP systems, and confidential legal matters are managed and reviewed in document management systems.

Each of these systems has one thing in common — it is possible for an “authorized user” to perform an action that could put their company in a compromising situation.  And if someone has stolen this user’s identity and a terrible situation results — do you think that the affected company will attempt to hold the enterprise software vendor responsible?

Of course they will.

If today’s legal system allows the manufacturer to be held liable when someone falls off the top of one of their step ladders, it will be easy to hold the enterprise software company responsible when the strategy of a defense team in a high-profile legal matter is downloaded and released to the public by someone who guessed the lead counsel’s password.

And what about a situation where a new part in an automobile is approved by a well-meaning administrative assistant who misunderstood which part their boss said to approve?  When this approved part is later discovered to be faulty and results in the deaths of a number of people, don’t you think that the company will complain that it was “too easy” for the assistant to pretend to be their boss and that the software should have done a better job of detecting and preventing this action?   Unfortunately, the answer is probably “yes”.
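As a sketch of the kind of safeguard a product manager might prioritize for scenarios like these (invented names, illustrative only): before a high-impact action such as approving a part, the application re-authenticates the user and requires them to re-type the exact identifier of the item being approved, then writes an audit record.

```python
# Toy guard for a high-risk action: re-authenticate and force the approver to
# confirm exactly what they are approving. Names are hypothetical.
class ApprovalRejected(Exception):
    pass


def approve_part(part_id, user, reauth_password, typed_confirmation, auth_check):
    """Approve a part only after a fresh credential check and explicit confirmation."""
    # 1. Fresh re-authentication, so a borrowed or stolen session is not enough.
    if not auth_check(user, reauth_password):
        raise ApprovalRejected("re-authentication failed")
    # 2. The approver must re-type the exact part number they intend to approve.
    if typed_confirmation.strip() != part_id:
        raise ApprovalRejected(
            f"confirmation {typed_confirmation!r} does not match {part_id!r}")
    # 3. Record who approved what, and when, for later audit.
    print(f"Part {part_id} approved by {user} (audit record written)")


def demo_auth(user, password):
    # Stand-in for a real credential check against your identity system.
    return (user, password) == ("lead.engineer", "correct horse battery staple")


approve_part("BRK-2041-C", "lead.engineer",
             "correct horse battery staple", "BRK-2041-C", demo_auth)
```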

Thus, I believe Product Management needs to expand the criteria that we use to evaluate potential product capabilities, as follows:

  1. Riskiness — Capabilities that the product absolutely must have to protect the viability of “our” company.
  2. Must Have — Capabilities that the product must have in order to be effective in the market place.
  3. Should Have — Capabilities that the product should have, but are not required.
  4. Nice to Have — Capabilities that we would like to have, but are not required.

This approach will result in more time being spent resolving “Risk” issues and less time on user-visible features that could generate revenue — which could also threaten the continued existence of the software company.  Oh joy!!   Yet another trade-off for Product Management to balance.

So, what do you think?  Does this topic ever come up in your product planning?  Or does your company depend on the fine print of your license agreements to protect itself?

