Once we’ve decided upon a particular design direction or metaphor, we need to work on the precise representation of the actions and objects (note that we’ve already factored in the feasibility of these representations when selecting among design possibilities), and we need to design the new user tasks and subtasks. We use the prioritization of goals to determine the amount of visibility and access for tasks and subtasks. We use the facilitators and obstacles in the users’ current environments to evaluate the new possibilities for tasks and subtasks. At times, automating tasks introduces more obstacles than exist in the users’ current tasks. We must, therefore, make sure that users can see reduced obstacles in later tasks and that these reduced obstacles provide a sufficient reason to use the product. As we design these tasks and subtasks, we continually bring in users to test our designs. We use a combination of high- and low-fidelity prototypes, depending on the type of information we need to get from a user. If we are only interested in whether users can find the path needed to perform a task, we will probably use paper screens with paper pull-down menus. If we need to evaluate task details, we will produce a high-fidelity prototype of at least that portion of the product.

We use general design guidelines and platform-style guidelines as design work progresses. We do not have corporate style guides for new products as most of these products have very novel interfaces. We do keep track of other products being developed and try to be consistent with designs already used for similar functionality. In some cases, we use this opportunity to learn from any earlier design mistakes.

As our design progresses, new tasks, and hence new actions and objects, are added. These must all be evaluated with respect to how well they fit within the proposed design and with respect to the goals they support. When tradeoffs need to be made, we use goal prioritization to guide our user-centered decision making: the tasks supporting the most important user goals must be streamlined, even at the cost of lesser goals. The Systematic Creativity framework allows us to easily check the prioritization of goals.
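As a minimal sketch of how prioritized goals can drive such a tradeoff decision, the example below keeps goals in a simple ranked structure. The goal names and priority numbers are invented for illustration; they are not the project's actual rankings.

```python
# Illustrative only: the goal names and priority ranks below are
# invented, not the actual prioritization from the CNN@Work project.
from dataclasses import dataclass


@dataclass(frozen=True)
class Goal:
    name: str
    priority: int  # 1 = most important


GOALS = [
    Goal("Goal A (e.g., produce basic filters quickly)", 1),
    Goal("Goal B (e.g., share news with others)", 2),
    Goal("Goal C (e.g., view news in the background)", 3),
]


def resolve_tradeoff(goal_x: Goal, goal_y: Goal) -> Goal:
    """When two goals conflict, streamline the task supporting the
    higher-priority (lower-numbered) goal."""
    return min(goal_x, goal_y, key=lambda g: g.priority)


print(resolve_tradeoff(GOALS[0], GOALS[2]).name)
```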

At the more detailed levels of design, actions are given initiators, and feedback is defined to show the results of an action on an object. An initiator defines how users will perform an action; for example, a user could select a menu item or a toolbar icon, or the action could be automatic. Does the same initiator work for all objects on which the action is performed? Feedback indicates to the user the results of the action. Is the same feedback appropriate for that action on all objects? Our framework is continually updated as new actions and objects are added, and we can see which actions are used on each object simply by sorting the spreadsheets as needed. This helps us make decisions about initiators and feedback at a global level; that is, will changing an initiator for one action on one object work for all other objects on which that action is performed? By taking sets of actions and sets of objects into consideration, we can design both terminology and visual feedback consistently across the product.
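As a rough sketch of this bookkeeping, the rows below model the spreadsheet as (object, action, initiator, feedback) tuples. Every value in the rows, including the initiator and feedback strings, is an invented example rather than part of the actual design.

```python
# Sketch of the action/object bookkeeping described above, modeled as
# spreadsheet-like rows; the field values are illustrative assumptions,
# not the actual CNN@Work design.
from collections import defaultdict

# Each row: (object, action, initiator, feedback)
rows = [
    ("text story",  "delete", "toolbar icon", "story removed from list"),
    ("video story", "delete", "toolbar icon", "story removed from list"),
    ("filter",      "delete", "menu item",    "filter removed from list"),
    ("text story",  "view",   "double-click", "story opens in viewer"),
]

# "Sort the spreadsheet": group rows by action to see every object
# that action applies to, along with its current initiator.
by_action = defaultdict(list)
for obj, action, initiator, feedback in rows:
    by_action[action].append((obj, initiator, feedback))

# Global check: would changing the initiator for one action on one
# object be consistent with the other objects that share the action?
for action, uses in sorted(by_action.items()):
    initiators = {initiator for _, initiator, _ in uses}
    if len(initiators) > 1:
        print(f"'{action}' uses mixed initiators: {sorted(initiators)}")
```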

5.3.1. Product Design for CNN@Work

The user goals included:

  Selectively filter text and video stories on keywords and text.
  Refine filters based on feedback.
  Obtain feedback on the number of stories captured (to get information useful in refining filters).
  View captured news and save stories in any filing system (the ability to export data).
  Share news with others in organization.
  Capture news that occurs at a particular time.
  Produce basic filters easily and quickly.
  View news in the background while doing other PC work.

Marketing goals included:

  Display that text stories are available.
  Give customers (not users) the ability to monitor the system.
  Give customers the ability to create an internal channel for distributing company talks, news, and training.
  Give users information about what types of information (text, audio, video) are available on each channel.

Engineering informed us that not all channels from CNN would have video, text, and audio information. Also, customers defining their own channels could choose which types of information would be broadcast. Therefore, we identified a new goal: users should be able to quickly determine what type of information would be broadcast on each channel.

We had identified actions and objects that needed to be represented in our interface. The objects identified, along with a subset of actions on them, included:

  Text stories — view, save, export, delete.
  Video stories — view, save, export, delete.
  Filters — view, create, modify, delete, activate, deactivate (but not save).
  Channels — switch, view types of information available.
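This subset can be recorded in the same object-by-action form used by the framework spreadsheets. The sketch below is illustrative only; the dictionary layout is an assumption made for this example rather than how the project stored its data, although the actions listed are the ones named above.

```python
# The object/action subset above, recorded as one row of actions per
# object; a sketch only, using the actions listed in this section.
actions_by_object = {
    "text story":  {"view", "save", "export", "delete"},
    "video story": {"view", "save", "export", "delete"},
    "filter":      {"view", "create", "modify", "delete",
                    "activate", "deactivate"},
    "channel":     {"switch", "view types of information available"},
}

# Invert the table to see every object an action applies to: the
# question asked when deciding whether one initiator and one kind of
# feedback can serve an action everywhere it appears.
objects_by_action = {}
for obj, actions in actions_by_object.items():
    for action in actions:
        objects_by_action.setdefault(action, set()).add(obj)

for action, objs in sorted(objects_by_action.items()):
    print(f"{action}: {sorted(objs)}")
```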

