

As we began working on a second version of the product, we started from the existing interface and had to decide whether the new goals we were supporting, and the new actions and objects we needed, could be represented using the original metaphor. The first version of CNN@Work used a TV metaphor and allowed the user to change channels, change the volume, adjust the picture, record information, and bring up the stock view. Figure 9.3 illustrates the original interface.


Figure 9.3  A window similar to this was used for the opening window in CNN@Work, Version 1.

The product team initially thought we should keep the same metaphor for the main window of the new version of CNN@Work and simply display the availability of text stories. We brought in users to try out versions of this approach using paper prototypes, as well as very simple prototypes built on top of the original product. We were not successful in conveying the new functionality. Users were certain that they could watch video with this application, but they were completely unaware that text stories were also available. They had no idea they could set up any kind of individualized capture mechanism to save video stories that came across while they were away from their desks. They knew they could do timed recording in the first version, but had no idea they could record based on the content of video stories. Most of our attempts to get this information across relied on menu items and toolbar icons, a necessity given the limited space we had for the interface. One goal supported by the original product was watching the news live in the background while working on another task, which meant the interface had to be reduced to a small size that still allowed users to see the video.

Given that capturing news stories and viewing text stories were high-priority goals for the users, we felt that a drastic redesign was needed. We decided to abandon the TV metaphor and make text stories and news capture the focus of the interface. Because users had prioritized capturing news above watching TV, we wanted to make sure they could easily find this kind of functionality in the interface. Figure 9.4 shows a design for the main window that is similar to the final design.


Figure 9.4  Opening window for CNN@Work, Version 2 showing text story and capture capability.

We used several different techniques in the final design to show that text stories were available and that news could be captured. We displayed the active channel and noted what type of information was available on it. Recall that corporations using CNN@Work could choose to have their own channels, and that CNN provided several channels: HeadLine News, CNN, and CNN FN (the financial news channel). Of these, at the time only HeadLine News carried text stories. Local corporate channels might carry only text announcements and occasionally broadcast a speech by one of the corporate officers. We didn’t want users to be confused about which services were available on each channel, and displaying the active channel also gave us the opportunity to show the different types of information available on it. We discovered during design testing that users were confused about which channel or channels a filter would monitor. We needed to convey that filters applied only to the currently selected channel, and we used the label “active channel” to convey this message to users.
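As a rough sketch of the underlying idea (our own simplification; the names and types below are hypothetical, not the product’s code), a filter could simply carry a reference to the channel that was active when it was created:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    name: str          # e.g. "HeadLine News", "CNN", or a local corporate channel
    has_text: bool     # does this channel carry text stories?
    has_video: bool    # does this channel carry video stories?

@dataclass
class Filter:
    label: str
    keywords: list     # terms the filter watches for
    channel: Channel   # the channel that was active when the filter was defined

def filter_applies(f: Filter, active_channel: Channel) -> bool:
    """A filter monitors only its own channel, mirroring the "active channel" label."""
    return f.channel == active_channel
```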

We gave users a window displaying the story titles that were currently available. We could cache stories for a certain period of time before they were replaced by new incoming stories. We presented the story names in a scrolling list, which included the currently playing video story and the next video story to play. Recall that because many of the actions for text stories were the same as those for video stories, we wanted to treat the two kinds of objects in the same fashion. From the main window, users could view either kind of story (by double-clicking, or by selecting a story title and pressing a view button that was added to the final window design), set a filter based on a story title for either video or text, and save the story directly to the inbox. The one exception was the upcoming video story. We appended “playing” and “next to play” to these story titles to alert users to this exception, while still conveying that this was a list of stories available for a limited amount of time.
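A hypothetical rendering of the scrolling list (again our own sketch, not the shipping code) shows how text and video stories could share one entry type, with only the playing and next-to-play video stories receiving a suffix:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Story:
    title: str
    kind: str          # "text" or "video"

def display_titles(stories: List[Story],
                   playing: Optional[Story],
                   up_next: Optional[Story]) -> List[str]:
    """Build the titles for the scrolling list, marking the two special video entries."""
    labels = []
    for story in stories:
        if story is playing:
            labels.append(story.title + "  (playing)")
        elif story is up_next:
            labels.append(story.title + "  (next to play)")
        else:
            labels.append(story.title)
    return labels
```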

We wanted to assist users in defining filters for capturing news stories, since we had found this to be a high-priority goal. We also found that finding news was something users already did easily: they scanned newspapers for keywords or company names, listened to broadcast news for the same kinds of keywords, and looked in certain sections of the paper or tuned into certain broadcasts that they knew would provide the information they needed. During initial design testing, we experimented with different ways of having users describe filters for capturing the news and found (not surprisingly) that writing Boolean expressions was a difficult task. In our case, the problem was more severe because news stories were available for viewing and capturing only temporarily. News that is not captured is replaced by new stories, making it impossible for users to tell what, if anything, they have missed. We needed to provide feedback on the number of stories transmitted during a given time, so we added a count of the number of stories that had appeared in the story window since the user logged in. Users could also view captured stories by the name of the filter that had captured them. Thus, if a user found that several hundred stories had appeared and that his or her filter had failed to capture any of them, the user might suspect that the filter needed to be revised.
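As an illustration of the capture-and-count idea, the sketch below uses simple keyword matching (our assumption, standing in for whatever expression syntax the product actually used) to file each matching story under the filter that caught it while keeping a running count of every story seen since login:

```python
from collections import defaultdict

class CaptureSession:
    """Tracks stories seen since login and files matches by the filter that caught them."""

    def __init__(self, filters):
        self.filters = filters                   # {filter_name: [keyword, ...]}
        self.stories_seen = 0                    # count shown in the story window
        self.inbox = defaultdict(list)           # filter_name -> captured story titles

    def on_story(self, title, body):
        self.stories_seen += 1
        text = (title + " " + body).lower()
        for name, keywords in self.filters.items():
            if any(kw.lower() in text for kw in keywords):
                self.inbox[name].append(title)

# Example: a user who has defined one filter and then steps away from the desk.
session = CaptureSession({"Acme Corp": ["acme", "widget"]})
session.on_story("Acme posts record earnings", "Acme Corp reported ...")
session.on_story("Weather update", "Rain expected ...")
print(session.stories_seen)          # 2 stories have come across since login
print(session.inbox["Acme Corp"])    # ['Acme posts record earnings']
```

A count like `stories_seen`, viewed next to an empty inbox entry, is what lets the user conclude that the filter itself, rather than a quiet news day, is the problem.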

