2.1. DESIGN CONTEXT: THE UI WAR ROOM
As a basis for the UI war room design, we used task analysis information in the form of interview write-ups. An overall user task analysis report described key commonalities and general information extracted from the individual interviews. This was a purposefully informal document, consisting of an introduction, a description of the subjects interviewed, what they are trying to do, how they are trying to do it, and what they would like (i.e., key user requests). The user task analysis report, in combination with the individual interview write-ups and our own understanding from talking to users, provided a foundation for design.
Our user population consisted of software developers working on telecommunications code in a proprietary programming language. The following characteristics describe this population:
Our design team consisted of eight software developers, a team leader, a documentation specialist, a senior user interface specialist, and myself, a junior UI designer. Most of my design work in the war room was conducted alongside one of the team's software developers, who had an interest in user interface design. Other team members played a role by contributing design ideas and making decisions that surrounded and contextualized the design occurring in the war room.
The team, as a whole, had relative freedom to pursue desired directions in both design and implementation. This meant that we had the luxury of working on high-level design for the interface as we wanted it to look some 2 years down the road, to give us a well-thought-out migration path. The product design can be characterized by:
2.2. USER REQUESTS
User requests had been parsed from the individual task analysis interview write-ups as part of the user task analysis report. On each user request, we noted the rough number of developers who desired a particular capability (e.g., "several developers asked for …," "all developers agreed …").
Examples of user requests are as follows:
Several developers indicated that they would like to see graphical representations of data structures, for both global and local data. They want to see tables, fields, and pointers to other tables (type resolution), where those are initialized, and to what. They also want to see initial values of parameters and identifiers, and where/when they are assigned. For any given data store, they also want to be able to determine who reads it, who writes it, and where it is allocated (a hedged sketch of such a cross-reference record follows these examples).
The majority of developers interviewed wanted to be able to print chunks of windows containing pertinent information. This hard copy could be used in the design/development/debug process but, more importantly, in design and development documentation and, eventually, in training documentation.
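To make the first request concrete, here is a minimal sketch, in Python, of the cross-reference record such a data-structure view would need to assemble for each data store. Every name in it is an illustrative assumption, not something taken from the task analysis report:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record of what developers asked to see for a data store;
# all field names are illustrative assumptions.
@dataclass
class DataStoreXref:
    name: str                        # the table or field, in the user's own terms
    resolved_type: str               # pointer targets resolved to concrete types
    allocated_at: str                # where the store is allocated (e.g., module:line)
    initial_value: Optional[str] = None                   # initial value, if statically known
    readers: list[str] = field(default_factory=list)      # who reads the store
    writers: list[str] = field(default_factory=list)      # who writes the store
    assignments: list[str] = field(default_factory=list)  # where/when values are assigned

# A graphical view could render one node per DataStoreXref and draw edges
# for readers, writers, and resolved pointer targets.
```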
We prioritized the user requests, using team consensus, and stuck them up on the left-hand wall of the war room. This was an informal process. We made the assumption that all of the key user requests summarized in the task analysis report were of equal value to users, and allowed the team to prioritize the items through discussion and voting. A more formal method might have ensured a closer match between user priorities and our priorities, but this method helped provide team focus and commitment, which are not to be taken lightly in a real-world context.
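As a rough illustration of how little machinery this informal process needed, a tally like the following (with made-up ballots, since the actual votes were not recorded) is essentially all it amounted to:

```python
from collections import Counter

# Made-up ballots: each team member lists the requests they consider
# most important; the tally orders the sheets on the war-room wall.
ballots = [
    ["graphical data structures", "print window contents"],
    ["graphical data structures"],
    ["print window contents", "graphical data structures"],
]

tally = Counter(request for ballot in ballots for request in ballot)
for rank, (request, votes) in enumerate(tally.most_common(), start=1):
    print(f"{rank}. {request} ({votes} votes)")
```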
2.3. USER OBJECTS
On the back wall we taped up user objects, once again one per sheet. These were objects that represented something concrete to a user, though an object may or may not have had a real-world counterpart. Each user object was labeled according to users' terminology (e.g., buddy, procedure variable, the switch). We tried to capture our knowledge about each user object on its sheet, writing down comments, definitions, and functions that acted on the object. We would expand on the definition of objects by referring to individual interview write-ups, which would in turn identify new objects to be defined.
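A minimal sketch of what one of these sheets captured might look like the following; the field names are assumptions based on the description above, not a schema we actually used:

```python
from dataclasses import dataclass, field

# Hedged sketch of a war-room sheet for one user object.
@dataclass
class UserObjectSheet:
    label: str                     # the user's own term, e.g., "buddy" or "the switch"
    definition: str = ""           # working definition, refined from interview write-ups
    comments: list[str] = field(default_factory=list)
    functions: list[str] = field(default_factory=list)  # functions that act on the object
    origins: list[str] = field(default_factory=list)    # where the object came from (Section 2.3.1)

sheet = UserObjectSheet(label="procedure variable")
sheet.comments.append("definition expanded from two interview write-ups")
```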
2.3.1. Recording the Origins of Objects
Next to each user object, we would write down where the object came from (e.g., from the user task analysis report, an individual interviewee, a previous release of software, or a design-team member). This was a successful way of capturing the relative validity of each object's existence in the interface, and the need for usability testing of the concept with users (see Table 10.1). However, just because a user object had low validity didn't necessarily mean that it belonged any less in the interface. It meant only that the object would have to be more rigorously investigated with users, to make certain that it was significant, unambiguous, and comprehensible.
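The actual validity scheme is given in Table 10.1; as a hedged sketch of the idea, one might rank origins roughly as follows (the levels and their ordering here are my assumptions, not the table's):

```python
from enum import IntEnum

# Assumed validity levels; the real scheme is in Table 10.1.
class Validity(IntEnum):
    LOW = 1     # e.g., proposed by a design-team member only
    MEDIUM = 2  # e.g., carried over from a previous release of software
    HIGH = 3    # e.g., named in the user task analysis report

ORIGIN_VALIDITY = {
    "user task analysis report": Validity.HIGH,
    "individual interviewee": Validity.HIGH,
    "previous release of software": Validity.MEDIUM,
    "design-team member": Validity.LOW,
}

def needs_rigorous_user_testing(origin: str) -> bool:
    """Low-validity objects call for more investigation with users,
    not removal from the interface."""
    return ORIGIN_VALIDITY.get(origin, Validity.LOW) < Validity.HIGH
```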