
Chapter 8: Narrowing Your Focus to Analyze Specific Business Requirements

Certification Objectives
Identifying Roles
Identifying the Impact of the Solution on the Existing Security Environment
Establishing Fault Tolerance
From the Classroom...
Planning for Maintainability
Planning the Distribution of a Security Database
Establishing the Security Context
Planning for Auditing
Identifying Levels of Security Needed
Analyzing Existing Mechanisms for Security Policies
Transactions per Time Slice Considerations
Bandwidth Considerations
Capacity Considerations
Interoperability with Existing Standards
Peak versus Average Requirements
Response-Time Expectations
Existing Response-Time Characteristics
Barriers to Performance
Breadth of Application Distribution
Method of Distribution
Maintenance Expectations
Maintenance Staff Considerations
Impact of Third-Party Maintenance Agreements
Handling Functionality Growth Requirements
Hours of Operation
Level of Availability
Geographic Scope
Impact of Downtime
Target Audience
Localization
Accessibility
Roaming Users
Help Considerations
Training Requirements
Physical Environment Constraints
Special-Needs Considerations
Legacy Applications
Format and Location of Existing Data
Connectivity to Existing Applications
Data Conversion
Data Enhancement Requirements
Legal Issues
Current Business Practices
Organization Structure
Budget
Implementation and Training Methodologies
Quality Control Requirements
Customer's Needs
Growth of Audience
Organization
Data
Cycle of Use
Two-Minute Drill
Answers to Exercise 8-1
Answers to Exercise 8-2

 

Certification Objectives

·       Analyzing Security Requirements

·       Analyzing Performance Requirements

·       Analyzing Maintainability Requirements

·       Analyzing Extensibility Requirements

·       Analyzing Availability Requirements

·       Analyzing Human Factors Requirements

·       Analyzing Requirements for Integrating a Solution with Existing Applications

·       Analyzing Existing Business Methodologies and Limitations

·       Analyzing Scalability Requirements

In this chapter, we’ll take a closer look at the requirements of the business, customer, and end user. Though it may seem from the previous chapter that determining such requirements is complete, and that you’re ready to move on to designing your solution, there are specific needs that must be identified and analyzed more closely. In this chapter, we’ll narrow our focus to the requirements that deserve this closer look.

For any project you work on, you need to look at issues that will affect the project, and determine how these issues should be addressed and/or implemented in the analysis of requirements. These issues include such things as security, performance, maintainability, extensibility, availability, scalability, and requirements that deal with human factors. In addition, you must look at any existing methodologies used by the business, and any business limitations that can affect the project. Finally, as we’ll discuss in this chapter, you must identify any existing solutions that your solution must integrate with. When discussing such requirements for your solution with the customer, each of these elements must be addressed and analyzed. Even if the answer is “no, that isn’t an issue for this product,” it’s important to know this before designing and coding the application. It’s much easier to address such issues early, so you don’t have to add and remove features later in the project.

Analyzing Security Requirements

Security is always an important issue with any solution you design. It’s important that only authorized users are allowed to use your solution and access data, and that any data your solution uses is kept safe and secure. This can be done through any combination of user accounts, audit trails, existing security mechanisms, fault tolerance, regimens of backing up data, and other methods. Even if very little security is required in an application, determining that this minimal amount is sufficient is as important as implementing heavy security features.

Identifying Roles

People who use applications fall into certain roles, which define the tasks users perform, the components they use, and the level of security needed. A role is a symbolic name used to group the users of a set of software components to determine who can access those components’ interfaces. In analyzing the requirements of an application, it is important to identify the roles people play in an organization, and which roles will apply to the solution you’ll create.

When identifying roles, you look at common groups that exist in an organization. For example, if you were creating a banking program, you would find that some of the common roles include managers and tellers. These would then be considered in designing the security of the application. Tellers might require access to features like depositing and withdrawing funds from accounts. However, the bank may not want tellers to have access to more advanced features in the solution, such as being able to freeze accounts and approve overdrafts. Such duties might fall on individuals fulfilling the role of manager. You would want to design the security of your application to take into account the roles of these people. People in the higher-level role of manager would be able to perform actions that aren’t allowed by lower-level roles, such as teller.

While roles are generally associated with groups of individuals, you may also have certain roles that are geared toward a single user. For example, as we’ll see next, you may have a person who’s in charge of administering the application. This person would set up user accounts for the others who will use the solution. Another example would be a user who’s assigned the role of doing regular backups on data, or who is responsible for maintaining user accounts. While these are still common tasks associated with the security of the application, and control what certain users can and can’t do, these roles may be applied to only a single user.
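The relationship between roles and the operations they permit can be sketched as a simple lookup, shown here in Python purely for illustration (the role and operation names are hypothetical, drawn from the banking example above):

```python
# Minimal sketch of role-based access checks for the banking example.
# Role and operation names are hypothetical illustrations.
ROLE_PERMISSIONS = {
    "teller": {"deposit", "withdraw"},
    "manager": {"deposit", "withdraw", "freeze_account", "approve_overdraft"},
}

def can_perform(role, operation):
    """Return True if the given role is allowed to perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

A real solution would enforce such checks in its components or through the operating system’s security, but the principle is the same: the role, not the individual user, determines which operations are allowed.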

Administrator Role

Administrators typically are users who have the highest level of security access to features in an application, and to the data that the solution uses. Whether you decide to control access through the application itself, through a server application (like SQL Server), or the operating system (NT Server), administrators are generally at the top of the authority chain. Common abilities associated with this role include the power to create and delete accounts, control the access of other users, and use any feature or functionality in the program.

Exam Watch: The power of administrators is best illustrated by one of the most common passwords administrators use for their user accounts: “GOD.” There is no role higher than that of administrator.

Administrators are in charge of the program and the security associated with it. Certain features of programs are often unavailable to users in lower-level roles, and some programs allow the administrator to determine how the solution functions. Such functionality includes determining where the data will be accessed from, and the availability of features.

Because administrators have such an incredible amount of power, you should severely limit the number of people who are put into this role. You wouldn’t want to have every user of your application given administrator-level security, as it can cause serious problems, and considerable confusion. If you did, any user would be able to delete accounts, and control other users’ access. The administrator is the person in charge of the program, and in control of the access given to others. Generally, the person who oversees others who use the application will be given this level of security. This might include the head of a department, a manager, an assistant manager, or other people in higher-level organizational positions.

On the Job: It’s vital that a person who’s given administrator security is responsible, and realizes that they’re in charge of the security. It’s not unheard of for irresponsible administrators to increase the security access of their buddies, so they have complete freedom in the application. In addition, if too many people have administrator-level security, it will be impossible to track who is responsible for causing these and other security infractions.

Group Role

Often, you’ll find that there are a number of users who perform common activities, and need a shared level of access to application components and data. When this happens, these users can be put into a group, where they’ll have the same level of permissions or rights. For example, if you had a group of people who needed access to a SQL Server, you could create a group called SQLUSER with the same levels of access. Each of these users will use the same data, access the same components, and share the same level of responsibility. Creating a special group in which to put them saves you the problem of having to assign the same permissions to their individual user accounts. Each user is simply added to the group.
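The saving that groups provide can be sketched as follows. This is an illustrative Python sketch: the SQLUSER group name comes from the example above, but the permission names are invented:

```python
# Sketch: permissions are assigned once to a group, and each user added to
# the group inherits them, instead of setting permissions per account.
group_permissions = {"SQLUSER": {"read_data", "write_data"}}
group_members = {"SQLUSER": set()}

def add_to_group(user, group):
    """Place a user account into an existing group."""
    group_members[group].add(user)

def user_permissions(user):
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group, members in group_members.items():
        if user in members:
            perms |= group_permissions[group]
    return perms
```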

Guest Role

The guest role is generally given the lowest level of security. As its name implies, people who are simply visiting the application, and don’t require a high level of access, have the role of guest. An example of this would be a banking solution, where people accessing the solution through a guest account could see how many customers the bank has, the day’s interest rate, exchange rates, and so forth. They aren’t permitted to access sensitive information, or use many features or functionality associated with the program. Users who may not have a user account, and simply need to access low-level, unclassified material, use the guest role.

Client Role

Users in the client role perform most of the activities in a solution. These are the common users of an application, who often add and modify much of the data your solution will access. In the bank application example, the tellers would be clients of the bank solution. Other types of clients might be data entry clerks, or if you were creating an accounting program, the accountants working in that department. The permissions associated with these users allow them to use the general features and functionality of the solution, and access data that’s available to most of the people in that department or organization.

Exam Watch: Many times, students who are taking the Microsoft exam confuse the guest and client roles, thinking of clients as customers, who often fall into a guest role. It’s important to distinguish between these two roles. Think of guests as visitors, and clients as workers on the client machines accessing a server. Think of the roles in this way to help keep the two separate.

Exercise 8-1 will demonstrate identifying roles.

Exercise 8-1: Identifying Roles

Hammy Slam is a small company that makes imitation dead hamsters. They have over 50 models to choose from. They want you to design a solution that can be accessed over the company’s intranet, which is also connected to the Internet. Darren is the person who takes care of this network, while Julie and Jennifer are responsible for backing up data and waiting on customers, respectively. Both customers and employees will use this Web application. Customers will use the application to enter the name of a product, and see its availability and cost. Up to 20 employees of the company will use the application to access customer information, including a customer’s credit card information.

Administrators | Clients | Guests

  1. Based on the information given in the above scenario, determine who should be responsible for setting up accounts, applying security permissions to those accounts, creating new groups, and other administrative duties. Enter that name under the appropriate role in the table above.
  2. Determine what special groups will be needed for the solution. Enter the name of the special group in the table above, and who will be in the role below that.
  3. Determine who will access the lowest level of information, and will basically be visitors to the solution’s capabilities. This role will have very few permissions attached to it. Enter them under the appropriate role in the table above.
  4. Determine which user accounts will use general functions of the application, and enter them under the appropriate role in the table above.
  5. Compare the table above to the completed one at the end of this chapter to see if your choices were correct.

Identifying the Impact of the Solution on the Existing Security Environment

In designing any solution that uses security features, you need to determine what impact your solution will have on the existing security environment. Basically, your solution will have one of three effects on existing security in the enterprise:

Enhance existing security
Diminish or cripple existing security
Have no effect whatsoever on existing security

When identifying the impact of a solution on the existing security environment, you determine where the security features in your application will fall. In doing this, you look at each security feature in your solution individually, and then as a whole. While some features will have no effect, others may have a drastic impact on the overall security.

If your application sets permissions on specific user accounts, or sets rights on different folders on a hard disk, this may conflict with the security of the network operating system. For example, Windows NT Server controls the rights placed on folders on the server. If your application attempts to do the same, NT Server would stop the application from succeeding. On other operating systems, such as Windows 95, if your application attempted to set permissions on a folder where file sharing is disabled, an error would result. You need to determine how such actions will work with the existing security that’s in place and controlled by the operating system.

If your application uses user accounts, it will act as another barrier against unauthorized users. Depending on how you set this up, and the type of application you’re designing, a user account and password could be passed to the application. This occurs with Internet applications, when Internet Explorer 5 automatically provides the user account and password. This means any user using that particular browser can access the application and data. NT networks can also check the user’s account to see if they have access to certain applications, files, and folders. Depending on the application, your security may diminish or have no effect on the current security environment.

Establishing Fault Tolerance

Fault tolerance is the ability of a solution to keep functioning when a fault occurs. In short, it is non-stop availability of a solution. For example, let’s say you created an application that accessed a database on a particular server. If that server went down, your solution would become useless, since the database is inaccessible. However, if the server portion of your solution made regular backups of the database to another server, the client portion of your solution could simply switch to the secondary database, and users could continue to work. A fault-tolerant system keeps users working, despite any problems that occur.

One method of fault tolerance is the use of replicated databases. As we’ll see in the next section, replication allows all or part of a database to be copied to other machines on your network. Because each database is a replica of the others, users see a consistent copy of available data. If a server goes down, users are still able to access data. They simply connect to another copy of the database, and continue to work.
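The failover behavior described above can be sketched as a client that tries each replicated copy in turn. This is an illustrative Python sketch; `run_query` stands in for a real database call, and the server names are hypothetical:

```python
# Sketch of client-side failover between replicated database copies: if the
# first server is down, the client silently moves on to the next replica.
def query_with_failover(servers, run_query):
    """Try each replicated server in turn; return the first result."""
    last_error = None
    for server in servers:
        try:
            return run_query(server)
        except ConnectionError as err:
            last_error = err  # this server is down; try the next replica
    raise last_error  # every replica failed
```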

A common method of fault tolerance is the use of RAID (Redundant Array of Inexpensive Disks). There are several levels of RAID available with Windows NT, for example:

RAID 0, Disk Striping without parity. This spreads data across several hard disks, but isn’t fault tolerant. If one hard disk fails, all of the data is lost. This means that RAID 0 actually makes your system less fault tolerant. Rather than losing data on one hard disk when a fault occurs, you lose data on every disk in the striped set.
RAID 1, Mirroring. This creates an exact duplicate of data, by copying data from one partition to another (which should be on different hard disks). If you use a separate hard disk controller for each hard disk in a mirror set, it is called disk duplexing.
RAID 5, Disk Striping with parity. This spreads data across several hard disks and includes error-correction information. If any one disk fails, the operating system can use the error-correcting information from other disks in the set to restore the information. Once you replace the failed drive, RAID 5 allows the information to be restored to the new disk.

As you can see from this listing, only RAID 1 and RAID 5 are fault tolerant. If a hard disk fails, either of these can keep you from losing data. You should note that different operating systems support different levels of RAID. In addition, numerous hard disk manufacturers make hard disks that support RAID.
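The parity idea behind RAID 5 can be illustrated with the XOR operation: the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the survivors. A minimal Python illustration, using single bytes to stand in for disk blocks:

```python
# RAID 5-style parity: parity is the XOR of all data blocks, so a single
# failed block can be reconstructed from the remaining blocks plus parity.
def parity(blocks):
    result = 0
    for block in blocks:
        result ^= block
    return result

# Three data "blocks" striped across three disks (bytes for illustration):
data = [0b10110010, 0b01101100, 0b11100001]
p = parity(data)

# Disk holding data[1] fails; rebuild its block from the survivors + parity.
reconstructed = parity([data[0], data[2], p])
```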

As mentioned, RAID support is available through software such as Windows NT Server. There are also hardware-based RAID solutions. Hard drive controller cards are available that perform RAID functions. When hardware-based RAID is implemented, it reduces the workload on the processor, because the controller card deals with performing RAID functions. In addition, because RAID support is hardware-based, you can use RAID on operating systems that don’t support RAID. Even though Windows NT has software-based RAID support built into it, Microsoft still recommends using hardware- over software-based RAID due to the benefits of hardware-based RAID.

Exam Watch: Microsoft NT Server supports only levels 0, 1, and 5 of RAID. While there are other levels of RAID—levels 0 through 10, and level 53—they won’t appear on the exam. You should know the three levels mentioned here, especially 1 and 5, which are fault tolerant.

From the Classroom

Solution Requirements Analysis

One of the most important aspects of solution development is the proper analysis of the requirements of the solution itself. Sometimes developers and programmers are overly eager to begin programming a solution without giving enough thought to concepts like security requirements, solution extensibility, and scalability concerns. At best, developers may give real thought and analysis to performance considerations, but many other areas of solution development and design offer pitfalls that can affect the optimal functionality of a solution.

As the development of solutions shifts more toward Web-enabling, globalization, and distribution of applications, a corresponding degree of concern should be placed on the security requirements of solutions. Failure to respect the interaction of security mechanisms provided by the different players in a distributed application can wreak havoc on the ultimate reliability of the solution. Since most developers are not systems engineers, a full working knowledge of the idiosyncrasies of ACL, NTFS, IIS, and MTS role-based security is usually not present, which increases the odds of opening the door to security loopholes. For this reason, care should be taken to coordinate efforts with those responsible for maintaining security on your network to help determine the ultimate security level of the solution.

—Michael Lane Thomas, MCSE+I, MCSD, MCT, MCP+SB, MSS, A+

Planning for Maintainability

No matter how well you design and develop an enterprise application, you can’t expect it to run forever without some sort of maintenance. Areas in which you’ll want to plan for maintaining the program include backing up data, and how user accounts will be maintained. If data is lost, and there isn’t an archived copy of the database, it will mean that a new database will need to be created and all of the data will need to be reentered. In addition, new accounts will need to be added for users, while people who have left the company will need to have their accounts removed. Planning for maintainability means looking at what the application or service will require in terms of future use.

An important area in maintainability planning is a regular routine of backing up data. It’s quite possible that at some point the database could become corrupted, the hard disk storing the data could fail, or any number of other problems could occur. If that data isn’t backed up, and there isn’t any fault tolerance in place, the data may well be lost.

There are a number of different options when it comes to backing up data:

Mirroring
Replication
Using backup software to store copies in a special format to other media. The terminology in SQL Server for this is dumping.

You can use one method, or several different methods to ensure there are several copies available for particularly sensitive data. Because of the importance of backups, we’ll discuss each of these methods individually.

A convenient method of backing up data is mirroring. Mirroring is something we discussed when talking about fault tolerance, and involves data being copied to another storage device. Each time a transaction is made to the database, changes are also written to a second copy residing on another hard disk. This means you have two identical copies of the database at any given time. For added protection, you should have the mirror copy of the database on a second server. If one server goes down, work isn’t disrupted because the second server’s database can be used. Many relational databases like SQL Server support mirroring, and once set up, your solution becomes virtually maintenance-free. Unfortunately, since every transaction requires writing once to each database, performance is diminished.

Replication is another method of backing up data that’s virtually maintenance-free once it’s set up. Replication has the entire database or portions of it copied to another server on the network. It is also an exceptional way of improving performance. Since the database is distributed across multiple servers, the workload can be balanced between the different servers. In other words, one user can make changes to a database on one server; another user can make changes on another server database; and a third can use another replicated database. The changes they make are replicated to each of the three databases. Each of these databases is an identical copy of the other, and because users are accessing the data through different servers, the workload of each server is less than if users were accessing data from a single server.
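The effect of replication can be sketched as a change applied at any server being propagated to every copy, so that all replicas hold identical data. This illustrative Python sketch deliberately omits real-world concerns such as conflict resolution and network latency:

```python
# Sketch of replication: a change made at any server is propagated to every
# copy, so all replicas remain identical and any server can answer queries.
replicas = {"server_a": {}, "server_b": {}, "server_c": {}}

def apply_change(key, value):
    """Apply a change and replicate it to every copy of the database."""
    for db in replicas.values():
        db[key] = value
```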

On the Job: Replicating data is generally thought of as a method of balancing workloads across multiple servers, and keeping that data consistent between them. Though not generally considered a method of backing up data, it is still a valid method of keeping an additional copy of your data available in case of problems.

Dumping is when data is copied to an external device. While you could perform backups to the same hard disk containing the data, this wouldn’t provide much safety if that hard disk crashed. Data is usually copied to external devices, which could be another hard disk, a tape drive, or other storage devices. When you dump a database, a snapshot of the entire database is made, and the data is copied in a special format to the external device. Any changes made after the data is dumped are lost when the dump is restored, unless the dump is supplemented with transaction logs. Dumping data is one of the most popular methods of creating backups.
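The role transaction logs play in a restore can be sketched as follows. This illustrative Python sketch uses a dictionary to stand in for the database; the account name and values are invented:

```python
# Sketch of dump-and-restore: restoring a snapshot alone loses every change
# made after the dump, unless the transaction log is replayed on top of it.
import copy

database = {"acct_1": 500}
dump = copy.deepcopy(database)      # snapshot copied to an "external device"
txn_log = []

# A change made after the dump was taken:
database["acct_1"] = 450
txn_log.append(("acct_1", 450))

# Restoring from the dump alone reverts to the snapshot...
restored = copy.deepcopy(dump)
# ...unless the transaction log is replayed over it:
for key, value in txn_log:
    restored[key] = value
```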

Backups should be done regularly on data. How often depends on how often the data in the database changes. In companies where data changes as frequently as every hour, or even every few minutes, you may want to consider backing up the data hourly, or every few hours. If very few changes occur each hour, daily backups should suffice. If a problem occurs, the backed-up database can be restored, and only the minimal amount of data entered since the last backup is potentially lost.

In addition to backing up the database, you should also back up any configuration or user files. This will allow users to have their personal preferences restored in case of a failure. User accounts should also be backed up, so that you don’t need to recreate an account for every group and user, and reenter information on every user of the system.

It’s important to have some sort of strategy in place to maintain user accounts. A security role should be established to deal with account maintenance. When new employees are hired, the person in this role can create an account with the proper permissions for that person, and add them to any groups they will need to join. When employees are promoted, they may need new permissions, or have to be added to different groups. It’s also important that when people are laid off or dismissed, their account is either disabled or deleted. This will keep them from reentering the system after they’ve left the company.
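The account-maintenance strategy described above can be sketched as a small account lifecycle. This is an illustrative Python sketch; the user and group names are invented, and disabling rather than deleting is shown because it preserves the account’s history:

```python
# Sketch of account maintenance: create accounts on hire, disable them on
# departure so former employees can no longer log on.
accounts = {}

def hire(user, groups):
    """Create an enabled account and place it in the appropriate groups."""
    accounts[user] = {"enabled": True, "groups": set(groups)}

def dismiss(user):
    # Disable rather than delete, so the account's history is preserved.
    accounts[user]["enabled"] = False

def can_log_on(user):
    return user in accounts and accounts[user]["enabled"]
```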

On the Job: It’s vital to remember to disable or remove a user’s account after they’ve left the organization. Many companies have found out the hard way, that disgruntled users who have been fired, laid off, or put into retirement can wreak havoc on a system. The former employee logs on remotely to the network or application, and from there may modify or remove data. The employee may even add false data, by creating bogus users or creating false shipment orders.

Planning the Distribution of a Security Database

When security is required for a database solution, getting the database from a developer’s computer to the computers where data will be accessed takes some planning. You need to do more than simply dump the database onto a server’s hard drive. Without planning for distribution of a security database, you may find unauthorized users getting into data that they should not be allowed to access.

The level of security you can apply to your database is often reflected in the software being used on the network. For example, Windows NT Server allows you to control access at the file and folder level. When your database server is on an NT Server, you can control the access to your database file, or the folder containing your database.

User and group accounts will also need to be set up for that database server. For more information on the different security roles, refer to the previous section on this topic. You can control access through accounts set up on the server’s operating system (such as through User Manager in Windows NT Server), or you may be able to set up accounts through the DBMS; SQL Server, for example, enables you to create accounts and control access. The level of security available will depend on the type of software and database used in your solution.

When you think of a database application, you probably think of a single database residing on the hard disk of your computer, or some database or file server on the network. However, much of the data your solution accesses may be spread across multiple computers, and possibly on different networks. When the enterprise needs to have data on geographically dispersed computers, you may need to create multiple databases containing the same data on computers in different buildings, cities, or even countries. The databases will require data to be replicated between them, so that each database has the same data, or is able to find data residing in the other databases on the network. In addition to this, you will also need to set up security, to determine who may access each database.

When dealing with distributed or replicated databases, the database server on each network will need to have security permissions set up. Different accounts will need to be set up on each server, with the appropriate permissions applied to them. Generally, groups will be created as well, and individual accounts will be added to them. This controls who will be allowed to use the databases on each server. As mentioned, on Windows NT Servers, access can be set on a file and folder level. This means you can control access to the specific database file, or the folder containing the database. Replication must also be set up, so that changes to each database are replicated to the database on the other network.

Establishing the Security Context

The security context of a solution is made up of its security attributes or rules. These determine how a user is able to gain access, and what the user is able to do once access is achieved. The security context dictates how and when users are able to add, delete, and modify data, and use the features and functionality of an application. For example, when you log onto a computer, your user account and password provide the security context for interaction with the operating system, and the applications and data on the computer or network. Similarly, the user of a smart card, or a customer at an Automated Teller Machine, establishes a security context by entering a personal identification number (PIN). For a secure solution, you need to establish the attributes or rules that determine how and when users can access your solution and the data it uses.

Planning for Auditing

Auditing is where the activities performed by some or all users are documented to a file, allowing a user (such as the administrator) to see what these users have done. The record showing the operations a user performed is called an audit trail. The audit trail allows you to see what actions users perform, and to determine where problems may exist with certain users.

Auditing can be effective in catching users who may pose a security risk. For example, let’s say you had doubts as to whether a certain user was acting responsibly in your application. You could set up auditing on that particular user, and chronicle the user’s actions. This would allow you to see when they logged on, what they did, and when they logged off. From this, you could determine whether the user was abusing their privileges, damaging the system, and basically doing things they weren’t supposed to.
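An audit trail can be sketched as a simple log of timestamped actions. This illustrative Python sketch shows recording actions and then reviewing one user’s activity; the user and action names are invented:

```python
# Sketch of an audit trail: record each user action with a timestamp so an
# administrator can later review what a particular user did, and when.
from datetime import datetime, timezone

audit_trail = []

def record(user, action):
    """Append one timestamped entry to the audit trail."""
    audit_trail.append((datetime.now(timezone.utc), user, action))

def actions_for(user):
    """Return the chronological list of actions the given user performed."""
    return [action for _, who, action in audit_trail if who == user]
```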

Identifying Levels of Security Needed

Different organizations, and different forms of data, have varying security requirements. If you’re creating an application for a small business, to be used by a small group of users who are all highly trusted, the level of security required by the solution will be minimal. If you’re creating a solution for the military, where different users will require different clearance levels to access data, the level of security will be drastically higher. When discussing the requirements of your application with customers, you should try to determine what level of security will be required of the solution. Is there a big concern over unauthorized users gaining access? If the organization’s network is connected to the Internet, this may be of some concern. Will specific types of information need to have higher security? Which users will need to access certain data, and which ones shouldn’t? These are all concerns that need to be addressed early in the project.

It’s important to remember that security is a double-edged sword. Not enough security can leave your system open to all sorts of problems, while too much security will make your solution more of a pain to use than it may be worth. Every security feature you add to your solution is a blockade to keep unauthorized users out. However, as users have to enter passwords, wait for their access to be checked, and so on, your security features may come to be seen as overly restrictive and inhibiting by legitimate users. In determining the level of security required, it’s important to strike a balance between accessibility and security.

Analyzing Existing Mechanisms for Security Policies

It’s important to determine what existing mechanisms for security policies are already in place in an organization, before deciding on new ones that could conflict with or duplicate them. Imagine going through all the trouble of designing features for your solution that control access to folders or database files, only to discover that it will run on an NT server, which can already do this. Before determining what security mechanisms should be added to your application, you need to discuss with the customer what is currently in place.

Analyzing Performance Requirements

When a customer asks you to design a new solution, they want a fast, responsive program that obeys commands. When a command is given, there shouldn’t be a long wait for a response. Performance determines how well an application works under daily conditions. This means that despite increases in the number of users accessing data, the volume of transactions over a given period, or other issues, the application performs as required in a speedy and efficient manner. The solution performs calculations and operations quickly, and the user sees it responding promptly to their actions and requests.

If all users had incredibly fast processors, networks that transferred information faster than the speed of light, and unlimited hard drive space, then performance wouldn’t be a design issue. If every user used the same computer or network system, what you develop on your system would work on any system. Unfortunately, in the real world, companies have trouble keeping up with technology. Hard drive space is limited, networks are slow, and numerous users work on older, legacy equipment. Because of such considerations, it’s important to determine how elements such as these will affect your design. You must analyze the requirements of the business, discover which elements exist in the organization, and determine how they may affect performance of the solution. Once this is done, you can then determine what needs to be added to the design to deal with the performance issues.

On the Job: Many performance issues that arise once an application is built stem from the initial design of the solution. If performance issues aren’t identified and dealt with in the design of your solution, they often crop up later when the solution is developed. Dealing with performance issues once coding has begun is more costly and time consuming than dealing with them in the design phase of the project. Therefore, it’s important that performance issues are addressed before the first line of code is even written.

There are basically two types of performance, and in determining the organization’s performance requirements, you’ll need to plan for both:

·       Real performance

·       Perceived performance

Real performance is what most people think of, or assume is occurring, when they look at how well an application performs. It is the way that your solution carries out operations, does calculations, and runs. With real performance, the code is optimized, and actions are carried out swiftly.

Perceived performance gives the user the impression that the actions are occurring quickly, when in fact they are not. For example, when loading Visual Basic or Visual C++, you may notice that a splash screen appears. This shows the name of the application loading, and covers up the fact that it’s taking some time for the program to load. Adding splash screens or other indicators to show that something is happening distracts the user from noticing that an activity is taking time to complete.

Transactions per Time Slice Considerations

One area that’s important to identify is the number of transactions expected to occur over a given period of time. A transaction is one or more separate actions that are grouped together and executed as a single action. One of the easiest ways to think of a transaction is withdrawing money from a banking machine. When you withdraw money, the program needs to check whether there is enough money in the account. If there is, the amount of your withdrawal is deducted from your account balance, and that amount of cash is delivered. While the transaction is executed as a single action, there are actually multiple lines of code that need to be executed individually for the transaction to be completed.

In discussing the requirements of the business, you should find out how many transactions will occur over a given period of time. This will vary from one solution to another, but it is important to have this information so you can determine how many transactions your solution will deal with every second, hour, day, and so forth. To continue the bank machine example, bank machines in a small town may have only a few transactions every hour, while those in New York City could have thousands of transactions per hour. If you were creating the solution for New York City bank machines, you should consider designing it differently from the one in a small town, to account for the increased use.
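The withdrawal described above can be sketched as an all-or-nothing operation. This is an illustrative sketch only; the names and the use of Python are assumptions (the book’s own code samples use Visual Basic), and a real banking system would rely on its database’s transaction support:

```python
class InsufficientFunds(Exception):
    """Raised when the account balance cannot cover the withdrawal."""
    pass

def withdraw(account, amount):
    """Execute a withdrawal as a single transaction: check the balance,
    deduct the amount, and deliver the cash. If the check fails, the
    account is left unchanged."""
    if account["balance"] < amount:      # step 1: is there enough money?
        raise InsufficientFunds("balance too low")
    account["balance"] -= amount         # step 2: deduct from the balance
    return amount                        # step 3: deliver the cash

account = {"balance": 100}
print(withdraw(account, 40))    # 40
print(account["balance"])       # 60
```

The point of grouping the steps is that either all of them happen or none of them do; the balance is never deducted without the cash being delivered.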

Bandwidth Considerations

Bandwidth is the network’s capacity to carry information, and is measured in data transferred per unit of time, usually seconds. For example, in using the Internet, you may have noticed that data transfer rates are given in bits per second (bps). When data is sent over the network cable, it uses a certain amount of bandwidth. In other words, it uses up so much of the capacity that the wire is able to carry from one computer to another.

For every network, there is a limited amount of bandwidth that can be used. If the amount of data being sent over the network gets too high, the network slows down. Therefore, you need to determine if the organization has sufficient bandwidth to support the data being sent over the wire (i.e., network cable). If it has more than enough bandwidth, then it will be able to support the current and future needs for data transfer. If it has just enough or not enough bandwidth, then the organization will need to consider whether to upgrade to a faster method of transmission, or add more network cables to give users multiple paths for transferring data.

Exam Watch: To understand bandwidth, you may want to think of automobile traffic. When cars run on a city street with two lanes, there is only so much traffic that can be accommodated before you wind up with traffic jams. This gives you one of two options: make the cars go faster or add more lanes. This corresponds to networks, because when network traffic gets too high, the organization needs to consider whether a faster method of transmission is required (i.e., make the cars go faster), or more wires are needed to make up the network (i.e., add more lanes). Whichever method you choose will improve the performance.

The easiest way of determining the current bandwidth available on a network is to ask the network administrator. He or she should be able to tell you the current speed of the network, how it is laid out, and any other information that you require. In addition, you can find out about any changes that will be made before your solution is released.

There are two ways of estimating the amount of bandwidth needed. The first calculates bandwidth by the amount of bytes transferred by your server, while the second calculates by connections and file size. Which method you use depends on the data available to you. Performing these calculations will allow you to determine the bandwidth required, and what type of connection will be needed for the solution to perform well. For example, will an Internet connection require a 28,800 bps modem on the server, or will an ISDN or T3 connection be required? Before getting into these methods of estimating bandwidth, you need to know how bandwidth is measured.

Bandwidth is measured by the number of bits transferred per second (bps), but information you’ll see on files transferred is usually in bytes. Statistics on the server(s) your application will reside on will show information in bytes transferred, while the file sizes of documents on a hard disk are displayed in kilobytes and megabytes. Estimating bandwidth requires converting bytes to bits. There are 8 bits in a byte, plus an additional 4 bits for overhead data. For either of the bandwidth estimation methods covered next, you’ll need to use this information.

When you calculate bandwidth by the amount transferred by a server (e.g., NT Server, Internet Information Server, or Personal Web Server), you need to know how many bytes have been transferred over a certain number of hours. Once you do this, you use the following formula to convert this information into bits per second:

(Bytes x 12 bits) / (Hours x 3600 seconds)

A byte is 8 bits of data plus 4 bits of overhead, which equals 12. When you estimate bandwidth by bytes transferred, you’ll need to multiply the amount of bytes transferred by your Web server by 12. Because many servers record the amount of data transferred per hour, and bandwidth is per second, you’ll also need to convert the number of hours to seconds. To do this, multiply the number of hours by 3,600, which is 60 minutes in an hour multiplied by the 60 seconds in a minute (60 x 60). Having done this, you divide the number of bits by the number of seconds to get the bits per second.
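The formula can be turned into a small function. Python is used purely for illustration (an assumption on our part; the chapter’s own samples are Visual Basic), and the transfer figures are hypothetical:

```python
def bandwidth_bps(bytes_transferred, hours):
    """Estimate bandwidth in bits per second from server statistics,
    using the chapter's convention of 12 bits per byte
    (8 data bits plus 4 bits of overhead)."""
    return (bytes_transferred * 12) / (hours * 3600)

# Example: 54,000,000 bytes transferred over 10 hours
print(bandwidth_bps(54_000_000, 10))   # 18000.0 bits per second
```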

The other method of estimating bandwidth is by using the number of connections and the average document size of what’s been transferred. This translates into the following formula:

(Average # of connections per day / 86400) * (Average document size in kilobytes * 12)

While this seems pretty imposing, it’s really not. The first part of the equation divides the average number of connections your site gets each day by the number of seconds in a day (86,400). The second part multiplies the average document size by 12, which accounts for the 8 bits of data and 4 bits of overhead in each byte. You then multiply the two parts together to obtain your estimated bandwidth.
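As a sketch, the connections-and-file-size method looks like this in code (again Python, for illustration only; the example figures are hypothetical):

```python
def bandwidth_by_connections(connections_per_day, avg_doc_size_kb):
    """Estimate bandwidth using the connections-and-file-size method:
    (connections per day / 86,400 seconds per day) multiplied by
    (average document size * 12 bits per byte)."""
    return (connections_per_day / 86400) * (avg_doc_size_kb * 12)

# Example: 43,200 connections per day, 50 KB average document size
print(bandwidth_by_connections(43_200, 50))   # 300.0
```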

It takes only a little practice to master these formulas. Once you’ve calculated the bandwidth used by your site, you can determine what kind of connection you’ll need.

Capacity Considerations

Capacity is the volume or limit something can handle, and can have a direct effect on the performance of your solution. In the previous section, we discussed bandwidth considerations, which dealt with the capacity of data a type of network connection could handle. The higher the bandwidth, the higher the capacity, and the greater the amount of network traffic that can be handled.

In addition to bandwidth capacity, it’s also important to determine how many computers and users your solution will service. The more computers on a network accessing your application, and subsequently the more users, the more performance can degrade. It’s important to know early on how many users and computers will be accessing your solution, so your application can support the necessary number of users.

Besides your solution, you should try to determine if the server(s), on which any server-side components will reside, can handle the number of users you plan to support. Let’s say an NT server is licensed to allow 50 simultaneous connections, and you know that 75 users will be accessing data or using the server portion of your solution that will reside on that server. This means that unless more licenses are purchased, 25 users won’t be able to access your solution or its data.

Interoperability with Existing Standards

You should determine what standards currently exist in an organization when determining the performance requirements of your solution. Different organizations may utilize standards that must be adhered to if your application is to function properly and perform well. These are specifications for drivers, applications, networks, and so forth that dictate how certain elements of your application will interact with the existing system. If existing standards aren’t conformed to, your solution may not function well (or at all), and may cause problems in other components of the system.

Peak versus Average Requirements

It is vital that you determine both peak and average requirements for your solution. Peak values show the highest amount of activity your solution experiences at a given time. For example, when users on a network log on to your solution in the morning, it will experience a higher amount of use than any other time of the day. Average values show the common amount of activity you can expect your solution to have. For example, even though 100 users log on to your application at the beginning of the business day, an average of 50 users may be using your solution at any other time during the day.

Peak and average requirements apply to almost any area of a project. In analyzing the business requirements of a project, you’ll need to determine peak and average numbers of users, amounts of data storage and retrieval, and so forth. The peak and average values can then be applied to other areas of the project. For example, by determining when the peak and average times of activity are for your solution, you can determine when the best times are to perform maintenance. Rather than slowing down the network further by performing a backup, you can schedule maintenance for times when the average usage is low, and not at peak times.

Peak requirements can be determined by looking at the highest amount of activity that will occur at a given point. If an existing application is in use, you can look at the statistics of the server that your solution partially resides on, or by implementing auditing to see when people are using that server-side solution. For example, by looking at Performance Monitor in NT Server, you can see the top number of users accessing that server. The same can be done to determine the peak amounts of data transferred, or by looking at the statistics of the database, you can see the total amount of data transferred per day. By looking at this over a few days or a week, you can see the highest amounts of data or users. If this type of information isn’t available to you, the organization will generally have an idea of who will use the solution. By taking the total number of users who will use the system over a specific time period, such as an 8-hour work shift, you can identify the peak value.

Average requirements are similarly determined. By calculating the total number of users, data transferred, or whatever element of the system you want to determine averages for, you can divide this by a specific period. For example, if 50 users access data one day, 10 another, and 60 on the third day, this would mean that 120 users used the solution over three days. By dividing this by 3 (the number of days), you can see that the average number of users per day is 40 users. By dividing this by 8, for the number of hours in a workday, you can see that 5 users per hour is the average requirement.
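The peak and average arithmetic above can be checked with a few lines of code. Python is used purely for illustration, and the daily figures are the hypothetical ones from the text:

```python
# Users observed on each of three days (hypothetical figures from the text)
daily_users = [50, 10, 60]

peak = max(daily_users)                   # highest single-day value: 60
total = sum(daily_users)                  # 120 users over three days
avg_per_day = total / len(daily_users)    # 120 / 3 = 40 users per day
avg_per_hour = avg_per_day / 8            # 8-hour workday -> 5 users per hour

print(peak, avg_per_day, avg_per_hour)    # 60 40.0 5.0
```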

Exercise 8-2: Determining Peak and Average Requirements

Universal Squeegee has offices in Detroit and Vancouver. These two offices are connected with a WAN line. They’ve asked you to design a solution that will be used at each location. There will be 300 users of the application, who will spend half their time using the solution. They will need to log on to the solution in the morning and at night, as well as log on and off at their lunch breaks, which is generally noon until 1 P.M. Modifications, additions, and deletions to the database will be replicated every 15 minutes over the WAN line, so that each office has relatively the same data at any given time. Each office is open from 9 to 5, Monday to Friday.

1. Based on the information given, determine the peak number of users who will access data.

2. Determine the average number of users who will access the system hourly.

3. Based on the peak and average use of the solution and network, determine when backups should be performed.

Response-Time Expectations

Response time is the time between the end of a query or command and the moment the results of that query or command first begin to appear on the screen. For example, let’s say a user made a query to a database server for all customers with the last name of “Carruthers.” The time from when this query is sent to when the first names appear on the user’s screen is the response time. In other words, it answers the question: how long did it take the computer to respond to the request?

Just as earlier we said that performance can be broken into real performance and perceived performance, there is real response time and perceived response time. Real response time is the actual time it takes for a server or solution to respond to a request. Perceived response time gives the impression that the response is actually faster than it really is. For example, let’s say that when the user queried for all users with the last name of “Carruthers,” there were a considerable number of records to look through, the network was busy, or other elements slowed the response time. If your solution put up a message box stating “Currently processing request,” this response would make the user feel that the query was being processed quickly. It would also tell the user that the query is being processed, rather than leaving them wondering what was happening.
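The perceived-response-time technique described above can be sketched as follows. The query function is a stand-in for a real data source; the names and the use of Python are assumptions for illustration:

```python
import time

def query_customers(last_name):
    """Stand-in for a slow database query."""
    time.sleep(0.1)                       # simulate network and lookup delay
    return ["Carruthers, A.", "Carruthers, B."]

# Tell the user something is happening *before* the slow work starts,
# rather than leaving them wondering whether the program has frozen.
print("Currently processing request...")
results = query_customers("Carruthers")
print(f"{len(results)} records found")
```

The query takes just as long either way; the status message simply changes how the wait is perceived.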

Existing Response-Time Characteristics

In determining response-time expectations, it’s important to look at the existing characteristics affecting response times. In doing so, you can identify the characteristics of the system that are increasing response times beyond what is desired from the application. This delay is called latency.

Latency occurs when there is a longer wait than necessary or desired in response times. This can be an actual or perceived delay. It can occur when data speeds are mismatched between the processor and input/output devices, multithreading or prefetching is not being used, or there is inadequate data buffering. Prefetching is when your solution anticipates additional data or data requests, and multithreading is when multiple threads of execution are used to process a task. Data buffering is like a cache, where data is stored to improve the performance of various tasks being carried out. As we’ll see next, there are a number of other barriers in a system that can affect the response time of an application, as well as the solution’s overall performance.

Barriers to Performance

Despite how well your solution is designed, there are elements in any system that can keep your solution from performing as well as it should. Areas of a system that can serve as performance barriers include the following:

·       CPU

·       RAM

·       Storage

·       Operating systems and other programs your solution works with

·       Network bandwidth capacity

·       Network cards and modems

·       Poor coding and/or design

Each of these, alone or in combination, can cause performance to degrade. It’s important to identify what systems the solution will run on, so if necessary, the customer has the choice of upgrading systems, or accepting lower performance.

If a computer is slow and underpowered, then any application running on it will also be slow and inefficient. The server and workstations using your solution should have adequate processor, memory, and storage capabilities to support the application. If they don’t, you’ll find that performance suffers.

Any solution must work with other programs. After all, even a desktop application that doesn’t interact with other applications must work with the operating system. When it comes to distributed applications, your solution will need to work with the user’s operating system, network protocols, network operating systems like Windows NT or Novell NetWare, and other solutions with which the application may interact. If a system is using an older operating system, performance may be compromised. For example, if you were creating a solution for Windows 95, it could use 32-bit programming, while Windows 3.1 would have to use slower 16-bit programming. 

Earlier in this chapter we discussed how network bandwidth can affect performance. It’s important that the speed of the transmission devices and the media being used meet your solution requirements. If the bandwidth is too low, then network traffic problems can result. In addition to this, if the media is fast enough to carry the data but the network cards or modems being used are too slow, performance will suffer. In this case, a bottleneck will result. If you think of a bottle, the neck is narrow so less can travel through than if the neck were wider like the base. In a network, if the modems or network cards are slow, the data has to wait for previous information to flow into the computer. Bottlenecks can be a big problem, and network cards and modems should be of sufficient speed to handle the amount of data a computer is expected to receive.

Poor design is the biggest problem in performance, with bad coding running a close second. By looking at areas that can improve the performance of your solution, and coding the solution accordingly, many performance issues can be resolved. However, it’s important to remember that performance is often a matter of give and take. In a number of cases, increasing performance in one area will result in decreasing performance in another. For example, when dealing with Forms (containers used to hold other objects in your solution), this tradeoff becomes apparent. If all Forms are loaded at startup, the application will run faster as it switches from one Form to another. However, when an application uses a lot of Forms, loading them all can gobble up memory. A good rule of thumb is to determine during the design stage which Forms will commonly be used, and then have these Forms loaded into memory when the application starts. Less commonly used Forms (such as one used for configuring preferences, or “About this Program” Forms) should not be loaded at startup. In designing and coding your solution, you’ll need to know which areas can be traded off to improve performance in others.

Analyzing Maintainability Requirements

As mentioned earlier in this chapter, maintainability is an important issue when it comes to designing good applications. Maintainability deals with things such as regular backups and maintenance of accounts. It also deals with other issues that determine the upkeep of a solution, including how the solution is to be distributed, where it’s to be distributed, staff considerations, and issues that deal with development.

While we’ll discuss many of these in the sections that follow, it may seem that issues dealing with development should be handled much later in the project. How will source code versions be handled? What if another developer, other than the original author of the programming code, needs to make changes? How will that developer know what certain lines of code do? In designing an application, it is important to incorporate ways that allow the program to be maintained. Without maintainability, problems can arise when changes need to be made to the application.

Throughout development, the source code will undergo numerous changes. Unfortunately, some changes may cause problems, and developers may need to revert to a previous version of the source code. Therefore, how versioning of source code will be handled should be discussed early, and a decision made on how it will be handled. Visual SourceSafe (VSS) is one answer to dealing with different versions of source code. Your code can be saved to a database, and if the development team wishes to use a previous version of that code, they can restore it from the VSS database.

As mentioned throughout this book, it’s important to keep documentation on the different elements that make up your project. Source code is no different. An important aspect of making your source code maintainable is documentation. This means writing down the objectives of your application, describing what aspects of your application do, creating flow charts, and so forth. Throughout the design process it is important to create such documentation, which can be utilized in later stages of design and in the creation of Help documentation. Early in the project, it should be determined how documentation will be kept, and whether tools such as Visual Modeler or other tools in Visual Studio will be used.

To keep source code maintainable, developers should also be instructed to make comments in their code. Comments are a way of explaining what a procedure does, why a variable is included, or what particular sections or lines of code are there to do. Comments can be added to source code by using REM or by putting an apostrophe before a statement. Anything on a line that starts with REM, or anything typed after an apostrophe, is ignored. The following shows two examples of comments:

' This is a comment
REM This is a comment as well

Adding comments allows you to remember why a line of code exists or the purpose of a procedure. This is important because while a developer may know why code is being added at the time it’s done, it is less evident months or years down the road. It’s particularly useful to developers who are editing someone else’s code. By looking at the comments of the original programmer, the new programmer can understand why a particular function or statement has been included.

On the Job: The Y2K (Year 2000) issue shows the need for comments in source code. Many of the original programmers from that period are no longer in the business, or working for the company they worked for at the time the original source code was written. This means that the new programmers who are editing the code need to use the comments to determine the purpose of sections of code.

The final development-related maintainability issue we’ll discuss here is the use of a consistent naming convention. Consistent naming conventions allow programmers to quickly determine the data type of a variable, what the variable is for, its scope, and much more. An example of a commonly used naming convention is the Hungarian naming convention. This adds prefixes to the names of controls and objects, which allow developers to recognize the purpose of those objects and controls. For example, cmd is the prefix for a command button, lbl for a label, and txt for a textbox. By adding this before the name of an object or control, such as lblAddress and txtAddress for the label and textbox of an address field, developers are better able to understand the type of object or control being used.

Breadth of Application Distribution

In determining maintainability requirements, you should identify the breadth, or scale, to which your application will be distributed. Will it be distributed to a single department in the organization, or marketed on the World Wide Web and on shelves to hundreds of thousands or perhaps millions of users? By discussing the scale of the application’s distribution with the customer, you can determine the best method to distribute the solution.

Another important reason to identify the breadth of application distribution is to help set the vision for the application. If the customer wants a world-class solution, it will take longer to design and develop, and may need to compete with other solutions currently on the market. If the solution were a word processor, you would need to look at existing solutions with which yours will compete. On the other hand, if the solution was to be exclusively used by the business, it may not need to be of the caliber of large and expensive solutions already on the market. They may want something smaller, more geared to the organization’s individual needs, which would take less time to develop and have no competition.

Method of Distribution

There are a number of methods of distribution that you can use to get your solution to the customer:

·       Floppy disk

·       CD-ROM

·       Network or Internet distribution

Each of these has its own unique benefits and drawbacks. In many cases, you’ll want to consider multiple distribution methods so that everyone who needs the product can actually obtain and install a copy of it.

Distributing solutions on floppy disk is the oldest method of getting applications to an end user. Although CD-ROMs have overtaken floppies in popularity, this method is still valuable, and should be considered as a secondary method of distribution. Most new computers come with CD-ROM drives already installed, and many older computers have been upgraded with CD-ROM drives. Despite this, there are still quite a few computers out there that have used up their drive connectors with hard disks, and don’t have any more room for a CD-ROM drive. In addition, you may encounter companies that can’t afford to install one on every single computer in the organization. Because they have networks, the cost of doing so may be considered unjustified.

A drawback to using floppy disks is that the size of most applications today requires a significant number of floppies to install a program. Users may spend considerable time swapping one disk for another in the process of installing the application. However, if a computer isn’t connected to a network or the Internet and doesn’t have a CD-ROM drive, installing from floppy disks is the user’s only option.

As mentioned, CD-ROMs are the most popular method today of application distribution. Although CD Burners, also known as Writable CD-ROMs, have dropped in price, they’re still around the price of a large hard disk. For smaller software development companies and independent developers, this can be hard on the pocketbook. However, the benefits drastically outweigh the initial cost of distributing solutions in this manner. Most users have come to expect programs to be available on CD, and enjoy the speed this method provides for installing solutions.

Distributing a solution through a company’s LAN is an easy way of getting your solution to the user. A folder (i.e., directory, for DOS and UNIX users out there) is given the proper permissions to allow access to those installing the solution. They can then double-click on the setup program, and begin installing the solution over the network. The problem with this method is that if the solution is rather large, it can use up a significant amount of bandwidth and slow down the network. If this is the case, you may want to consider limiting the hours of availability to this folder. If not, you could also set up such distribution servers on different segments of the network. This would allow users to install the solution from a server that’s in their department, or close to their location.

The popularity of the Internet has made it a good medium for distributing applications. Users with an Internet connection and the proper permissions, if required, can access the installation files for your solution. If only authorized users were to have access to these files, you could set up security on the Web site to control access. This would be done through the Web server and its operating system, such as Internet Information Server on Windows NT.

Maintenance Expectations

It’s important to understand what the customer expects in terms of maintenance. This will aid in determining what will need to be put into place to support the solution, and any future versions that may be created. Because the business and customer should be included in the design process, it is important to understand their views on how maintainability should be implemented through the project.

Maintenance Staff Considerations

The maintenance staff is the group of people who will support and maintain the solution once it’s been implemented. This doesn’t mean that they have anything to do with maintaining source code—that’s in the realm of the Development team. Maintenance is responsible for maintaining regular backups of data, and may be responsible for such things as the maintenance of user accounts. However, user accounts are often controlled by network administrators, application administrators, or users who are given special permissions to help control accounts. In many cases, this is the same for backups, where a user is given special permissions so that they can back up data to an external device.

For particularly large organizations, where a considerable amount of data needs to be backed up and restored, a special staff of people may be responsible for these activities. In other organizations, maintenance may fall to a support staff that acts as help desk, computer repair, and other roles. Still other organizations contract outside workers, who come in when needed or only part-time to deal with maintenance tasks. Because of this diversity of who may maintain the product, it’s important to determine where you’ll need to pass on information about your product.

Location of Maintenance Staff

The location of the maintenance staff will have an effect on the design of the application. While location may not be an issue if they’re located at the other end of a building or campus, it may be a serious issue if they’re located at another office of the organization, perhaps even in another city or country. For example, if the maintenance staff were located in another country, and the offices were connected with a WAN line, you would want to consider replicating the data to the location of the maintenance staff. The staff could then perform backups on their end. If you didn’t want to set up data replication, you could have this staff perform backups across the WAN line during times when activity is low. This would keep the WAN from being bogged down during backups.

Knowledge Level of Maintenance Staff

It is important to determine the knowledge level of the maintenance staff so that they can properly maintain your solution. If they are unfamiliar with backup devices that will be used, or other elements applying to the maintenance of your solution, then training will need to be developed to upgrade their knowledge in these areas.

Impact of Third-Party Maintenance Agreements

Third-party maintenance agreements are contracts with companies outside the organization. This may be a person who comes in part-time to maintain the system, or a company that backs up data remotely. Regardless, it’s important to determine if third parties are involved, so they can be informed about how to maintain the system.

Analyzing Extensibility Requirements

Extensibility is the ability of an application to go beyond its original design. By incorporating extensibility into the design of your application, the solution is able to extend its capabilities, and provide additional functionality. A good example of extensibility is Visual Basic 6.0. Visual Basic 6.0 has an Add-In Manager, which allows you to add components called Add-Ins that extend the capability of the development environment.

In discussing the requirements of the application with the customer, you should try to determine whether extensibility is an issue, and if so, what areas of the application may need extended features. This allows your solution’s functionality to grow with the needs of the customer, without necessarily needing to create a new version of the product.

Handling Functionality Growth Requirements

By incorporating extensibility into your design, you are able to benefit from third-party programmers developing added features for your application. You also gain an additional method of upgrading your application: rather than having to create a complete upgrade with new features for a product, the features can be added through something like an Add-In Manager that’s available as a menu item.

Analyzing Availability Requirements

In determining the requirements of a business, it is important to determine the availability required from the solution. Availability is the ability of users to access and use the solution. The importance of this is simple: if the business can’t use the functionality of the solution, then they can’t benefit from it. For mission-critical solutions, where the business depends on the solution to be up and running for the company to do business, it is absolutely vital that your solution work. This means that despite any failures, the solution should be available.

Be sure to familiarize yourself with the checklist of availability requirements, explained in the sections that follow:

Checklist 8-1 Analyzing Availability Requirements

Hours of operation: When your solution will be used
Level of availability: How access to features and data is controlled at different times
Geographic scope: Location of servers and users
Impact of downtime: Importance of data availability

Hours of Operation

Different companies have different hours of operation, which will determine when your solution will be used. If an organization isn’t open after a certain time of day or during certain days of the week, the availability of data during these times may be affected. The business may not want to have data available, to keep problems from occurring due to security breaches, or for other reasons. Sociological and religious beliefs may also come into play with this, if the customer has a conviction that employees aren’t to work on specific days or holidays.

The hours of operation for external organizations will also have an effect on the availability of data. For example, if your solution accessed stock market data, the data available to be accessed would be dependent on when the stock exchanges were open. Due to different time zones, the Toronto Stock Exchange or New York Stock Exchange would close at different times than the exchanges in Tokyo, London, or other cities.
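To illustrate, the time-zone reasoning here could be sketched as follows. The offsets and trading hours below are illustrative assumptions that ignore daylight saving time; a real solution would use a maintained time-zone database.

```python
# Illustrative sketch: checking whether an external exchange is open
# before requesting its data. Offsets and hours are assumptions that
# ignore daylight saving time.
EXCHANGE_HOURS = {
    # exchange: (UTC offset in hours, local open hour, local close hour)
    "NYSE": (-5, 9, 16),
    "LSE":  (0, 8, 16),
    "TSE":  (9, 9, 15),
}

def exchange_open(exchange, utc_hour):
    """Return True if the exchange is open at the given UTC hour."""
    offset, open_h, close_h = EXCHANGE_HOURS[exchange]
    local = (utc_hour + offset) % 24
    return open_h <= local < close_h
```

A solution could call such a check before polling a feed, and fall back to cached data when the exchange is closed.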

Determining the hours of operation can affect other areas of the project. This information can be applied to when the solution’s data can or should be backed up. It also determines the times when your solution needs to function without problems, as well as the applications your solution interacts with.

Level of Availability

In addition to the when of availability, you should determine how your solution and its data should be available. At the beginning of this chapter, we discussed how different security roles can be applied to control access to data. This was used to control which users would be able to access the data and how. This applied to access at any given time. However, it’s also important to determine if this access should be restricted or controlled differently during different times.

The level of availability may need to be different during specific hours or days of the week. For example, a real estate board may have you design an online solution where real estate agents can enter information on new listings they have for sale, or have sold. Because the board’s offices are open only from 9-5, Monday to Friday, and the data must be approved before being posted, your solution may allow information on houses to be viewed at any time, but adding and modifying data would be restricted outside of those hours. This means that the level of availability would need to be controlled through your solution. Different features and functionality in your solution would be limited during certain periods. While some features work, others would not.

Like information on hours of operation, this information could be applied to other areas of your project. You could determine that when data is being backed up nightly, or other system events are run, users would have limited or no ability to access data. The level of availability could also affect how users would interact with other solutions. If a certain SQL Server is turned off at specific times, you could have your solution access data on a different database server. By determining the level of availability, you determine how your solution is used at specified times.
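A minimal sketch of this kind of time-based availability, using the real estate board example, might look like the following. The feature names are invented for illustration.

```python
from datetime import datetime, time

# Sketch: viewing is always available, but adding and modifying listings
# is limited to office hours (9-5, Monday to Friday).
OFFICE_OPEN = time(9, 0)
OFFICE_CLOSE = time(17, 0)
WEEKDAYS = range(0, 5)  # Monday=0 .. Friday=4

def feature_available(feature, now):
    """Return True if the named feature may be used at datetime `now`."""
    if feature == "view":       # read access is never restricted
        return True
    in_hours = (now.weekday() in WEEKDAYS
                and OFFICE_OPEN <= now.time() < OFFICE_CLOSE)
    return in_hours             # add/modify only during office hours
```

The same gate could be extended to disable features while nightly backups or other system events are running.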

Geographic Scope

The geographic scope of the network will have an effect on how you design and develop your solution. Do users access servers in the same room that they’re in, or are servers located on other floors of the building, or across a campus? If the network is larger than that, are servers on the other end of the city or country, or in different cities or countries? If it is a small network, geographic scope won’t be as problematic, because users need to communicate only over a small distance. This means the computers will probably be able to use current protocols, and performance won’t be affected (or affected much) by your solution being added to the current system. However, if computers are spread out over a large network, new protocols may need to be used to communicate with other systems, and traffic may increase due to users having to access data across large areas. The geographic scope can affect how your solution is designed, and the technologies that will be used in it.

Networks are not always limited to a number of computers in a single room or building. The Internet is a good example of that. As a global network, it also has the distinction of being the largest network on the planet. Even if your solution doesn’t use the Internet, computers on your network may still be spread across multiple floors of a building, a campus, the city, or different countries. As such, the geographic scope could have an effect on how data is accessed, and from where.

If you are dealing with a large network, you should consider having data available on multiple servers. Changes to the database could then be replicated. With allowances for the time between data replication, this would allow each server to have the same data available. This decreases the workload of the one server. If users are given access to the server closest to them, it can decrease the amount of traffic on the network, because users won’t have to go through a large section of the network to reach data. For example, let’s say you had two offices in different cities connected by a WAN line. If your database resided on a single server, users would have to tie up the slow WAN line to access data. By having the same data available on a server at each office, and replication taking place over the WAN, users would need to connect only to the server at their location. Having users access data from the server that’s closest to them results in substantially improved performance.
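The server-selection idea above can be sketched simply. The office and server names here are invented; a real solution would read them from configuration.

```python
# Hypothetical sketch: directing each user to the replica server at their
# own office, so requests stay off the slow WAN link.
REPLICAS = {"toronto": "tor-sql-01", "london": "lon-sql-01"}

def server_for(user_office, default="tor-sql-01"):
    # Offices without a local replica fall back to the primary server
    return REPLICAS.get(user_office, default)
```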

Impact of Downtime

Different organizations will experience downtime differently, depending on the importance of the solution that becomes unavailable. For example, downtime in a solution that’s used for 911 calls would be extremely detrimental, while downtime in a personal information management solution used to keep track of the boss’s social calendar may be seen as more of an inconvenience. Generally, when a solution doesn’t function—for example, due to the system crashing or a lack of fault tolerance—it will affect the business in some way. If a video store’s rental program goes down, they can’t rent videos and they can’t make money. The same goes for any other program used for sales or customer service. Therefore, it’s important to determine what impact downtime will have on an organization.

Analyzing Human Factors Requirements

With any solution you design, you should never forget the people who will actually use your application. You’re not analyzing the requirements and designing a solution for the models, lines of code, bits, and bytes that make up the application. You’re doing this for the human beings who will buy and use the solution. Because of this, any well-designed application begins with analyzing the factors that will affect these users.

Target Audience

Before you begin designing and developing your solution, it’s important to understand for whom you’re creating the solution. Who will be the users of your solution? This will be your target audience, and by understanding them, you’ll be better able to create a good solution for them.

In defining your target audience, you should discuss with the customer which department, unit, or other entity the solution will serve. Will sales, accounting, marketing, or some other group of end users use the solution predominantly or exclusively? By determining who the end user will actually be, you can then obtain information from them, and discover what unique needs they expect to have addressed by your solution.

Localization

If your application will be used in different areas of the world or areas where different languages are used, your design will need to implement localization. Localization refers to adapting applications to international markets. This involves changing the user interface and other features so that people of other nationalities can use them. When localization is used, the design implements different character sets, language files, and so on. Features can be included in the application to allow users to change from one language or character set to another on the fly, allowing the same solution to be used, but different files to be accessed.
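The on-the-fly language switching described here can be sketched as a lookup into per-locale resource tables. The locale codes, keys, and strings below are invented for illustration.

```python
# Illustrative sketch: UI strings live in per-locale resource tables, and
# the active locale can be switched at run time without restarting.
RESOURCES = {
    "en": {"greeting": "Welcome", "quit": "Exit"},
    "fr": {"greeting": "Bienvenue", "quit": "Quitter"},
}

class Localizer:
    def __init__(self, locale="en"):
        self.locale = locale

    def set_locale(self, locale):
        if locale not in RESOURCES:
            raise ValueError("unsupported locale: " + locale)
        self.locale = locale

    def text(self, key):
        # Fall back to English if a string is missing from the locale table
        table = RESOURCES[self.locale]
        return table.get(key, RESOURCES["en"].get(key, key))
```

In a real solution, the tables would be loaded from separate language files rather than hard-coded, so new languages can be added without recompiling.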

Accessibility

Accessibility refers to the ability for everyone, despite any disabilities they may have, to be able to use your solution. You should determine through the customer whether any users have disabilities that will require accessibility features. If not, you may decide not to implement accessibility features in your solution. If there are users with disabilities in the organization who will use the solution, or if you’re creating a mass-market solution that will be sold off the shelf, you should consider incorporating such features into your design.

To provide accessibility for such users, you will need to implement features that allow users with certain disabilities to access your application. This may include using the following:

Keyboard layouts for people with one hand or who use a wand
Services for people who are deaf or hard of hearing
Features that provide audio instructions or other features for the blind or visually impaired
Services for people who have motion disabilities

The accessibility features you implement will allow these users to effectively use the solution.

Roaming Users

Roaming users are users who access your solution from more than one workstation, and thereby require having their preferences available from more than one computer. Such users may need to move from one workstation to another in the organization, or may connect to your solution remotely through a computer they don’t normally use. By discussing whether roaming users will be an issue for the program, you can implement profiles or features that address such user needs.
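A profile mechanism for roaming users can be sketched as a central store keyed by user name. The preference names and defaults below are assumptions; a real store would be a server-side database rather than an in-memory dictionary.

```python
# Hypothetical sketch: preferences follow the user from workstation to
# workstation because they live in a central store, not on one machine.
DEFAULT_PREFS = {"font_size": 10, "toolbar": "standard"}

class ProfileStore:
    def __init__(self):
        self._profiles = {}   # stand-in for a server-side database

    def load(self, user):
        # Merge saved settings over the defaults, so first-time users
        # get sensible values and partial profiles stay complete
        prefs = dict(DEFAULT_PREFS)
        prefs.update(self._profiles.get(user, {}))
        return prefs

    def save(self, user, prefs):
        self._profiles[user] = dict(prefs)
```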

Help Considerations

Help should always be available to users, so they can find an answer to any problems or difficulties they experience with a solution. This includes online user assistance, such as help files, context-sensitive help, Web pages with answers to frequently asked questions or problems, and other methods of assistance. Product documentation and user manuals should be available through the solution, or as printed material. In addition, you may also want to consider implementing a help desk or support staff, who can assist users in person, over the phone, or over the network or Internet.

Training Requirements

You should determine what training users would need to use the solution. Depending on the solution and the previous experience and education of the users, training needs may range from minor to intensive. You need to discover through the business or customer whether users have experience with previous versions of the product, similar solutions, or have little or no experience with solutions of this type (or perhaps with computers in general). Determining the level of training required will impact the training methods you implement in later stages of the project.

Training can be in a classroom setting. This may be in the form of a short seminar for upgraded solutions, or intensive training sessions that may take significantly longer. Online training can also be used, allowing users to access training through their computer. A Web site can be set up with Web pages that take the user through the features of the application, and how to use them. Streaming video or video files that users can download and play through a viewer can also be used. This allows users to see an instructor or some other presentation over the Internet, network, or by loading a video file that can be played on the computer.

Physical Environment Constraints

In discussing requirements for the solution, you should discern whether there are any physical environment issues that may affect the project. The physical environment is an actual facility or location, and can include where your solution will be used, the training area, and the facility where development will take place. As we’ll see, when it comes to physical environments, there are a number of elements that may affect your project.

The physical environment will affect technologies used in your design, and how it will be implemented. For example, if you were designing a solution for an organization with buildings on either side of a major street, you wouldn’t be able to rely on a network cable being run from one office to another. If underground cabling wasn’t an option, you may have to add features to your solution that include such technologies as infrared devices, the Internet, or other technologies that would enable data to be transmitted from one area to another. If a larger expanse separated two areas, features that worked with microwave transmitters, the Internet, or other technologies might be used. As you can see, such networking issues would affect what you might include in the solution or service.

Another issue is whether the physical environment can support the people who will create, be trained on, and use your solution. It’s important that your project team have facilities they can properly work in, with the room and resources they need to complete the project. Imagine a room that can’t hold the number of people in your team, or Development finding there are no computers. Training facilities should also have enough computers to support users who will be in training sessions, and enough room so they can work. You should attempt to have wheelchair-accessible facilities, so that anyone who’s physically disabled can reach the room where training is being held. While this is an issue of special-needs considerations, it crosses over into the realm of physical environment.

Special-Needs Considerations

As mentioned earlier, certain users with disabilities should be considered when addressing human factors requirements. This applies not only to accessibility in the solution, but to help and training considerations as well. In addition to providing printed material for user documentation, you may want to provide audio or videotapes that detail the same information. This should also be discussed for any training sessions that will be used for teaching users how to use the solution. Audiotapes are useful for the blind and visually impaired, while sign language can be used in videotapes for deaf and hearing-impaired individuals. You should also consider providing wheelchair-accessible facilities for people who are physically disabled. This will allow them to reach the training, and get the assistance they need to learn about your solution.

On the Job: While these issues may seem like common sense, many training sessions have been in facilities that were inaccessible to people who are physically disabled. Visually intensive presentations have also backfired when dealing with an audience of blind and visually impaired people, while trainers have also experienced the foolish feeling of addressing a room filled with deaf people, without benefit of an interpreter to use sign language. Always discuss what considerations need to be made for such users, so they can have a good experience with your solution.

Analyzing Requirements for Integrating a Solution with Existing Applications

No application you create will be the only one existing on a system. At the very least, it will work with an operating system. It’s not uncommon for your solution to work with other solutions on a user’s computer or the network. Rather than reproducing the functionality of another solution that’s already in use, it’s much simpler and more efficient to have your solution work with the other programs already in place. To make sure your solution plays well with others, though, you need to identify which programs your solution will interact with, and how those solutions will affect your design.

Be sure to go over each item in the following checklist when analyzing integration of your solution with existing applications. Each is explained in the sections that follow.

Checklist 8-2 Analyzing Requirements for Integrating Existing Applications

Legacy applications
Format and location of existing data
Connectivity to existing applications
Data conversion
Data enhancement requirements

Legacy Applications

Companies put a lot of money into application development, and purchasing off-the-shelf solutions. Therefore, they’re often a little hesitant about putting these applications out to pasture. For example, after considerable cost in buying mainframe computers and creating programs for these beasts, they may not want to trash an entire system to be up to date. In addition, workstations may use older operating systems (such as Windows 3.1 or 3.11), because the company either can’t afford to upgrade everyone at once, or decided that users of these systems don’t need the upgrade just yet. Regardless of the reasoning, you need to determine which legacy applications and operating systems your solution will interact with, so that you can incorporate support for these legacy solutions and systems into your design.

Format and Location of Existing Data

Where data currently resides, and the format it resides in, should be identified so you can determine what technologies will be used in your design. If you’re creating a new solution for new data, this doesn’t become an issue. However, organizations that have been around awhile may require your solution to access data from older data stores or operating systems. Operating systems your solution may need to deal with include NT Server, AS/400, OS/390, Multiple Virtual Storage (MVS/ESA), Virtual Storage Extended (VSE), and Virtual Machine (VM). Data may be stored on mainframes, which use storage technologies like DB2, VSAM, or IMS. If SQL Server is used, then the data may be distributed. As you can see, there are many different storage systems your solution may need to work with. It’s important to determine which of these may affect the design of your solution, so the design can incorporate technologies to access and work with these storage systems.

Connectivity to Existing Applications

It is important to determine early in the project whether your solution will need to interact with existing applications, and how it will need to connect to them. If so, an analysis of how you will connect to the application needs to be made. Certain drivers may need to be implemented or designed; where the other application resides should be established; protocols used by the other solution will need to be identified; and technologies used to connect to the application need to be determined. These things should be done early in the project, so that the necessary work can be incorporated into the design before coding begins.

Data Conversion

You should discuss with the business and customer whether data will need to be converted from the current format into a newer format. For example, if data is currently stored on the mainframe, they may want that data converted into another format for use on a different system, such as SQL Server. This should be determined early, so you can plan for the conversion in your project.

In addition, data may need to be converted into a different format for display. This is relevant for Web-based applications, where the data may need to be converted into HTML for display on the Internet. In determining whether conversion of this type is necessary, you may also be discovering the type of application or output required from your solution.
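A display conversion of this kind can be sketched as follows: tabular records are turned into an HTML table for a Web-based front end. The field names and data are invented for illustration.

```python
from html import escape

# Illustrative sketch: converting rows of data into HTML for display.
def rows_to_html(headers, rows):
    parts = ["<table>"]
    # Escape all values so data containing <, >, or & can't break the page
    parts.append("<tr>" + "".join("<th>%s</th>" % escape(h) for h in headers) + "</tr>")
    for row in rows:
        parts.append("<tr>" + "".join("<td>%s</td>" % escape(str(c)) for c in row) + "</tr>")
    parts.append("</table>")
    return "\n".join(parts)
```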

Data Enhancement Requirements

It is important to determine whether there are any data enhancement requirements that will need to be implemented as part of your design. This involves looking at the data being used, and the type of database that your solution will interact with, and determining if changes need to be made. For example, not all data types are supported by every development platform. If data being stored is of a type not supported by Visual Basic, Visual C++, or whatever programming language you’re developing in, you will need to convert the data. If data is being stored in a database that doesn’t support certain functionality, you will need to convert or migrate the data into one that will support it.

In addition, you should discuss any changes to the current database. This includes removing data that’s no longer required, or adding new data tables, columns, and so forth to the database. The data format will influence the design of your visual interface, and determine what data will be accessed by the application.

An issue that’s come out in recent years deals with the Y2K (Year 2000) problem. In previous years, programmers used two digits to represent the year. For example, 1998 would be 98 and 1999 would be 99. The problem is that when the year 2000 hits, computers will read such data as 00, and will consider this two-digit number to be the year 1900. You will need to consider the Y2K problem in designing your solution.
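One common remediation is "windowing": two-digit years below a chosen pivot are treated as 20xx, and the rest as 19xx. The pivot of 30 below is an assumption; the right value depends on the date ranges in the legacy data.

```python
# Sketch of the windowing fix for two-digit years. With a pivot of 30,
# 00-29 map to 2000-2029 and 30-99 map to 1930-1999, instead of every
# value collapsing into 19xx.
PIVOT = 30

def expand_year(two_digit_year):
    """Expand a two-digit year from legacy data to four digits."""
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a two-digit year")
    if two_digit_year < PIVOT:
        return 2000 + two_digit_year
    return 1900 + two_digit_year
```

Windowing avoids rewriting stored data, but converting the data store itself to four-digit years is the more permanent fix.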

Analyzing Existing Business Methodologies and Limitations

An important part of creating a good solution, and having a project succeed, is taking into account the methodologies and limitations of the organization. This includes such things as how the organization conducts business, legal issues that will affect the solution, and the structure of the organization. These issues are addressed and applied to the design of the solution, and require analysis before the product can be developed and used.

Be sure to go over each item in the following checklist when analyzing methodologies and limitations of the organization. Each is explained in the sections that follow.

Checklist 8-3 Analyzing Methodologies and Limitations

Legal issues
Current business practices
Organization structure
Budget
Implementation and training methodologies
Quality control requirements
Customer's needs

Legal Issues

Everybody hates the law, unless it works in their favor. Unfortunately, to get the law to work in your favor, you need to identify the legal issues that will apply to your situation. For solution design, this means looking at each of the legal issues that will affect the business and your solution. In doing so, you can create solutions that adhere to legal requirements, and thereby avoid problems that may occur from inadvertently breaking the law.

There are many legal issues that can affect a solution. For example, copyright and trademark violations can result in a lawsuit, or cause significant problems for a business. Once a business has copyrighted its logo, or had it become a trademark of the company, you shouldn’t modify it in any way when applying a graphic of it to your solution. If you use a name or image that another company holds a copyright or trademark on, the business will probably be sued.

Taxes are a common legal issue that need to be identified for sales software, and other solutions that require adding taxes to specific figures. Tax rates differ from province to province, state to state or country to country. While one area may have a 7 percent sales tax, another may have a 5 percent or 8 percent tax rate. In addition, there may be hidden taxes or special taxes that need to be applied to figures. An example of this is a food or bar tax applied to customer bills in a restaurant sales application. By failing to apply the correct tax, or not applying it at all, you can cause severe legal and financial problems for the business.
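The safe approach is to look up the rate for the customer’s region rather than hard-coding one. The region codes and rates below are invented; a real solution would load them from a table that can be maintained as the law changes.

```python
# Hypothetical sketch: applying the sales-tax rate for the customer's
# region. Rates and region codes are illustrative only.
TAX_RATES = {"ON": 0.08, "AB": 0.05, "NY": 0.07}

def total_with_tax(subtotal, region):
    """Return the subtotal plus the region's sales tax, rounded to cents."""
    try:
        rate = TAX_RATES[region]
    except KeyError:
        # Refusing to guess is safer than silently applying no tax
        raise ValueError("no tax rate on file for region: " + region)
    return round(subtotal * (1 + rate), 2)
```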

Regardless of the solution, it’s vital to understand the legal issues that can affect your solution. This may be laws such as tax rates, which need to be applied to the solution, or other legal issues that control what images, text, and so forth can be included in your solution.

Current Business Practices

Business practices and policies will determine the functionality of your solution. The policies and practices show how the organization conducts its business, and these policies and practices will be applied to the design of the solution. It’s important to remember that design and development of a solution is driven by the needs of the business, and affected by the practices and policies within the business.

Organization Structure

Understanding the structure of the organization will help in the design of your project in a number of ways. First, it will help to understand how work flows through the organization, and how the work performed with your solution will be passed up the corporate ladder. For example, if you were creating a solution for processing payroll, the application may need to have data passed through several departments in the organization, before a paycheck can be cut. First, a manager may need to approve the number of hours worked by an employee. This may then be sent to an accounting officer at corporate headquarters, who approves the schedule and passes it to payroll. The check is cut and needs to be sent to the employee. This data on payouts will then be sent to accounts payable, who adjust the books to show this has been paid. As you can see, by understanding the corporate structure, you will understand how work flows through the organization.

Another important reason for understanding the structure of an organization is to know the hierarchy that your project team will deal with. Who do you go to for information on a particular department or division of the corporation? If the project team has a problem, what is the pecking order for getting information and discussing problems in specific areas? Since you don’t want to go over someone’s head, and have that person as your enemy, it’s wise to understand how the organization is set up, so you can deal with it effectively.

Budget

Budget determines the finances that can be allotted to your project, and is something that needs to be determined early. The budget will always be a limitation of your project, as no one is lucky enough to have an unlimited supply of wealth to pay for the resources necessary to create the product. This can determine the computers, tools, facilities, and other issues that will make up the project. It also affects the number of people you can have on your project team.

Since project members want to be paid for their labor, and there is a limited amount of money to pay these members, the budget will affect how many people will work on the project and for how long. Large projects with big budgets will have larger teams, which can consist of team leaders and multiple members in each role of the team model. Smaller projects with limited budgets may have a single person in each role of the team model, while even more limited budgets may require members to take on more responsibility and serve in more than one role. As you can see, by determining the budget of a project, you are subsequently determining how your team will be structured and the resources available.

With budgets, it’s important to allot a percentage of funds for overtime, and to prepare for the possibility of the project going over schedule. If you planned that your project would take two months to complete, and it goes two weeks over schedule, this means that two weeks of paychecks and other bills will need to be paid. By not including a buffer for unanticipated events, you will go over budget, or possibly deplete the budget before the project is finished.
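The buffer arithmetic described above is simple to sketch. The weekly cost and 15 percent contingency rate are assumptions for illustration.

```python
# A minimal sketch of budgeting with a contingency buffer, so a schedule
# slip of a week or two doesn't deplete the budget.
def budget_with_buffer(weekly_cost, planned_weeks, buffer_rate=0.15):
    """Return the planned labor cost plus a contingency buffer."""
    planned = weekly_cost * planned_weeks
    return planned * (1 + buffer_rate)
```

For example, a team costing $10,000 a week on an eight-week plan has a planned cost of $80,000; a 15 percent buffer budgets $92,000 against overruns.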

Implementation and Training Methodologies

Different organizations may have different methodologies that they use for training users and implementing new products. It’s important to determine what requirements the customer may have in this regard, so that the affected roles of the team model can incorporate these into their respective plans. For example, User Education will want to acknowledge any special training methods that need to be used when creating their training plans. This is something we will discuss in greater detail in the chapters that follow. This information should be derived from the customer early in the project.

Quality Control Requirements

Quality control determines how many defects your product ships with, and how many are found and removed before release, thereby determining the quality of the product. In other words, the more errors and bugs your software has when it’s released, the lower the quality of your product. There are two main ways of ensuring a high-quality product: good design and proper testing.

Many of the problems that occur in software could have been avoided in the design phase of the project. By having open discussions on the requirements of the solution, you can avoid implementing features and functionality that aren’t required in a solution, and put in what the customer feels is higher priority. Good design requires using good models, such as those we’ve discussed in the previous chapters, and following the methods we’ve discussed throughout this book. By using proven models and methodologies, you will improve the quality of your product.

Testing is responsible for finding defects in the product after Development has finished coding the software. When testers find bugs and errors in the program, it goes back to the developers, who fix the problem. Testing then checks the newly modified software for any additional problems, because when code is modified, new bugs and errors may result. When Testing is complete, the level of quality control has been met, and the product can be passed forward to the next stage of the project.

There is a direct correlation between quality control and the schedule of a project. It’s been found that projects with the lowest number of defects have also had the shortest schedules. When a product is properly designed, the project will run more smoothly, and fewer defects will occur. When Testing finds defects, the code must go back to Development to be fixed, and additional testing time must be added to the schedule to ensure no further problems have resulted from the modified code. If a project’s schedule is too tight, and testing is rushed or allotted too short a period, the number of defects in the product will be higher. This is because Testing won’t have enough time to properly test the product, and errors and bugs will slip through.

Customer's Needs

The needs of the customer are what drive the solution’s design and development. Because of this, it’s vital that you have a clear understanding of what those needs are, so you can design an application to address them. If you don’t understand what the customer needs, the project will fail.

It’s important to remember that the customer may not be the person who is actually using the program. That is the end user, and not necessarily the customer. The customer buys the product, and generally represents the interests of the organization. By understanding the customer’s needs, you generally identify the needs of the business itself.

Analyzing Scalability Requirements

Scalability accounts for the expansion of elements that affect your solution. It deals with such issues as the growth of user numbers, data, the organization, and cycle of use. As time passes, businesses may get bigger, more employees may be hired, and the amount of data stored and retrieved by your solution will proportionately increase. Because of this, it’s important to identify and analyze scalability requirements so that your solution is able to handle such growth.

If a solution fails to account for scalability, it can quickly become ineffective and outdated. If a company planned to merge with another business, and the number of people using the application and its data doubled, you would need to know this before designing the solution. While the customer generally won’t reveal information as sensitive as this, they will often tell you that there are plans for an additional number of users to be using the product after so many months or years. By taking such information into account, and scaling your solution to meet these requirements, your solution won’t become useless once such growth occurs.

As the number of users, the amount of data being stored and retrieved, and the size of the organization increase, the usability and performance of the solution can be affected. If your solution can’t effectively handle the growth, then a new version of the solution must be designed and developed. To keep a solution from requiring such upgrading before its time, scalability factors should be incorporated into the design.

Familiarize yourself with the items in the following checklist when analyzing scalability requirements. Each is explained in the sections that follow.

Checklist 8-4 Analyzing Scalability Requirements

Growth of audience
Organization
Data
Cycle of use

Growth of Audience

Mergers, upsizing, expansion, and hiring less expensive employees after retiring high-paid ones are common topics in the news. From a solution design standpoint, each of these means an increase in the number of people who will use the solution. As the audience grows, additional stress is placed on the systems and networks this audience uses. By identifying such growth factors early, you can incorporate them into your design. Rather than designing a solution for the current number of users, you can plan to develop a solution that will support the number of users expected to use the application in the future.

Organization

Organizations grow just like people. While an organization may reside on a single campus today, it may have offices across the country or the world a year or more from now. Such growth can affect the performance and usability of a solution. Networks are spread out, WAN lines may be used, and technologies not currently in use today will be required later. It’s important to identify the estimated expansion of the organization early in design, so these growth factors can be incorporated into the solution’s design.

Data

As more users access a database, the amount of data being stored and retrieved increases proportionally. By determining the number of users expected in the future, and multiplying the amount of data each current user accesses by that number, you can estimate the growth in data access.

However, new users aren’t the only way that data storage may increase. For example, if the company experiences an increase in sales, more customers and orders will be added to a sales database. Here, the estimated growth of data is a reflection of increased sales. Similarly, if new types of data are stored to the database, this will also affect data growth. If pictures of employees were scanned and added to an employee database, this new infusion of data would mean that more data will be stored and accessed. Because of how data can grow over time—due to new types of data being used and increases in data access—you need to incorporate these growth factors into your design. By not doing so, performance and usability will suffer.
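The rule of thumb in this section (scale current storage by expected user growth, then add any new per-record data, such as scanned photos) can be sketched as a short calculation. All figures in this snippet are illustrative assumptions:

```python
# Hypothetical data-growth estimate: scale today's per-user storage by
# the expected future user count, then add any new per-user data types.

def estimated_storage_mb(current_mb, current_users, future_users,
                         new_data_per_user_mb=0.0):
    """Return the estimated future storage requirement, in megabytes."""
    per_user = current_mb / current_users
    return future_users * (per_user + new_data_per_user_mb)

# 500 MB today for 100 users; a merger doubles the user base, and each
# employee record gains a 0.5 MB scanned photo:
print(estimated_storage_mb(500, 100, 200, new_data_per_user_mb=0.5))  # 1100.0
```

The estimate more than doubles even though the user count only doubles, because the new data type adds storage for every user; this is why both growth factors belong in the design.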

Cycle of Use

Early in a project, it is important to determine the cycle of use for your software. The cycle of use is the time that your solution will be in use before new versions of the software need to be released. This may be the result of the software being time sensitive, so that after a specific date, a new version is required. This is something that happens yearly with personal income tax software that’s on the market. This not only affects when development of the new version must take place, but also when the current version must be released to the market. Another common occurrence is the need for new features to be implemented to support new needs. New releases may also be required when more users have an effect on performance, so that a new version of the software is required to support the new user population. As you can see, there are numerous elements governing the cycle of use, and in many cases the requirements affecting it will vary from project to project. It is important to identify what will have an impact on the lifetime of your solution, and when use of the product will dictate that newer versions be released.

Consider the following scenario questions and answers.

Which levels of RAID support fault tolerance?

Fault tolerance is supported by RAID levels 1 (mirroring) and 5 (disk striping with parity).

If only a small number of users are disabled, why do I need to bother with accessibility issues for them?

Many different types of users may use your solution. Some may have disabilities, while others may not. Despite this, each user is a vital player in an organization, and your product should be usable for all of them.

What is localization?

Localization deals with the locality in which a solution will be used. If different languages, character sets, and so forth are required, localization addresses these issues so the product can be used in an international market.

Why should I make my solution scalable?

Scalability takes into account growth within the organization. By making your solution scalable, it can handle increases in users, data, and other issues that will affect the performance and usability of the solution.

Certification Summary

Security is an important part of any solution. Security not only deals with keeping unauthorized users out, but keeping data used by your solution safe and secure. In analyzing the security requirements for your solution, you should determine the level of security required, and the mechanisms currently in place. You should determine whether authorization through user accounts, audit trails, existing security mechanisms, fault tolerance, regimens of backing up data, or other methods are required in your solution. By identifying the security requirements of a solution, and designing a level of security that matches the customer’s and business’s needs, you will be able to create a solution that is as secure as it is useful.

Performance determines how well an application will function under work conditions. If an application runs quickly and efficiently, it has high performance. If it works sluggishly, then it has low performance abilities. Areas to consider when analyzing performance requirements are the number of transactions that will occur over given periods of time, bandwidth, capacity, response time, and barriers that will affect performance. By properly analyzing performance issues and requirements, you can design robust solutions that run well in the workplace.

Maintainability deals with the upkeep of the solution and its source files. This includes such things as a regular regimen of backups and user account maintenance, as well as issues that deal with the source code of the application. To effectively maintain the source code of the solution, you should implement version control, such as through Visual SourceSafe, and use comments and consistent naming conventions. Maintainability also requires that you determine the breadth of the solution’s distribution, and the methods of distribution to be used.

Extensibility is the ability of a solution to extend its features beyond its original design. Through extensibility, a solution can add components that are created by your development team or by third-party developers. This gives your solution the ability to grow with the needs of the customer, without requiring a new version of the product.

Availability is your solution’s capability to function during the times of operation. With availability, your solution should continue to run even when an error or failure occurs. This means implementing fault tolerance and error handling in your solution.

Human factor requirements deal with the factors that will affect the human beings who use the application. This includes such things as accessibility, localization, identification of the target audience, training and help, as well as other issues that govern the user’s ability to effectively use the application.

Some projects may require that your solution work with other solutions that are currently in place in the organization. This means your solution may need to interact with legacy applications, operating systems, and data formats. You’ll need to determine whether data needs to be converted into different formats, and how to connect with the data and existing solutions. By identifying this through discussion with the business, you can implement these requirements into your solution.

Scalability deals with growth factors that will affect the usability and performance of a solution. It deals with increases in the number of users, data storage and retrieval, cycle of use, and growth of the organization itself. By incorporating scalability into your design, the solution is better able to handle such factors.

Two-Minute Drill
For any project you work on, you need to look at issues that will affect the project, and determine how these issues should be addressed and/or implemented in the analysis of requirements.
These issues include such things as security, performance, maintainability, extensibility, availability, scalability, and requirements that deal with human factors.
It’s important that only authorized users are allowed to use your solution and access data, and that any data your solution uses is kept safe and secure. This can be done through any combination of user accounts, audit trails, existing security mechanisms, fault tolerance, regimens of backing up data, and other methods.
A role is a symbolic name used to group the users of a set of software components to determine who can access those components’ interfaces. In analyzing the requirements of an application, it is important to identify the roles people play in an organization, and which roles will apply to the solution you’ll create.
Many times, students who are taking the Microsoft exam confuse the guest and client roles, thinking of clients as customers, who often fall into a guest role. It’s important to distinguish between these two roles. Think of guests as visitors, and clients as workers on the client machines accessing a server. Thinking of the roles in this way helps keep the two separate.
In designing any solution that uses security features, you need to determine what impact your solution will have on the existing security environment. Basically, your solution will have one of three effects on existing security in the enterprise:
Enhance it
Diminish or cripple existing security
Have no effect whatsoever on the existing security
Fault tolerance is the ability of a solution to function regardless of whether a fault occurred. In short, it is non-stop availability of a solution.
Microsoft Windows NT Server supports only levels 0, 1, and 5 of RAID. While there are other levels of RAID (levels 0 through 10, and level 53), they won’t appear on the exam. You should know the three levels mentioned here, especially 1 and 5, which are fault tolerant.
Areas in which you’ll want to plan for maintaining the program include backing up data, and how user accounts will be maintained.
Planning for maintainability means looking at what the application or service will require in terms of future use.
The security context of a solution is made up of its security attributes or rules. This determines what a user is able to access, and what the user is able to do once access is achieved.
Auditing is where activities performed by certain or all users are documented to a file, allowing a user (such as the administrator) to see what these users have done.
It’s important to determine what existing mechanisms for security policies are already in place in an organization, before deciding on new ones that could conflict with or duplicate them.
Performance determines how well an application works under daily conditions. This means that despite increases in the number of users accessing data, the volume of transactions over a given period, or other issues, the application performs as required in a speedy and efficient manner.
A transaction is one or more separate actions that are grouped together, and executed as a single action.
Bandwidth is the network’s capacity to carry information, and is measured in data transferred per unit of time, usually seconds.
Capacity is the volume or limit something can handle, and can have a direct effect on the performance of your solution.
You should determine what standards currently exist in an organization when determining the performance requirements of your solution.
It is vital that you determine both peak and average requirements for your solution. Peak values show the highest amount of activity your solution experiences at a given time.
Response time is the difference in time between the end of a query or command and when the results of that query or command first begin to appear on the screen.
There are a number of methods of distribution that you can use to get your solution to the customer, including floppy disk, CD-ROM, and network or Internet distribution.
It’s important to understand what the customer expects in terms of maintenance. This will aid in determining what will need to be put into place to support the solution, and any future versions that may be created.
The maintenance staff is the group of people who will support and maintain the solution once it’s been implemented. This doesn’t mean that they have anything to do with maintaining source code—that’s in the realm of the Development team.
Extensibility is the ability of an application to go beyond its original design. By incorporating extensibility into the design of your application, the solution is able to extend its capabilities, and provide additional functionality.
Availability is the ability of users to access and use the solution. The importance of this is simple: if the business can’t use the functionality of the solution, then they can’t benefit from it.
You’re not analyzing the requirements and designing a solution for the models, lines of code, bits, and bytes that make up the application. You’re doing this for the human beings who will buy and use the solution.
To make sure your solution plays well with others, you need to identify what programs your solution will interact with, and how those solutions will affect your design.
An important part of creating a good solution, and having a project succeed, is taking into account the methodologies and limitations of the organization.
Scalability accounts for the expansion of elements that affect your solution. It deals with such issues as the growth of user numbers, data, the organization, and cycle of use.

Answers to Exercise 8-1

Administrators

Clients

Guests

Backup

Darren

20 employees

Customer

Julie

Answers to Exercise 8-2

  1. There will be a peak requirement of 300 users accessing the application.
  2. Half the total users will spend time using the solution. This means an average of 150 users per hour.
  3. Since the solution will be accessed between 9 and 5, and replication will take place between the two offices every 15 minutes, backups should be done when the business is closed.
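The average in answer 2 follows from simple arithmetic. As a hypothetical sketch, with the figures taken from the exercise:

```python
# Peak vs. average users from Exercise 8-2: half the peak user
# population is active in a given hour.
peak_users = 300             # peak requirement for the application
avg_users = peak_users // 2  # half the total users per hour
print(avg_users)             # 150
```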