Understanding the Purpose and Value of Conceptual Design
Applying the Conceptual Design Process to a Business Solution
Gathering User Requirements and Gaining User Perspective
Techniques for Gathering User Input
Synthesizing Business and User Perspectives
Constructing Conceptual Designs Based on Scenarios
Designing the Future Work State: Identifying Appropriate Solution Types
Defining the Technical Architecture for a Solution
Identifying Appropriate Technologies for Implementing the Business Solution
The Technology Environment of the Company
Determining the Type of Solution
Choosing a Data Storage Architecture
Testing the Feasibility of a Proposed Technical Architecture
Developing an Appropriate Deployment Strategy
Validating and Refining through User Feedback
Answer to Exercise 9-1: Usage Scenario for Video Store Rental Using Task Sequence Model
Answers to Exercise 9-2: Identifying Different Types of Solutions
· Understanding the Purpose and Value of Conceptual Design
· Applying the Conceptual Design Process to a Business Solution
· Gathering User Requirements and Gaining User Perspective
· Gaining Business Perspective
· Synthesizing Business and User Perspectives
· Constructing Conceptual Designs Based on Scenarios
· Designing the Future Work State: Identifying Appropriate Solution Types
· Defining the Technical Architecture for a Solution
· Validating and Refining Through User Feedback
As we saw in Chapter 2, Conceptual Design is the first part of the Application model. It is here that you look at the activities performed by users and the tasks that must be performed to solve a business problem, and then lay the groundwork for a solution that addresses these issues.
As we’ll see in this chapter, there is a considerable amount of work that goes into the Conceptual Design process. Information is gathered on users and their roles through a variety of methods; a perspective on the business is obtained; and usage scenarios are generated. These and other tasks set the foundation for your solution design, and are passed into the logical design of the solution.
When you were in school, you probably had to write essays on certain topics. The first step in creating the essay was coming up with a concept for the paper, and determining what issues it would address. This meant gathering information, which was then applied to the subtopics included in the essay and determined the overall structure of the paper. In the same way that your essays started with a basic concept and were based on the information gathered for the paper, conceptual design is where you determine what users actually need to do, and forge the basic concept of what your application will become.
Conceptual design looks at the activities performed by the business, and the tasks a user needs to perform to address a certain problem. Information is gathered through this design perspective, which is then applied to the logical and physical design perspectives. This information consists of input from users, who identify their needs and what is required from the application. In other words, they explain what they want to do and how they want to do it. In this process they tell you how they envision the application. By interviewing the people who will use the application, conceptual design allows you to obtain an understanding of the problems faced by a business, and identify the standards by which users will judge the completed product.
An application created for an enterprise is called a “business solution” because it’s built to solve specific business problems. In other words, your application becomes the answer to the enterprise’s question: “How are we going to deal with this problem?” Therefore, the development of an application is driven by the needs of the business. Regardless of where you are in the design process, you should be able to trace the current state of design to the business’s needs. If you can’t, it means that your team has lost focus on the application’s purpose.
The conceptual phase is potentially the most damaging phase in the design of a distributed application. Design flaws instituted in the conceptual phase can produce multiple logical or physical instances of application design or functionality that either do not address a business need or ignore user requirements. Design flaws can surface in many forms in the physical phase, including poorly scaling COM components, inefficient data service components, user interfaces that are difficult to navigate, poorly designed or bloated database structures, or resource-hungry tiers in general. While this is by no means an all-encompassing list, design flaws usually begin in the conceptual phase. Therefore, spending the extra effort to produce the best possible conceptual design can yield important benefits.
One of the key targets for ensuring that an application meets requirements is the proper identification of the end users. It is usually easy to identify the class of individuals who will navigate through the user interface as users, but many times, other classes of application users are completely ignored. If the information produced by the application is destined for vendors, partners, or remote offices, users may include many other groups of individuals. Also, end users may not be actual individuals. Departments, divisions, and the business or company itself should be considered users of the application. Failure to properly recognize non-physical entities as users of an application can detrimentally affect the conceptual design phase.
—Michael Lane Thomas, MCSE+I, MCSD, MCP+SB, MSS, MCT, A+
The conceptual design process is made up of several tasks, which are used to determine and substantiate the requirements of the application:
· Identifying users and their roles
· Gathering input from users
· Validating the design
These procedures are important, because while you want to get user input, some users may request features that aren’t necessary. For example, while a user may think it important to have a web browser included in a spreadsheet program, this wouldn’t be essential if there were no connections to the Internet and no plans for a corporate intranet. By working through these tasks, you can determine which requirements are valid and which are not.
As with most things in life, it’s important to know who you’re dealing with when designing an application. It’s for that reason that the first step in conceptual design is identifying users and their roles. Because the purpose of conceptual design is to determine the business needs that drive application development, the term user can refer to a single end-user, the business as a whole, various units that make up the business, or other interested parties. As we’ll see later in this chapter, the reason for this is to gain the fullest possible vision of what the application should be. If only individual end users were polled for input, the product would be usable but wouldn’t necessarily reflect the goals and needs of the enterprise itself.
In identifying the different users that provide requirements and perspectives used for the design of your business solution, user profiles are created. User profiles are documents that describe who you’re dealing with, and provide a depiction of the people and groups who use the system. This information is used to organize how data will be gathered, and identify from whom this input is gathered. These profiles can also be created at the time you generate usage scenarios, which we’ll discuss later in this chapter.
There are many different techniques for gathering input and each has its benefits and drawbacks. These include such methods as user interviews, focus groups, and surveys. These methods address the user’s viewpoints and requirements, rather than technical elements that will go into the application. Later in this chapter, we’ll discuss each of these and other methods for gathering input. It is through these techniques that you gain an understanding of what users need and expect from your application.
Once information has been gathered, it is applied to usage scenarios. Usage scenarios are used to depict the system requirements in the context of the user, by showing how business processes are, or should be, executed. Usage scenarios take the raw data that’s been gathered, and apply it to a step-by-step documentation of what occurs first, second, third, and so on in the execution of a specific task. This transforms the requirements you’ve gathered into the context of how features, processes, or functions are used.
The final step in the conceptual design process is validating your design. This presents your understanding of what’s required from the product to the people who have a stake in the product’s design. By taking the end user and other interested parties step-by-step through the usage scenarios you’ve created, you’re able to determine if you’ve correctly understood what’s required of the application. You can also create prototypes of the user interface, which allow the user to provide input on whether the interface suits their needs.
Conceptual design ends with several deliverables that are injected into the logical design of the application. Once you have completed the conceptual design process, you will have the following deliverables in your possession:
· User profiles, which identify users, their roles and responsibilities, and who played a part in providing information
· Usage scenarios, which depict how the needs and perceptions of the system translate into actual tasks, by showing a step-by-step picture of how business processes are executed
Once you’ve reached the end of the conceptual design process, you are generally ready to apply these deliverables to the logical design. If you need to, you can return to the conceptual design process to determine needs and perceptions of other features or functionality in your program. In addition, once one feature, function, or process has been conceptually designed, you can apply this to logical design, and then begin conceptually designing another element of your application. This provides great flexibility in designing your business solution.
Conceptual design is the foundation of what your business solution will become. This design method is a perspective of the Solutions Design model, which ties the Application, Team, and Process models together. Conceptual design provides requirements that are passed forward to logical design, and used by the Application model to separate an application into distinct services. Conceptual design takes place immediately after the envisioning phase of the Process model, and is the first description of what a system does to solve the problems outlined in the vision/scope document. Through conceptual design you get a specific description of what you’ve actually been hired to build.
The Team model is used to specify team members’ responsibilities in the conceptual design of the product. While every member of the team is involved in the conceptual design, Product and Program Management have the greatest responsibility. Program Management has ownership of the conceptual design, and is responsible for driving the conceptual design process. Product Management is responsible for gathering user input and validating usage scenarios, which we’ll discuss later in this chapter. Despite the fact that the major responsibilities lie with these two roles, each member of the team participates in the conceptual design.
Development is where coding of the program takes place, so this role aids in the conceptual design process by evaluating issues that would affect how the business solution is coded. Development evaluates the current and future states of the system, and helps to identify risks that may arise when the conceptual design is applied to the logical and physical design perspectives. Because the policies of a business can have an effect on the functions provided in a business solution, Development also has the responsibility of analyzing business policies. If the policies aren’t complete or consistent, Development needs to catch this, so the product isn't built on false assumptions of what the rules of business entail. This ensures that the features going into the application don’t conflict with the goals and interests of the business itself.
Since User Education has the responsibility for ensuring users have a good experience with the product, this role has the duty of finding usability issues that appear in the conceptual design. User Education looks at the usage scenarios generated during conceptual design, and determines training and other forms of user education that will be required from changes to the current system. This allows the product to have support generated as early as possible in the project.
Testing acts as an advocate of the end user, and determines what issues may be problematic. This role looks at the conceptual design, validates usage scenarios, and identifies testability issues and conflicts between user requirements and changes to the system. This allows Testing to determine what issues will require testing later. By validating usage scenarios, Testing is able to determine whether changes made to the system are compliant with the requirements of the user.
Logistics has the responsibility for a smooth rollout of the product. By analyzing the conceptual design, they can identify rollout and infrastructure issues resulting from changes. This allows Logistics to be prepared for problems that could be faced by support and operations groups later. By identifying these issues early, during the conceptual design, the risks of problems occurring later are minimized.
User requirements and perspectives are acquired from more than the end user who actually uses the product. As mentioned earlier in this chapter, a user in conceptual design can refer to a number of different people, including the following:
· An individual end user, who interacts with the system through a user interface
· A department or unit in the business, comprised of many end users and having policies that dictate specific needs and requirements
· The business itself, which has its own goals, interests, and needs
· The customer, who pays for the business solution
· Other interested parties, such as suppliers and support
It is important to gather the viewpoints and needs of each of these groups, to gain a full vision of what the application entails. If you have input only from the end user, you miss information from key people who could guide your design.
In gathering user requirements, the business and the units within it become independent entities. The organization, and its departments and units, are seen as individual users with their own needs and perspectives. This is because the business will have interests that may be different from the person who actually uses your application. For example, a clerk acting as an end user of the application will be concerned with the interface, and the features that deal directly with his or her work. An accounting department interested in saving money may have interests directly opposed to that of a purchasing department. The business as a whole will have goals and interests for which each of these departments may not be fully aware.
In looking at these different groups and individuals, there is one common factor: each has a stake in solving the business problem. However, because each may have different interests, it is possible for their ideas to conflict or differ. To get the widest degree of input, it is important to identify each type of user if you’re to properly gather requirements and gain user perspective.
When identifying users and their roles in the organization, it’s important to document who they are, in addition to the information they provide. It’s for this reason that user profiles are created in this first stage of conceptual design. User profiles are documents that describe the person or group with whom you’re dealing. This includes their opinions toward the current system, and what they’d like to see in the new application you’re developing.
Just as no two projects are exactly alike, you’ll find that the information you gather will differ between types of users and projects. There are, however, common forms of information you’ll gather no matter what type of projects or users you encounter:
· The requirements that are perceived to be important
· The perception of the current system
· Work responsibilities
· Experience
· Physical and psychological factors
This information is included in the user profiles you create. They address how the user views the current and future state of the system, their level of experience and responsibilities to the organization, and factors that will affect the development of the software product. By including this information, you get a complete picture of who you’re creating the product for, and what needs will drive application development.
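To make this concrete, here is a minimal sketch of how the fields of a user profile might be captured as a simple data structure. The class and field names are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """One interviewed user or group; all field names are illustrative."""
    name: str                       # person, department, or business unit
    role: str                       # responsibilities within the organization
    experience: str                 # relevant background and skill level
    current_system_view: str        # perception of the system as it exists today
    perceived_requirements: list[str] = field(default_factory=list)
    other_factors: str = ""         # physical/psychological considerations

# A hypothetical profile built from an interview:
clerk = UserProfile(
    name="Rental clerk",
    role="Processes rentals and returns at the counter",
    experience="Two years on the current point-of-sale system",
    current_system_view="Slow during evening rush; SKUs entered twice",
    perceived_requirements=["faster checkout", "single SKU entry"],
)
```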
Because organizations and the units that make up the business are viewed as individual entities, it is important to create user profiles and gather information from them as well. Each department and unit, in addition to the business as a whole, will have its own goals, needs, and interests. This is always the case, regardless of whether you’re dealing with a large corporation or a small video store.
While any user profile you create will include the information we discussed earlier, additional information needs to be added to the user profiles of a business, department, or unit. This is because you’re dealing with groups of people, rather than individuals. The additional information helps to identify who you’re acquiring the requirements and perspectives from, and includes such things as the following:
· The name of the business, department, or unit (such as accounting, purchasing, etc.)
· The number of employees
· Their responsibilities, activities, or mission
· The perception of the current state of the system
· The perception of what’s required in the business solution
The perceptions and needs provided through this information are given from an organizational standpoint. Any requirements or viewpoints from an individual person’s standpoint are included in a user profile of that particular person (e.g., the customer or end user’s profile).
In addition to the information listed above, you should also include a listing of terms, definitions, and jargon used in the organization. This not only allows you to understand what end users are talking about when you interview them individually, but gives you an understanding of language that may be included in the application itself. To give you an example of how important this is, let’s say the organization that’s contracted you to create an application is the military. The military is known for its use (or perhaps overuse) of abbreviations, specialized terms, and jargon. If you didn’t know that a unit you’d be tying into your enterprise application was referred to as “JAG,” and that term was an abbreviation for the Judge Advocate General’s office, your ignorance of the local jargon could lead to problems elsewhere. It’s always important to know what people are talking about when you’re trying to understand their needs and viewpoints.
Once you’ve identified who you’re going to acquire information from, you’re ready to gather that information. There are many ways of gathering the input you need from users. These techniques include the following:
· User interviews
· JAD (Joint Application Design) sessions
· User surveys
· Focus groups
· Shadowing
· Consulting with the organization’s help desk
While each technique has its own benefits and drawbacks, you can use one or several in conjunction to get the information you need on what’s expected from your business solution.
User interviews are a common method of gathering input from users. In using this technique, you select a group of users from the organization to meet with your team. Each user meets with a team member individually, to discuss their views on what is needed in the application you’re creating. In this informal meeting of two people, you’re able to get detailed information on what the user feels are relevant issues to your project’s design.
User interviews can be a time-consuming process, and you’ll find that the value of the information you acquire depends on two things: the users you selected and the skills of the interviewer. If you choose a poor selection of users, they may not be able to tell you much about problems they’ve encountered or what they’d like to see in the new system. It’s important to pick users who have been with the organization for a while, and have enough experience to know what they’re talking about. If you do have a good selection of users, you must have a team member with good interviewing skills. Often, it’s useful to conduct mock interviews between team members. This will weed out the bad interviewers and determine who has the skills to conduct real interviews with users.
On the Job: In conducting user interviews, it’s vital that you make the user feel comfortable enough to discuss their views, needs, and any problems they have with the current system. Pick a comfortable setting for the meeting. Start the interview with a few minutes of idle chat, joke with them, and let them know that anything they say is in confidence. You’ll be amazed how often people hold back information, fearing they’ll be seen as a troublemaker or that it will go against them in job evaluations, because they said bad things about the current system.
JAD (joint application design) sessions can also be used to acquire valuable information from users. JAD brings team members, end users, and other interested parties together to design the concept of your product. In the session, you don’t involve the user in the functional design of the application, but discuss the business concerns that will affect design. Remember, you’re dealing with end users and executives who may know how to run a business, but not a computer. The session consists of everyone exploring what tasks need to be performed and why.
When creating complex applications, JAD sessions will often deal with specific features or functionality to be included in the design of the application. The people invited to these sessions have direct experience or knowledge pertaining to those aspects of the project. For example, if you were creating software that verified credit card applications, you might have one session with the users from the credit card company, and another that focused on the credit bureau that verifies an applicant’s credit information. This allows you to get input on product components from people who will interact with them.
JAD sessions can last several days, and you will generally get huge amounts of information from the people attending them. Because of this, it is important to schedule time after these sessions, so your team can assimilate and organize the data. While it brings everyone together to discuss the issues, it’s also important to realize that not everyone feels comfortable in group settings. Sometimes people feel intimidated by JAD sessions. This is especially true when a person is sitting across from their boss, or other senior management in the company. When conducting these sessions, it’s important to keep them as informal as possible and make everyone feel they’re part of a team. You can also use other methods of gathering input with JAD sessions, to ensure that everyone’s point of view has been shared with the team.
Like JAD sessions, focus groups can be used to gather information from collections of people. Focus groups take a sampling of users, and bring them together to provide input on their needs and perceptions. Unlike other methods, focus groups use a trained interviewer to solicit information from users, which makes them more expensive to implement. This technique is commonly used when there are a considerable number of users involved. If you were creating an application that was to be used by hundreds or thousands of users, it would be impossible to conduct interviews with each user, or invite each of them to JAD sessions. Focus groups solve this problem by using a select group of individuals to represent the interests of people facing similar problems.
User surveys are a method that can gather input from any number of users. They are cheap and fast to implement. In creating a user survey, you generate a document that addresses what you perceive to be important issues, based on information obtained from the vision/scope document. Users then answer questions in the survey to provide you with the input you need to design your application. Because each user answers the same standard questions, you can quickly tabulate the answers, and determine the most common requirements and issues faced by users.
The problem with this approach is that there is no real interaction with users. Dropping a survey form on their desk, or asking the user a series of standard questions doesn’t provide the user with opportunities to address issues not covered in the survey. While user surveys are a valuable tool, you should consider using them with other, more interactive forms of gathering input.
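Because every respondent answers the same standard questions, tabulating the results can be automated. A minimal sketch, assuming the answers to one survey question have already been collected as plain strings:

```python
from collections import Counter

# Hypothetical responses to one survey question:
# "What is the biggest problem with the current system?"
responses = [
    "slow checkout", "double data entry", "slow checkout",
    "confusing screens", "slow checkout", "double data entry",
]

# Count how often each issue was reported, most common first.
for issue, count in Counter(responses).most_common():
    print(f"{issue}: {count}")
```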
Another technique, shadowing, involves following the user as he or she performs the daily tasks your application will address. The user explains the tasks while performing them, and is encouraged to provide as much detail as possible. In following the user, you can observe and listen, or ask questions about the tasks being performed. This allows you to get the information you need first-hand, while seeing how the tasks you’ll address in your application are currently performed.
For this technique to work, you need to have users available who will allow you to follow them around. Allowing someone to shadow your movements can be intimidating, and many people aren’t willing to have someone watching over their shoulder.

An often-overlooked resource for acquiring user input is the help desk of an organization. Help desks should keep records of the problems experienced by users, and the staff of a help desk can often provide you with information on common issues reported on an application. If you’re improving an earlier or similar version of an application, you can use the help desk to determine issues faced by users.
Once you’ve determined the needs of end users, the business entity, and its departments, you need to combine the different perspectives into a single concept. This entails sifting through the information you’ve gathered to determine what will be included in the design. By going through the user profiles and gathered information, you see what common issues appeared, what end-user requirements conflict with the interests of the business or its departments, and what great ideas appeared that need to be incorporated into your application’s design. In doing so, you acquire a common understanding of what’s really needed for your product to succeed.
When synthesizing business and other user perspectives, it’s important to ensure that no conflicts go into the final conceptual design. In weeding out these conflicts, make sure the requirements of an end user don’t go against the goals or interests of the department or the business as a whole. For example, let’s say you were creating a software product for a lab that analyzed blood samples. In gathering input, several end users state that entering an input number for samples twice is redundant, and lowers their productivity. While it may seem like a good idea to remove this procedure from your design, the unit may have this in place as a quality assurance measure. This reflects the goals of the entire business and should not be removed. While a particular procedure in the system may appear to lower productivity for the end user, there may be a good reason it is in place.
Generally, a good rule of thumb is the bigger the user, the more important the requirement. Because of this, you can usually follow this order of importance, from lowest to highest: end user, units, departments, and finally business. At the same time, it’s important to remember that changes in company procedures often occur from the input of people within the organization. If a suggestion from a user seems like a good idea, you should always check with senior management in the company to see how it will affect the goals and interests of the business and its units.
A scenario is an outline or plot of a sequence of events. As an outline, it provides a point-by-point, step-by-step analysis that depicts the procedure that occurs when something is done. As a plot, it provides a brief description or story in a paragraph or two. To illustrate this, think of how you’d explain making a cup of coffee. You could give a description, and say “When I make a cup of coffee, I first set out my kettle, coffee, milk, and sugar. I then boil the water, and put a spoonful of instant coffee in a cup. I add the hot water to the cup, and add milk, sugar, or nothing to it.” You could also write it down for someone like this:
PRECONDITIONS:
Kettle, coffee, milk, and sugar are set out.
Coffee drinker has access to these items.
STEPS:
1) Boil the water.
2) Add instant coffee to cup.
3) Add water to cup.
4) Add choice of milk and/or sugar or nothing.
POSTCONDITIONS:
Coffee drinker has a cup of coffee to drink or serve to others.
This would give you an outline of what occurs when a cup of coffee is made. In doing so, you’ve created a scenario of making a cup of coffee. Even without technical knowledge of the process behind making coffee, anyone who reads the scenario can understand and follow it.
This same type of documentation is used in constructing the conceptual design of an application. Scenarios are used to provide a clear description of the requirements for an application, by outlining and plotting the sequence of events in performing some action. Scenarios address a business problem by depicting a process, function, or feature in your application in the context of how it will be used. It’s for this reason that they’re called “usage scenarios.”
Because usage scenarios use normal, everyday language to describe how an activity is performed, everyone participating in the conceptual design of the application can understand them. Team members and users alike can review the usage scenario, and see the requirements of the application in the context of the business. This is of particular use later. When validating the design or making tradeoffs, users are able to view the requirements in a fashion that’s easy to understand. The usage scenario provides straightforward documentation that can be used later to show how logical and physical designs map to the requirements of the user. As we’ll see later, you can approach the usage scenario from different perspectives. These perspectives look at the usage of a feature, function, or process in different ways: the order in which work is processed, the environment it’s used in, the context in which it’s used, or the order in which tasks are performed. Regardless of how you approach the usage scenario, there are two basic ways of writing the documentation. As we saw with the coffee scenario at the beginning of this section, you can use narrative text or structured text.
Structured or numbered text provides a step-by-step procedure. In using structured text, you should start by mentioning what the usage scenario is for. Is it a business activity, a function? You would then identify what preconditions need to be in place before the first step can occur. If the preconditions aren’t met, then the user can’t proceed to the first step. Once these have been documented, you then write down the steps to take to achieve the desired outcome. If certain steps have steps of their own, you can document them, and provide details on that particular step. This creates a usage scenario for that step in the process. At the end of the usage scenario, you write down the post-conditions of what occurs once the steps have been completed successfully.
Narrative or unstructured text tells a story from the perspective of the person you’re interviewing, or the business itself. This gives it a bit of a personal feel, as it’s basically the testimony of how things get done, what’s required in performing an action, and individual observations. It begins by identifying what the usage scenario describes. Pre- and post-conditions are stated in the narrative text, as are the steps taken to get to the end result.
Regardless of whether you use a narrative or structured text for your usage scenario, you can augment your scenario by using charts, workflow diagrams, prototypes, and other graphic representations. It’s not uncommon to generate a table that outlines a usage scenario, by breaking a task into columns of who, what, when, where, and how. You can also mix the narrative and structured versions of usage scenarios, so that they are a combination of both.
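Since the structured form of a usage scenario always has the same parts, it lends itself to a simple, uniform representation. Here is a sketch that captures the coffee scenario from earlier in this section; the class and its fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class UsageScenario:
    activity: str
    preconditions: list[str]
    steps: list[str]
    postconditions: list[str]

    def render(self) -> str:
        """Print the scenario in the structured-text style shown above."""
        lines = [f"SCENARIO: {self.activity}", "PRECONDITIONS:"]
        lines += [f"  {p}" for p in self.preconditions]
        lines.append("STEPS:")
        lines += [f"  {i}) {s}" for i, s in enumerate(self.steps, 1)]
        lines.append("POSTCONDITIONS:")
        lines += [f"  {p}" for p in self.postconditions]
        return "\n".join(lines)

coffee = UsageScenario(
    activity="Make a cup of instant coffee",
    preconditions=["Kettle, coffee, milk, and sugar are set out",
                   "Coffee drinker has access to these items"],
    steps=["Boil the water", "Add instant coffee to cup",
           "Add water to cup", "Add choice of milk and/or sugar or nothing"],
    postconditions=["Coffee drinker has a cup of coffee to drink or serve"],
)
print(coffee.render())
```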
Exam Watch: Usage scenarios are used to construct a conceptual design for your application, which is one of the exam objectives. There are different methods of constructing such scenarios in the context of the business and users. While these are included below to help you construct conceptual designs, the task sequence model, workflow process model, and physical environment models aren’t directly addressed in the exam.
Many enterprises have a structured system governing how work is routed through the organization. Policies exist in these organizations that dictate which department does what, and in what order. This controls the workflow, defining how work is processed, and ensures that jobs are properly authorized and can be audited.
The workflow process model is used to create usage scenarios that show how specific jobs are routed through an organization. For example, consider how a schedule turns into a weekly paycheck in an organization. The manager creates a work schedule, and documents whether employees showed up for work. At the end of the pay period, this may be sent to a district supervisor, who authorizes that the work schedule is valid. This is then forwarded to the payroll department, who authorizes and prints out a check. The schedule is then archived, a listing of payments is sent to accounting, and the check is sent to the mailroom to be sent to the employee. In creating an application that automates this system, you would need to understand this workflow. If it didn’t adhere to the structure of these business activities, it could jeopardize the security of the process and render your application ineffective.
In using the workflow process model, you need to specify pre- and post-conditions. These are the conditions that need to be met for work to be routed from one area to the next, and what is achieved by a particular step being completed. For example, if an expense claim hadn’t been authorized, it wouldn’t proceed to the next step in the workflow process.
In addition, you should define what forms are used in each step. This will aid in the logical and physical design of your application. By understanding the necessary forms and documents used by the organization, you’ll be able to identify business objects for your design. It will also give you a better understanding of how your user interface should be designed, so that it meets the basic structure of forms currently in use.
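A workflow of this kind can be sketched as an ordered list of steps, each gated by a precondition. The step names below follow the paycheck example; the gating logic is an illustrative assumption:

```python
# Each entry: (step name, precondition that must hold before the step runs).
# Step names follow the paycheck example; the flags are illustrative.
paycheck_workflow = [
    ("Manager submits work schedule",        lambda doc: doc["hours_recorded"]),
    ("District supervisor validates it",     lambda doc: doc["submitted"]),
    ("Payroll authorizes and prints check",  lambda doc: doc["validated"]),
    ("Schedule archived, check mailed",      lambda doc: doc["authorized"]),
]

def route(doc: dict) -> None:
    """Route a document through the workflow, stopping if a gate fails."""
    for step, precondition in paycheck_workflow:
        if not precondition(doc):
            print(f"Stopped: precondition for '{step}' not met")
            return
        print(f"Completed: {step}")

route({"hours_recorded": True, "submitted": True,
       "validated": True, "authorized": True})
```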
In designing an application that’s geared to the needs of the user, you need to understand the tasks he or she does to complete an activity. It’s for this reason that the task sequence model is used to create usage scenarios. This model looks at the series of actions, or sequence of tasks, that a user performs to complete an activity.
Earlier, we saw an example of the task sequence model. This showed the sequence of events that take place in making a cup of coffee. For creating such a usage scenario for your design, you discuss with the person you’re interviewing (or through other techniques we discussed earlier) the steps they take to perform an activity. These are then written in the order each task is performed.
As with the other models, you can create a usage scenario with the task sequence model using either structured or unstructured text. Regardless of which you use, you need to identify the role of the user, and write the usage scenario from their perspective. The role of the user must be identified in the usage scenario so that anyone reading it can understand who is performing the activity.
Exercise 9-1 will demonstrate creating a usage scenario.
Exercise 9-1: Creating a Usage Scenario Using the Task Sequence Model
In this exercise we’ll create a usage scenario for a video store, using the task sequence model. It will be written in structured format. In doing this exercise, read the interview below, then follow the instructions. At the end of this chapter is an example of the completed usage scenario.
“When a customer brings a video up to the counter, I take the customer’s name so I can add the video rental to their membership account. I then take the SKU number of the video the customer is renting, and I add it to the customer’s account. This shows that the customer is renting a particular video. I ask them if they would like video insurance. This is a $0.25 fee, and protects them from having to pay for the video if their machine ruins the tape. If they don’t want the insurance, I have to record it, so the company knows that we asked and it was denied. I then see if they have any promotional coupons or discount cards, which give them free movies. After that, I calculate how much the customer owes, and print out a receipt for them to sign. I tell the customer how much they owe, and ask whether they want to pay by cash or credit card. The customer takes the video, and I go on to the next customer.”
Usage scenarios are also valuable for understanding the physical environment that your application will be used in. This is because your design can be just as affected by where the application will be used, as how and why it’s used. For example, a database application for a computer that isn’t connected to a network will have everything located on a single computer. If all of the company’s data is stored on a single server, then your application will need to access that data. If the company uses an intranet, it may access data in another building, city, or country. The differences in these environments will affect the design of your application, and the technologies that you incorporate into it.
The physical environment model looks at the environment in which an application will be used. In using this model, you document how an activity relates to the physical environment of the enterprise. This enables you to determine whether data moves to specific locations. With this model, you look at whether a process or business activity moves from one department to another, to other campuses, or across WAN or Internet links to other cities or countries. You also use this model to determine whether specific servers must be used. This allows you to see if your application needs to interact with a SQL server, Microsoft Transaction Server, an Internet server, or some other specialized or specific server in the organization. By looking at how information moves through an organization, you have a clearer understanding of how your application needs to be designed and built.
From the usage scenarios, you gather the information that allows you to identify business objects, as well as the appropriate solution type for your organization. Why is this important? Object-oriented programming uses code-based abstractions of real-world objects and relationships; that’s what an object is. Business objects are abstractions of the people, places, things, concepts, relationships, and other objects that are represented in our application design. In the previous exercise, we can see that the customer account, payment, and video rental are objects that relate to our design. The business objects would be translated into the tables and columns that make up our database. They are also used in determining what variables and objects we use in our code and interface, so we can access that data. As we’ll see, where this data resides and how it is accessed has a great effect on solution design.
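Drawing on the video store exercise, the business objects identified there might be sketched as follows. The class and field names are assumptions taken from the interview, not a prescribed design:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerAccount:
    member_name: str
    rentals: list["VideoRental"] = field(default_factory=list)

@dataclass
class VideoRental:
    sku: str                 # SKU number of the rented video
    insurance: bool          # whether the $0.25 insurance was accepted
    amount_owed: float = 0.0

@dataclass
class Payment:
    method: str              # "cash" or "credit card"
    amount: float

# Each class would later map to a table, and its fields to columns.
account = CustomerAccount(member_name="J. Smith")
account.rentals.append(VideoRental(sku="VID-1138", insurance=False))
```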
Single-tier solutions are common in desktop applications, where the user may not even have access to a network. With this type of solution, everything runs on the user’s machine. If the application accesses a database, that database either resides on the machine itself, or is accessed through a mapped network drive. The mapping is done through the operating system and, to the application, the drive appears as if it’s a local hard drive on that machine. Regardless of how the database is accessed, everything dealing with the application runs on the user’s computer.
As we can see in Figure 9-1, the User Interface, Business Objects, and Data Service Objects are all designed as part of the application. Data service objects are used to consolidate data access. An example of this would be a custom COM component using ADO to connect to an OLE DB data source. These objects are used to access and communicate with the data source, and are conceptually part of the middle tier. Business components encompass business rules and data components, and encapsulate access functions and data manipulation. The data components serve as a go-between or intermediary between the business components and the data tier of the model.
Because the user interface and these objects are built into the application, and the application and database runs on a single computer, there is no interaction or need to design anything for other computers. You have no need to design components or code that works with servers. All business and data processing takes place on the user’s computer.
Figure 9-1: Single-tier Solutions have the User Interface, Business and Data Service Objects built into the Application.
Exam Watch: Remember that with single-tier solutions everything is located on one machine. While a hard drive may be mapped to a user’s machine to access the database, it doesn’t mean that another type of solution is being used. The user interface, business, and data service objects, and processing reside on—and the database is accessed through—the user’s machine. No additional components, software, or hardware are necessarily required with this type of solution.
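As a rough illustration of the single-tier arrangement, the sketch below runs the interface, a business rule, and data access in a single process against a local file-based database. The table and the insurance rule are invented for the example:

```python
import sqlite3

# Data tier: a local database file on the user's own machine.
db = sqlite3.connect("rentals.db")
db.execute("CREATE TABLE IF NOT EXISTS rentals (sku TEXT, fee REAL)")

# Business object: a rule applied before data is stored (invented rule).
def rent_video(sku: str, fee: float, insured: bool) -> None:
    if insured:
        fee += 0.25          # insurance surcharge from the business rules
    db.execute("INSERT INTO rentals VALUES (?, ?)", (sku, fee))
    db.commit()

# User interface: in a real desktop app this would be a form, not input().
sku = input("SKU: ")
rent_video(sku, 3.00, insured=True)
print("Rental recorded locally; no server involved.")
```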
While single-tier solutions are designed for desktops, two-tier solutions are commonly designed for networks where users access a central data source. With two-tier solutions, the database doesn’t reside on the user’s computer. Instead, there is a database server that handles the management of data. A client application, which resides on the user’s machine, is used to interact with the server. This client-server relationship allows processing to be shared by both machines.
In two-tier solutions, less work needs to be done by the application, because the database server handles data management. A database server, such as SQL Server, takes care of storing and retrieving data for multiple users. This relieves the application of a significant amount of data processing. The client portion of the distributed application contains the user interface, business, and data service objects. As seen in Figure 9-2, this is unchanged from single-tier solutions. The only difference is that data resides on a different machine, which is accessed by the client through triggers, stored procedures, and/or SQL requests.
Figure 9-2: Two-Tier Solutions Distribute the Application and Data Source across a Network
While two-tier solutions are a simple method of creating distributed applications, they can cause a considerable amount of network traffic. The network can become congested as numerous users attempt to access data from the single database. At face value, it may seem that the answer would be to replicate the database to different servers, allowing traffic to be split between them. However, users would then be working with separate copies of the same data. When one user updates a record in one database, users of the other database won’t see the change. This means that, until the data is synchronized between the different servers, users will be working with different data. These are some of the considerations to keep in mind when designing this type of application.
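In code, the move to two tiers often amounts to little more than pointing data access at a remote database server instead of a local file. A sketch assuming a hypothetical PostgreSQL server and the third-party psycopg2 driver; the server name and credentials are invented:

```python
import psycopg2  # third-party driver; server details below are invented

# Client application: the user interface and business/data service objects
# live here. Only the data itself resides on the database server.
conn = psycopg2.connect(host="dbserver.example.com",
                        dbname="rentals", user="clerk", password="secret")

def rent_video(sku: str, fee: float) -> None:
    """Business logic runs on the client; SQL is shipped to the server."""
    with conn, conn.cursor() as cur:
        cur.execute("INSERT INTO rentals (sku, fee) VALUES (%s, %s)",
                    (sku, fee))

rent_video("VID-1138", 3.25)
```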
To create a distributed application that avoids the problem of network bottlenecks at the database server, you should consider using n-Tier solutions. n-Tier solutions are what developers commonly refer to when talking about Three-Tier solutions. As shown in Figure 9-3, this type of application splits the solution across several computers. The user interface still resides on the user’s computer, and the database remains on the data server. However, the business and/or data service objects are placed into a component or separate application that resides on a server, such as an application server, between the two computers. Rather than bogging down the database server with requests, the application on the user’s machine makes requests to the component on the server between the two computers. If the server with this component gets bogged down, more application servers can be added to the network. Not only does this keep the database intact on one server, it also takes much of the processing requirements off the user’s machine and onto a more powerful server. This makes for a much more efficient solution.
Figure 9-3: n-Tier Solutions Split a Solution across Multiple Computers
By splitting an application with components, you have a great deal of flexibility as to where the business objects and data service objects will be placed. While Figure 9-3 shows these as part of a component on an application server, you could also have either of them as part of the user’s application. It is advisable to make business objects, which include the business rules used by the organization, part of a component. Should changes be made to a business rule, or should business objects need to be modified, you would merely create an updated component for the application server. Depending on the changes made to the business objects, you wouldn’t need to update the application on the machines of users.
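One way to picture the middle tier is as a small service that owns the business rules, which the client calls instead of the database. A minimal sketch using only standard-library HTTP; the endpoint, port, and insurance rule are invented:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

INSURANCE_FEE = 0.25  # business rule lives only in this middle-tier component

class RentalHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        fee = 3.00 + (INSURANCE_FEE if body.get("insured") else 0.0)
        # A real component would now write to the database server here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"sku": body["sku"], "fee": fee}).encode())

# Changing INSURANCE_FEE means redeploying this component only;
# the client applications on users' machines are untouched.
HTTPServer(("", 8080), RentalHandler).serve_forever()
```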
While there is a close mapping between the physical placement of code and the number of tiers in your design, it should be stressed that your application’s tier design is a conceptual design issue, and not an issue of where on the system your code executes. The benefits of using tiers include faster development, the reuse of code, and easier debugging, which are not dependent on the physical location of your code. However, as we’ve seen, the physical or centralized location of code placement can be beneficial to your multi-tier application design.
If there were one type of computer system, one network, and one programming language that applied to everyone’s needs, there wouldn’t be a need for defining the technical architecture of your solution. Unfortunately, life is never that simple, and the technologies, standards, and platforms available are numerous, to say the least. Because of this, it is important to understand the current technology used by an organization, and what needs to be implemented in the future.
The relationship between technology and the design of your application is a double-edged sword. If your design requires certain technical resources to be used, then the organization will need to upgrade their computers, network, models, data architecture, and/or other systems. If the organization isn’t willing to upgrade or can’t because the older systems are still needed, then you need to adapt your design and use development platforms that adhere to these older technologies. Invariably, when either technology or applications require change, or can’t change, the other is affected.
In defining the technology architecture for your solution design, you need to identify the technical resources that must be implemented or created to support the requirements of your application. This means determining what technologies are appropriate for your business solution, defining the data architecture for your solution, determining the best tools for developing the application, and deciding what type of solution you’ll be creating. In doing so, you’ll need to look at the current state of technology used in an organization, and decide what the future state should be. In the sections that follow, we’ll cover each of these issues individually.
Exam Watch: While the technologies covered here are those directly covered on the exam, you should try to gain familiarity with as many technologies as possible. The exam expects that you know at least one programming language, and that you have some experience with developing solutions.
There are many technologies that you can implement in your business solution: some new and many that are older. By knowing what technologies are available, where they are used, and how they apply to your potential business solution, you can identify which will be the appropriate ones to use. Whether your application needs to access mainframes, the Internet, or utilize certain standards or technologies, it is important to know which ones will apply to your design.
EDI (Electronic Data Interchange) is a standard for transmitting business data from one computer to another. This business data can include such things as invoices and order forms. The data is sent in a format that both the sender and receiver (called trading partners) can understand. These trading partners make arrangements to exchange data through a point-to-point connection, which usually involves a dial-up connection. This allows the trading partner sending the data to transmit it to a BBS (Bulletin Board System) or directly to the receiving partner’s computer.
Though it predates the Internet, EDI is still used as a method of electronic commerce, the practice of making business transactions (buying and selling) over the Internet. You may have heard of electronic commerce by some of its other names: e-commerce, e-business, or e-tailing (for retail sales over the Internet). Because EDI was originally used to send documents through a point-to-point connection, with one computer dialing in to another, changes to the standard were necessary. The Internet removes the need for trading partners to connect directly with one another, so the standard was revised to allow EDI to be implemented in e-mail and fax messages.
EDI was developed by the Data Interchange Standards Association (DISA), and became an ANSI (American National Standards Institute) standard: ANSI X12. Due to its use by businesses, the International Telecommunication Union (ITU) incorporated EDI into the X.435 message handling standard. With this standard, data in various native formats can be added to messages, which allows transmission of EDI documents through e-mail and other messaging systems.
The way that EDI works is that business data is contained within a transaction set. A transaction set is a unit of transmission, made up of data that’s framed by a header and trailer, which makes up the message being sent. The data itself is a string of delimited elements. A delimiter is a comma, tab, or some other indicator that separates one piece of data from another. This allows an application reading a segment of data to know where one element ends and another begins. Each element represents a single piece of data. For example, a data element could be an invoice number, name, or price. When put together and separated by delimiters, these elements make a data segment that represents the parts of the document or form. Upon receiving a transaction set, an application that uses EDI reads the data segment between the header and trailer, and uses it accordingly. This allows the application to save the data to a database, display it properly in the application’s user interface, or perform whatever actions the application has been programmed to do.
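To illustrate the mechanics, here is a toy parser for a delimited transaction set. The header and trailer markers, the asterisk delimiter, and the field layout are all invented for the example; real ANSI X12 is considerably more involved:

```python
# A toy transaction set: header, delimited data segments, trailer.
# Markers, delimiter, and field layout are invented; real X12 differs.
transmission = """ST*INVOICE*0001
INV*12345*Widget Co*149.95
SE*INVOICE*0001"""

def parse_transaction_set(text: str) -> list[dict]:
    records = []
    for segment in text.splitlines():
        elements = segment.split("*")        # delimiter separates elements
        if elements[0] in ("ST", "SE"):      # skip header and trailer
            continue
        records.append({"invoice_no": elements[1],
                        "name": elements[2],
                        "price": float(elements[3])})
    return records

print(parse_transaction_set(transmission))
# [{'invoice_no': '12345', 'name': 'Widget Co', 'price': 149.95}]
```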
After a considerably long run, EDI has begun to be replaced in recent years by XML-based languages, such as OFX. XML (eXtensible Markup Language) provides developers with a way to create information formats, and share the format and data over the Internet or corporate intranets. Although XML has been taking the place of EDI, EDI is still in use in many companies.
The 1990s saw an incredible rise in the popularity of the Internet. It allows millions of people to access information on a business and its products. Using an Internet browser or other Internet applications, anyone with the proper permissions can transfer data to and from other computers connected to the Internet. It doesn’t matter if they’re using PCs, Macs, or UNIX.
The common denominator among the different kinds of computers connected to the Internet is the use of the TCP/IP (Transmission Control Protocol/Internet Protocol) suite of protocols. The TCP portion is responsible for reassembling the packets of data that are transmitted over the Net into their original format. The IP portion is responsible for sending each packet to the correct IP address, ensuring that it reaches its proper destination. In addition to these, the TCP/IP protocol suite includes the following:
· FTP (File Transfer Protocol), which allows for the efficient transmission of files on the Internet. Although other protocols have the ability for transmitting files, FTP is specifically designed for basic file transport.
· HTTP (Hypertext Transfer Protocol), which allows for the transmission of Web pages and other files.
· SMTP (Simple Mail Transfer Protocol), which enables electronic mail (e-mail) to be sent and received.
· UDP (User Datagram Protocol), which is used to send and receive packets, but unlike TCP doesn’t guarantee delivery of those packets.
· Telnet, which allows users with proper permissions to log onto remote computers over the Internet and execute commands as if logged on locally.
By adding controls and code to your application that utilize the TCP/IP protocol suite, you can create robust applications for the Internet.
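As a small demonstration of the reliable, connection-oriented behavior TCP provides, the sketch below fetches a page over HTTP with a raw TCP socket. The host name is a placeholder:

```python
import socket

# TCP guarantees ordered, reliable delivery; UDP (SOCK_DGRAM) would not.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                 b"Connection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):   # read until the server closes
        response += chunk

print(response.split(b"\r\n")[0])     # e.g. b'HTTP/1.1 200 OK'
```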
In designing applications that use the Internet, there are a significant number of choices to make. First is determining what type of Internet application you need to design. There are applications that run on the server, the client machine, and applications that are initially stored on the server but downloaded from the Internet to run on the client. Standalone applications can be designed to access the Internet from your application, or browsers can be used to access data and applications from Web servers. While we’ll go through each of the technologies involved in these applications, the point is that a significant number of options are available to you when designing Internet applications.
If you need to design an application that accesses the Internet without a browser application, it can be done with ActiveX controls. These are controls that you can create, by writing code as you would when creating other objects. Depending on what you’re using to develop the application, various ActiveX controls, which have the file extension .OCX, are included with Visual Studio, Visual C++, Visual Basic, and so on. These controls are added like other controls (such as TextBoxes and ListBoxes, for example) to a Form, which is a container object that appears as a window when your application runs. ActiveX controls that provide this functionality let you design standalone applications that run on a client machine, enabling users to exchange files and view documents on the Internet through your application.
In addition to standalone applications, you can write or add ActiveX controls to Web pages, which are documents written in HTML (HyperText Markup Language). By embedding such controls in your Web pages, any user with a compatible Web browser can utilize the control. Since ActiveX technologies are built on the Component Object Model (COM), the user must be using a COM-compliant browser. If they are not, they won’t be able to use the control. When a user’s COM-enabled browser accesses the Web page, the Web server transmits the HTML document and your ActiveX controls to the user’s machine. The user’s Web browser then displays the page and runs the control.
In addition to using ActiveX controls, there are Active documents. While Active documents are developed in Visual Basic, they cannot be used as standalone programs. Like HTML documents, Active documents must be viewed and used through a container application such as Internet Explorer. These documents are downloaded from a server to the browser, through which users are able to view data with a point-and-click interface. You can incorporate menus, toolbars, and other methods of navigation and functionality into the Active documents you create.
While the methods we’ve discussed have mainly dealt with the client end, ISAPI (Internet Server Application Program Interface) applications can be created for Web servers in your organization. ISAPI is a set of calls that can be used in developing applications that run on ISAPI-compliant Internet servers. ISAPI applications are dynamic link library programs, with the file extension of .DLL, that load when your Web server starts. Users can submit requests to these applications through their Web browser, and view data that the application returns.
ISAPI applications are considerably faster than CGI (Common Gateway Interface) applications. While ISAPI applications are similar to those created with CGI, ISAPI applications load when the Web server starts, and remain in memory as long as needed. When a CGI application is run, each instance of the application loads as a separate process in its own separate address space. This means that if ten people use the CGI application, ten instances of it will be loaded. Not only does this take up more memory than ISAPI, it's also slower, because of the repeated loading and unloading of processes and the reading of extra instructions stored in separate address spaces.
Each of the technologies we've discussed here can also be used on a corporate intranet. An intranet is a network that uses the technologies of the Internet on a local area network (LAN), metropolitan area network (MAN), or wide area network (WAN). The difference between these types of networks is the area they cover: a LAN may service a building or campus, a MAN networks buildings throughout a city, while a WAN has computers networked across a state, province, country, or internationally. Utilizing TCP/IP and the technologies we've discussed, an organization can have its users accessing data as if they were on the Internet, using Web browsers to receive HTML documents and Active documents, and to use ActiveX controls.
Should a corporation wish to allow people outside the company to access its intranet remotely, the accessible portion of the intranet is called an extranet. Using TCP/IP and a Web browser, clients, partners, and other users with proper permissions can access data. This broadens the benefits of the intranet, and makes necessary data available to those outside the organization.
OSI (Open Systems Interconnection) is a reference model used to describe how computers communicate on a network. The OSI model was developed by the International Organization for Standardization (ISO) as a conceptual framework that allows developers, network personnel, and others to understand network communication. By using OSI, you can develop products that can properly transfer data to and from products created by other developers.
OSI breaks up the tasks involved in network communication into different layers. Rather than having to look at how an application transmits data as one large task, it breaks it up into sub-tasks. You can then use or design protocols that fulfill each of these subtasks by mapping them to the different layers of the OSI model. These layers consist of the following:
· Application, which is the layer that will be accessed by the network application you write. It isn’t the application itself, but provides services that support e-mail, database access, file transfers, and so forth. At this layer, you identify whom you’ll communicate with, define constraints on data syntax, and deal with issues of privacy and user authentication.
· Presentation, which is the translator of the OSI model, responsible for converting data from one presentation format to another. It translates data from a format that a network requires to one that your computer expects. The Presentation layer is responsible for converting character sets and protocols, and interpreting graphic commands. It is also responsible for the compression and encryption of data.
· Session, which is responsible for setting up and tearing down a connection between two computers; in other words, a session. This layer is responsible for name lookups and security issues. These are used to find another computer on the network, and determine how the two computers can connect to one another. The Session layer also synchronizes the data that’s sent, and provides checkpoints that are used to determine how much information has actually been sent. If the network fails, or the session is dropped for some reason, only data after the last checkpoint will be transmitted. This keeps an entire message from needing to be retransmitted.
· Transport, which is responsible for ensuring complete data transfers from one computer to another. The Transport layer ensures that packets of data are delivered without errors, with no losses or duplication, and in the proper sequence. When messages are passed down to this layer from the Session layer, it breaks large messages into smaller packets. When the receiving computer receives these packets, it reconstructs them in the proper sequence, then passes the message up to its Session layer.
· Network, which is responsible for routing and forwarding data to the correct destination. It ensures that when a packet is sent out on the network, it is sent in the correct direction to the correct machine by determining the best route to take. It also breaks large packets into frames small enough for the Data Link layer to accept. Frames are units of data that the Data Link layer passes on to the Physical layer for transmission.
· Data Link, which is responsible for error control and synchronization. This layer breaks packets up into frames that are passed to the Physical layer for transmission. In doing so, it adds a Cyclic Redundancy Check (CRC) to the data frame, which is used to determine whether a frame has been damaged in transmission. This layer also adds information to the frame that identifies segmentation, and what type of frame it is.
· Physical, which is responsible for transmitting and receiving data in a bit format (a series of binary 1s and 0s). It is the physical connection to the network that deals with how data is sent and received along the carrier.
Exam Watch: Many people find it difficult trying to remember the different layers of the OSI model. A simple way to remember the layers is remembering the sentence “People Don’t Need To See Paula Abdul.” The first letter of each word is the first letter of each layer in the model (from bottom to top): Physical, Data Link, Network, Transport, Session, Presentation, and Application.
In looking at the different layers of the OSI model, you can see that the layers can be broken into two distinct groups. The top four layers of OSI are used for passing messages from and to the user. The remaining layers, which are the bottom three layers of the model, are used for passing messages from the host computer to other computers on the network. If an application needs to pass a message, such as an error message, to the user, only the top layers are used. If a message is intended for any computer other than the host computer, the bottom three layers are used to pass the message onto the network.
The way the OSI layers work can be seen in the example of a message being sent from a user on one computer to a user on another computer on the network. The transmitting user's message starts at the Application layer, and passes down the different layers of the OSI model. Each layer performs its own related functions, and the message is broken up into packets that can be transmitted across the network. Except for the Physical layer, each of the layers adds a header, which the receiving end uses to process the message. Upon reaching the computer that belongs to the receiver of the message, the packets are passed back up the layers of the OSI model. The headers that were added by the sending computer are stripped off, and the message is reconstructed into its original format.
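The following toy Visual Basic sketch illustrates this idea of headers being added on the way down and stripped on the way up. It is purely conceptual (a real stack works with binary headers and frames, not strings), and assumes the message itself contains no "|" characters.

' Each layer from Application down to Data Link prepends its own header.
Private Function SendDown(ByVal msg As String) As String
    Dim layers As Variant, i As Integer
    layers = Array("APP", "PRES", "SESS", "TRAN", "NET", "DLL")
    For i = 0 To UBound(layers)
        msg = layers(i) & "|" & msg      ' wrap the message in another header
    Next i
    SendDown = msg                       ' the Physical layer transmits the bits
End Function

' The receiving computer strips each header off as the message passes back up.
Private Function PassUp(ByVal frame As String) As String
    Do While InStr(frame, "|") > 0
        frame = Mid$(frame, InStr(frame, "|") + 1)
    Loop
    PassUp = frame                       ' the original message is restored
End Function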
Exam Watch: OSI is a model, but not an absolute and explicit standard. Although many models, protocols, and so forth are based on the OSI model, a number don't map to every layer. As a model, it provides a conceptual framework that can be used to understand network communication, but isn't a standard that must be adhered to.
Many organizations that have been around for a while still use their old mainframe computers. These enormous, archaic machines are used because they contain data that the company still needs for its business. The problem is that users working on NT Workstation, Windows 9x, and other platforms that use NT servers on the network need a method of accessing this data. While you could program special interfaces to access this data, an easier solution is to use COMTI (COM Transaction Integrator for CICS and IMS).
COMTI is a component of Microsoft Transaction Server, and runs on Windows NT Server. When an object method call is made for a program on a mainframe, COMTI works as a proxy for the mainframe. It intercepts the method call, converts and formats the parameters in that method, and redirects it to the mainframe program that needs to be used. The parameters need to be converted and formatted because mainframe programs are older and wouldn’t understand the formats used by newer operating systems like NT. When the mainframe program returns values and parameters to the NT Server, COMTI converts them to a format that NT can understand. These are then passed from the NT Server to whatever client machine made the initial request.
Because COMTI is a component of Microsoft Transaction Server, it follows that it would be used for transaction programs on mainframe computers. COMTI supports IBM’s Information Management System (IMS) and Customer Information Control System (CICS)—mainframe transaction programs. When using COMTI with MTS, you can coordinate transactions with CICS and IMS systems through MTS. Since all processing is done on the NT Server, and standard communication protocols are supported by COMTI, no additional protocols are required, and no code needs to be written on the mainframe.
Any client application you design that implements COMTI needs to run on a platform that supports DCOM (Distributed Component Object Model). It doesn't matter whether the application is written in Visual Basic, Visual C++, Visual J++, or any number of other languages. DCOM is language-independent, but the operating system needs to be new enough to support DCOM. This means you can't use COMTI in applications that run on older systems, such as Windows 3.1 or older DOS versions. The client application needs to be running on NT Server, NT Workstation, Windows 9x, or another operating system that supports DCOM.
POSIX (Portable Operating System Interface) is a set of open system environment standards based on the system services of UNIX. UNIX is an operating system that was developed by Bell Labs in 1969. It was originally written in assembly language, but when the language "B" evolved into its successor "C," UNIX was rewritten in C, becoming one of the first operating systems written in a high-level language. Since that time, it has evolved into the first open or standard operating system, and is used on many university, college, Internet, and corporate servers. Because computer users wanted applications that were portable to other systems without the code having to be completely rewritten, POSIX was developed. Since it needed to be based on an open system that was manufacturer neutral (meaning royalties didn't need to be paid), and one that was already popular (as an obscure system would cause problems), UNIX was chosen as the basis for POSIX. It gave developers a standardized set of system interfaces, testing methods, and more (as we'll see next), which allowed applications to be used on multiple systems without being recoded.
While developed by the IEEE (Institute of Electrical and Electronics Engineers) as a way of making applications portable across UNIX environments, POSIX isn't limited to UNIX computers. It can be used on computers that don't run the UNIX operating system, and has evolved into a set of 12 different POSIX standards. Each of these is denoted by the word POSIX followed by a number:
· POSIX.0, which isn’t a standard, but is a guide or overview of the other standards.
· POSIX.1, the systems API (Application Program Interface): the basic operating system interfaces used in programming POSIX applications.
· POSIX.2, the IEEE approved standard for shells and tools.
· POSIX.3, defines the testing and verification standards.
· POSIX.4, the standard for real-time extensions and threads.
· POSIX.5, the ADA language bindings to POSIX.1.
· POSIX.6, for security extensions.
· POSIX.7, system administration standards.
· POSIX.8, the standards for application interfaces to networking, including remote procedure calls, transparent file access, protocol-independent network interfaces, and OSI protocol-dependent interfaces.
· POSIX.9, the FORTRAN language bindings to POSIX.1.
· POSIX.10, the application environment profile for super-computing.
· POSIX.11, the application environment profile for transaction processing.
· POSIX.12, standards for graphical user interfaces.
Exam Watch: The two main interfaces for POSIX are POSIX.1, which defines the API for operating systems, and POSIX.2, which sets the standards for shells, tools, and utilities. These two standards were incorporated into the X/Open programming guide, and are primary standards of POSIX. Of all the standards comprising POSIX, these are the two you should remember.
A proprietary technology is one that’s privately owned or controlled. When a company develops and/or owns a specific technology, it has control over how it is used. It can decide whether others can freely use the technology, which would make it an open technology. If it decides to make it proprietary, it can hold back specifications for that technology or charge a fee for its use. This keeps others from duplicating the product.
A major problem with proprietary technologies is that they prevent users from being able to mix and match that technology with other products on the market. If a customer purchases such a product from a developer or manufacturer, they are often stuck with having to work with or upgrade to other technologies from that company. While such products may solve a short-term problem, in the long term they are often a source of major difficulties.
It's impossible to escape the fact that the technology environment of an organization is what will support your application's requirements. If the current technology environment doesn't support the requirements of the application, you have two options: upgrade the technical resources so they support the application, or change your design so that it works with what's already in place. To make this decision, you need to gather information on the current and future state of the environment.
Earlier in this chapter, we discussed a number of information-gathering techniques. These included interviews, JAD sessions, and so forth. By actively discussing the technology environment with people who have knowledge of hardware and software used in the company, you can quickly establish how your design will be affected. The people you would acquire this knowledge from include network administrators, management, team leaders from other projects, and so forth.
On the Job: In discussing the technology environment with network administrators and other individuals who know about the systems in use, you may find that the information you need has already been documented and kept up to date. This provides you with the facts you need, and precludes unnecessary work, because all of the information is already available.
Identifying the current state and deciding on the planned future state of technology in an organization should be done as early as possible. It’s important to determine the technical elements that will affect your design. This includes such things as the hardware platform to be used for the application, what database management system will be used, security technologies, and how data will be replicated. What operating system will be used, and can the features of that platform be used by your application? You’ll need to identify the technologies used to access remote databases, mainframes, and how requests will be handled. If such issues aren’t addressed early in the design, it can result in significant losses. Time, money, and effort will be wasted on developing an application for a technical architecture that isn’t supported.
Another benefit of determining the current and future state of the organization’s technology environment is that it helps to identify the skills that will be required to complete the project. If your application needs to implement EDI, Internet technologies, or access a mainframe, you can determine the skills required for your project, and decide on which people have those skills. This enables you to form a team with the necessary skills to make your project successful.
The foundation of where you're going is where you are right now. You may decide that the application should obtain data from a database residing exclusively on an NT server, without realizing that all of the corporation's data is on a mainframe computer. Even worse, you could spend time, money, and effort planning to implement a technology in the future, only to find that the technology is already in place in the organization. It's important to understand the current state of the technical resources before planning changes; doing so can make or break the success of your project.
For the design of your application, you should document the current technology environment of the organization. This documentation should include as much detail, and show as much knowledge about the technology, as possible. Though most of the information will be written, you can include diagrams, charts, and physical environment usage scenarios to aid in understanding how the current technology environment is laid out.
In your documentation you should outline the development tools currently in use, the technologies used in current network applications, and how the network itself is designed. Protocols should also be documented. You don’t want to design an application that uses TCP/IP if the network uses IPX/SPX, and the network administrator refuses to implement that protocol. Much, if not all, of this information can be gathered through the information-gathering techniques we discussed earlier in this chapter.
From the information gathered on the current state of the technology environment, you’re able to see changes that need to take place to support the application you’re designing, as well as applications you plan to design later. It is through such planning that you take a hard look at where you are going with the technology environment, based on where you currently are. In doing so, you take the first steps toward developing a technical system architecture that can support the technical requirements of such business solutions.
It is important to document the future state of your technology environment. Such changes can be documented in a textual format, or listed in a chart form. In a chart, you could list the current state of the environment in one column, the change that needs to occur in the next, and note what those changes entail in the third column. For example, let’s say you wanted your current application to use 32-bit programming. If the current state of the environment is that workstations use Windows 3.11 (which uses 16-bit programs), then the planned state would be workstations using Windows 9x or NT Workstations. This would enable your team to use the latest development tools, create 32-bit programs, and use the latest APIs and technologies for that platform. This allows you to note where areas of the environment will remain the same, where changes need to be made, and what those changes to the environment will be.
In making such plans, you need to demonstrate that the future architecture will deliver at least one major business requirement to the environment of the organization. This may be meeting a business rule that determines how data is accessed, or addressing such issues as maintainability, performance, or other important productivity concerns. No matter how nicely you ask for the latest advances in technology, a company will only see the bottom line; that is, will the benefits of these changes outweigh the costs involved? Your plan should reflect the merits of such changes, and how they will impact your application, the business requirements, and the productivity of the business as a whole.
In planning the future state of your technology environment, it’s important to not only look at how you want things to change, but also how the changes will affect technologies that will continue to exist in the future state. Unless you’re implementing a brand new system in your organization, a significant portion of the previous state will carry on. This could include such areas as security technologies, mainframes, and protocols. Ensure that you understand how your changes will affect these technologies, before they’re put into effect.
Suppose you have to design a 32-bit program. After designing the business solution, you instruct your development team to use Visual Basic 3 as the development tool. You know that all of the developers know that language and don't see a problem until the application is completed. Unfortunately, the solution is a 16-bit application. Why? Because Visual Basic 4 was the first version of Visual Basic to offer 32-bit programming, and the last to offer 16-bit development. Oops. Not only would such a bad decision cause the solution to go back through development, it would also probably cost you your job.
Such a situation illustrates how important it is to select the right development tools to build your application. Your design may revolve around using Visual J++ when Visual Basic or Visual C++ is the better choice. Not only must you select the development tool that best suits your needs, but you also need to be aware of what that particular version of the tool offers. This means that upon choosing a tool, you must do some research on the different releases available to see if they’re compatible with your needs, and the environment those tools, and the programs developed with them, will be used in.
Unfortunately, in researching such development tools, you're bound to face biases. Visual Basic programmers will prefer that language and tool over Visual C++; C++ programmers may say the reverse; while Java programmers may say Visual J++ is better than either of these. We will make no such recommendations. Each of the tools in Visual Studio 6 supports COM and enables you to create reusable components that can be used with each of the other tools in this suite. Generally, the best choice of a development platform revolves around the following:
· The project. Some tools are better suited to certain projects than others, and the choice should be made on a project-to-project basis. This is due to the inherent strengths and weaknesses of the languages themselves, which, combined with the type or nature of the project, make some tools more suitable for certain projects.
· The skills of the development team. Developers work faster when they use a language and tools they're familiar with, which means the application can be built sooner.
· Cost and schedule. If there isn't time in the schedule to retrain developers on new tools, or the budget doesn't allow it, the team will have to work with the tools it already knows.
In short, you need to select development tools based on the project, the skills of the people involved in the project, and other resources available. By knowing the features and functionality of the various tools available, you can make such a determination.
On the Job: Selecting the proper tools must be done early in the design to allow time for retraining and/or familiarization with those tools. It's not uncommon for developers to be sent out on a weekend or week-long class, and then be expected to master developing applications with their new knowledge, in a new language, with new tools. No matter how good the training session, it takes time for developers to hone their new skills. By selecting the development tools early in the design phase, you provide your team with the time it takes to do this. By the time your solution is designed and ready for development, your development team will be ready to do what you need.
Microsoft Visual Basic 6 is based on the most commonly used programming language in computing: BASIC (Beginner's All-Purpose Symbolic Instruction Code). Though based on the original language, it isn't the BASIC you may have learned in high school programming. There are hundreds of keywords, functions, and statements for the Windows graphical user interface (GUI). In addition, you don't write code through a console or text-based interface. Visual Basic allows you to add pre-built and custom controls to Forms, which become windows when the application is run. This WYSIWYG (What You See Is What You Get) method of designing an application's interface makes programming significantly easier.
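For example, after drawing a text box and a command button on a Form, attaching behavior takes only a few lines in the button's Click event (the control names here are illustrative):

Private Sub cmdGreet_Click()
    ' Runs when the user clicks the button at run time
    MsgBox "Hello, " & txtName.Text
End Sub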
In addition to the comparative ease of learning this programming language, Visual Basic 6 comes with a number of features that allow applications to be developed rapidly. One such feature is the ability to create databases, front-end applications, and server-side components in various database formats. Such databases include Microsoft Access and SQL Server. Another feature of Visual Basic 6 (VB6) is the ability to create applications that access documents and applications across intranets or the Internet, and the ability to create Internet server applications, such as those used by Internet Information Server. Visual Basic 6 also includes ActiveX controls that you can add to forms to create applications quickly.
Perhaps the greatest strength of using Visual Basic is its simple syntax, and the fact that people who know VBA (Visual Basic for Applications) can quickly be moved into the role of Visual Basic developers. VBA is used in all of the applications included in the latest versions of Microsoft Office, as well as other solutions put out by Microsoft. Because of this, you may already have an installed base of developers who can migrate from VBA to VB. Due to its simple syntax, Visual Basic also serves as an excellent language for new developers. It is easier to learn than many other languages out there, such as Java or C++, and can be used for the rapid development of solutions.
In addition, VB6 provides a number of Wizards, which can be used to create or migrate elements for use in the applications you build. Wizard programs take you step-by-step through the process of a particular task, such as creating toolbars, property pages, or even entire applications, and result in a completed object or product when finished. These can then be built upon by adding additional code. The Wizards provided in Visual Basic 6 include those shown in Table 9-1.
Wizard | Description
Application Wizard | Used to create functional applications that you can build upon later.
Data Form Wizard | Used for generating VB forms from information obtained from the tables and queries of your database. The generated forms include controls and procedures, which you can build upon by adding additional code.
Data Object Wizard | Used for creating the code used by COM data objects and controls that display and manipulate data.
ActiveX Control Interface Wizard | Used to create public interfaces for the user interfaces you've created for ActiveX controls.
ActiveX Document Migration Wizard | Used for converting existing forms into ActiveX documents.
ToolBar Wizard | Used for creating custom toolbars. This is new to VB6.
Property Page Wizard | Used for creating property pages for user controls.
Add-in Designer | Used for specifying properties of add-ins. This is a new addition to VB6.
Class Builder utility | Used for building the hierarchy of classes and collections in your project.
Package and Deployment Wizard | Used for creating setup programs for your applications, and distributing them. This is new to VB6, though it is based on the Setup Wizard included with previous versions.

Table 9-1: Wizards Included in Visual Basic 6
Though a more difficult language to learn for beginners, Visual C++ is incredibly powerful for creating all sorts of applications. Visual C++ is based on the C++ language (which has its origins in the languages of “B” and “C”), and is used to create many of the applications you’ve used in Windows. Like Visual Basic, Visual C++ provides a visual GUI that allows you to add pre-built and custom ActiveX controls in a WYSIWYG manner. Code is then added to these controls, enabling users to view and manipulate data, or access whatever functionality you decide to include in your programs.
The difficulty involved in learning this language and the power of developing with Visual C++ can serve as serious drawbacks to developing with C++. It uses complex syntax, which can make Visual C++ difficult to learn and use. Even when developers have backgrounds in other languages like Visual Basic, learning this language can be problematic. This, and the very power it provides, generally leads to slower development times. You should attempt to determine whether your solution will actually need the power that C++ provides, as it may be overkill for some projects.
Also like Visual Basic, Visual C++ provides a number of Wizards that can be used to accelerate development. These Wizards are straightforward to use, and take you through each step in the process of a particular task. The Wizards provided in Visual C++ include those shown in Table 9-2.
Wizard | Description
ATL COM AppWizard | Used to create Active Template Library (ATL) applications.
Custom AppWizard | Used for creating custom project types, which can be added to the list of available types.
MFC AppWizard | Used to create a suite of source files and resource files. These files are based on classes in the Microsoft Foundation Class (MFC) library. Visual C++ includes two versions of this Wizard: one creates MFC executable programs, while the other creates MFC dynamic link libraries (DLLs).
Win32 Application | Used to create Win32 applications, which use calls to the Win32 API instead of MFC classes.
Win32 Dynamic Link Library | Used to create Win32 DLLs, which use calls to the Win32 API instead of MFC classes.
Win32 Console Application | Used to create console applications. These programs use the Console API so that character-mode support is provided in console windows.
Win32 Static Library | Used to create static libraries for your application.
MFC ActiveX ControlWizard | Used to create ActiveX controls.
DevStudio Add-in Wizard | Used for creating add-ins (in-process COM components) to automate development tasks. The add-ins are dynamic link library (.DLL) files, written in Visual C++ or Visual Basic.
ISAPI Extension Wizard | Used to create ISAPI (Internet Server Application Programming Interface) extensions or filters.
Makefile | Used to create MAKEFILE projects.
Utility Project | Used to create utility projects.

Table 9-2: Wizards Included in Visual C++ 6
If your development team consists of a group of Java programmers, you’ll probably want to go with Visual J++. In terms of complexity and power, J++ generally falls between Visual Basic and C++. Using this development environment, you can create, modify, build, run, debug, and package Java applications for use on the Internet, your corporate intranet, or as Windows applications. While the Java language is most closely associated with the Internet, this doesn’t mean you can’t create applications used on a Windows platform that doesn’t have access to the Internet. Visual J++ 6 uses the Windows Foundation Classes for Java (WFC), which enables programmers to access the Microsoft Windows API. Through WFC you can create full Windows applications in the Java language.
Like the other development tools we’ve discussed, you can create applications with Visual J++ through a GUI interface. By adding various controls to a form and then assigning code to those controls, you can rapidly develop applications. In addition, you can use the various Wizards included with Visual J++ (like those shown in Table 9-3) to quickly develop the applications you design.
Wizard | Description
Application Wizard | Used to create functional applications that you can build upon later.
Data Form Wizard | Used for generating forms from information obtained from a specified database. The controls on the form are automatically bound to the fields of that database. This includes Microsoft Access databases, and those accessible through Open Database Connectivity (ODBC).
WFC Component Builder | Used to add properties and events to WFC components.
J/Direct Call Builder | Used to insert Java definitions for Win32 API functions into your code. In doing so, the appropriate @dll.import tags are also added to your code.

Table 9-3: Wizards Included in Visual J++ 6
Web designers and Web application developers will get the most benefit from Visual InterDev. This development tool provides a number of features, including design-time controls and (as seen in Table 9-4) wizards to aid in creating such applications. Version 6, the second version of Visual InterDev produced, is the first to have a WYSIWYG page editor, which allows you to add controls to your applications just like the other development tools we've discussed. Rather than requiring you to hand-code HTML or cascading style sheets (CSS), Visual InterDev includes an editor that allows you to create and edit style sheets. It also offers tools that allow you to integrate databases with the Web applications you create.
Wizard | Description
Sample Application Wizard | Used to install sample applications from the Visual InterDev Gallery and third-party Web applications.
Web Application Wizard | Used to create new Web projects and new Web applications, and to connect to existing Web applications.

Table 9-4: Wizards Included in Visual InterDev 6
Visual InterDev 6 provides a number of powerful features that can be used when creating Web applications and other distributed applications. First and foremost, the Data View window in Visual InterDev 6 enables you to launch tools to manage your database, and gives you a live view of data. In addition, the Quick View tab in Visual InterDev provides instant page rendering without the need to save, while color coding of script allows a clearer way of viewing the script developers are writing or have previously written. Visual InterDev 6 also has debugging support, enabling developers to find problems in their code before it's passed forward into testing or use.
Visual FoxPro is a development tool that enables you to create robust database applications. Using Visual FoxPro, you can create databases, tables, queries, and interfaces that your users can employ to view and modify data, and set relationships between tables. It includes a Component Gallery, which is used as a central location for grouping and organizing objects like forms, class libraries, and so forth. Included in the Component Gallery are the Visual FoxPro Foundation Classes: database development tools and structures, components, wrappers, and so forth, that allow you to develop applications quickly without having to rewrite incredible amounts of code. Visual FoxPro also includes the Application Builder, which enables developers to add, modify, and remove tables, forms, and reports quickly. Finally, the Application Framework feature of Visual FoxPro provides common objects used in applications. Together, these are the means to creating database applications rapidly. Visual FoxPro wizards are described in Table 9-5.
Wizard | Description
Application Wizard | Used to create projects and a Visual FoxPro Application Framework. When this is used, it will automatically open the Application Builder, which allows you to add a database, tables, reports, and forms. This Wizard is new to Visual FoxPro.
Connection Wizard | Used for managing transfers between the Visual FoxPro class libraries and models created in Microsoft Visual Modeler. It includes a Code Generation Wizard and a Reverse Engineering Wizard. This Wizard is new to Visual FoxPro.
Database Wizard | Used to create databases. This Wizard is new to Visual FoxPro.
Table Wizard | Used to create tables.
Pivot Table Wizard | Used to create pivot tables.
Form Wizard | Used to create data entry forms from a specified table.
One-to-Many Form Wizard | Used to create data entry forms from two related tables.
Report Wizard | Used to create reports.
One-to-Many Report Wizard | Used to create reports in which records from a parent table are grouped with records from a child table.
Query Wizard | Used to create queries.
Cross-tab Wizard | Used to create cross-tab queries, and display the results of such queries in a spreadsheet format.
Import Wizard | Used to import data from other files into Visual FoxPro.
Setup Wizard | Used to create a setup program for your application.
Web Publishing Wizard | Used to display data in an HTML document. This Wizard is new to Visual FoxPro.
Graph Wizard | Used to create graphs from tables in Visual FoxPro, using Microsoft Graph.
Label Wizard | Used to create labels from tables.
Mail Merge Wizard | Used to merge data into a Microsoft Word document or a text file.
Views Wizard | Used to create views.
Remote View Wizard | Used to create views using ODBC remote data.
SQL Server Upsizing Wizard | Used to create SQL Server databases that have similar functionality to Visual FoxPro databases.
Oracle Upsizing Wizard | Used to create Oracle databases that have similar functionality to Visual FoxPro databases.
Documenting Wizard | Used to create formatted text files from your project's code.
Sample Wizard | A demonstration of a Wizard. This is new to Visual FoxPro.

Table 9-5: Wizards Included in Visual FoxPro 6
There are many types of solutions you can create for a network that go beyond the capabilities and limitations of standalone applications. Standalone applications run completely on a single computer, and don't require a connection to a network. When your application does need to access objects, components, and/or data on other computers, the design of your solution must be expanded to become one of the following:
· Enterprise solution
· Distributed solution
· Centralized solution
· Collaborative solution
In the sections that follow, we’ll discuss each of these solution types. The type of solution you choose for your design will be determined by the number of users accessing data and using your application, where the data source and application will be located on your network, and how it is accessed.
Once organizations began to interconnect their LANs, solutions that could effectively run on these larger systems were needed. That's where enterprise solutions came into play. They're called enterprise solutions to cover the numerous types of organizations (large and small businesses, charities, and so forth) that use computers and need solutions capable of handling the hundreds or thousands of users who may work with these applications. In addition, hundreds or thousands of requirements may need to be addressed by the application as users work with it and access data on the server. Because of these and other factors, designing and developing enterprise solutions can be incredibly complex.
While there are many enterprise solutions in the real world, each of them usually has the following in common:
· Size
· They’re business oriented
· They’re mission critical
To understand each of these attributes, and how it affects an enterprise application, we’ll discuss each of them individually.
Enterprise solutions are generally large in size. They're spread across multiple computers, and can be used by numerous users simultaneously. Designing solutions for the enterprise requires an expanded scope of knowledge to deal with a larger environment. While the different models of MSF are used in a similar manner, enterprise solutions mean that these models must be applied on a grander scale. This means there is an overwhelming need to use a good framework, such as MSF. It would be impossible to tackle such a project on your own, without recruiting people into the various roles of the Team model. Teams are organized to take on the various tasks involved in development, testing, and so forth using the Team model.
Teams of developers will generally create such an application, keeping in mind that each part of the application they code will be used by multiple users on multiple machines. Parts of the enterprise application will reside on different computers, distributed across the network. ActiveX components can be used with this, allowing the software to communicate and interact across the network.
To say that an enterprise solution is business oriented means that it’s required to meet the business requirements of the organization for which it’s created. Enterprise solutions are encoded with the policies, rules, entities, and processes of the business, and they’re deployed to meet that business’s needs. Any enterprise solution you design must function in accordance with the practices and procedures of the enterprise.
To say that an enterprise solution is mission critical means that the application you're creating is vital to the operation of the organization. For example, insurance companies use applications that store policy information in a database. If such an application stopped working, none of the insurance agents could input new policy information or process existing policies, which affects the operation of the enterprise. As such, enterprise applications need to be robust, so they are able to function in unexpected situations and thereby sustain continuous operation. They must also allow for scalability, which means they can be expanded to meet future needs, and have the capability to be maintained and administered.
Distributed solutions are applications where the objects, components, and/or data comprising the solution are located on different computers. These computers are connected through the network, which thereby connects the elements of the application to one another. For example, a user’s computer would have the user interface portion of the solution, with the functionality to access a component containing business rules on an application server. This application server’s component might contain security features, and determine how the user accesses data. By looking at the user’s account, it would determine if the user could access data residing on a third computer, the database server. Because each computer has a different portion of the application, each computer also shares in processing. One CPU handles the user interface, the CPU on the second computer handles processing business rules, while the third processes data. This increases performance, and reduces the amount of resources required by each individual computer.
Data can be spread across a distributed application in different ways. If your solution is to be used by numerous users, you can create an application with the user interface on the user's computer and the database program on a server. This is called a client-server database. The computer running the user interface is called the front end or client portion of a distributed solution. The client (front end) makes requests of the server, which is called the back end of the distributed solution. Typically the front end or client will run on a personal computer (PC) or workstation, although it can run on a server computer if the user is working on a server as if it were a workstation. In either case, the client makes a request to view or manipulate data, and the server processes the request on behalf of the client. This back-end portion of your distributed solution can service such requests from many users at the same time, manipulate the data, and return the results to the client portion on the requesting user's machine. Because it services numerous users, the server requires a significant amount of storage space for the data files, as well as enough processing power to keep performance from suffering.
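The following Visual Basic sketch shows what the client (front-end) side of such a request might look like using ADO. It assumes a project reference to the Microsoft ActiveX Data Objects library; the server, database, and table names are hypothetical.

Private Sub ShowBigOrders()
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    ' Connect to the back-end database server
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=SalesServer;" & _
            "Initial Catalog=Orders;User ID=appuser;Password=secret"

    ' The server processes the query and returns only the results
    Set rs = cn.Execute("SELECT CustomerName, Total FROM Orders WHERE Total > 1000")
    Do While Not rs.EOF
        Debug.Print rs!CustomerName, rs!Total
        rs.MoveNext
    Loop

    rs.Close
    cn.Close   ' releasing the connection ends the session
End Sub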
If such a distributed solution does degrade in performance from too many users, you can consider creating a distributed database. In such a solution, you create two databases that use the same data on two different servers. You then set up these databases to synchronize their data, having one database replicate its changes in the other at specific times. This is often useful when geographical reasons dictate the need to distribute a database, as might be the case if two branch offices opened in different countries, and needed to use the same data. Since it would be expensive and slow to have users connect to a single database using a WAN, you would set up a distributed solution with distributed databases. Users in each location would use the database closest to them, and these two databases would then synchronize with one another.
If a database isn’t heavily accessed, or is used by a small number of users, you can create a distributed application where the data files are located on a server and the database application is located on the client. This is the kind of application you could create with Microsoft Access. The database program manipulates files directly, and must contend and cooperate with the other users who are accessing that particular data file. Because of this, if numerous users are accessing the same data file or files located on the same server simultaneously, the network can become bogged down with traffic, and the database’s performance can suffer. In addition, the server will require significant storage space to contain the databases users will access.
A centralized solution is an application in which everything resides on a single computer, which is accessed by users on smaller, less powerful systems. Centralized solutions are associated with mainframes, which are accessed by terminals or terminal emulation programs. Such terminals can be smart or dumb terminals. Dumb terminals have no processing power of their own; they simply accept input and display the data returned to them. Smart terminals do have processors, and are considerably faster than their dumb counterparts. Terminal emulation programs allow more powerful computers, such as PCs, to fool the mainframe into thinking they're terminals, allowing the computer to access data from the mainframe remotely. When such requests for information are made, they are sent over the network or communication lines to the mainframe. The centralized solution then retrieves the data, with all processing taking place on the mainframe, and transmits the data back to the requesting computer, where it is displayed on the terminal.
Over the last several years, a growing need in business has been the ability for people to work together through their computers. When one person has completed (or is nearing completion of) his or her work, it must be passed on to another worker in the company. This collaboration between the two employees requires that the work each does be consistent and able to mesh with the other's. In the case of several designers working on different components of an application, the different designs would need to work together. This means that as each portion of work is added together to form the complete design of the solution, each person's design would have to mesh with the others or errors would result. In addition, information may need to be transferred from one individual to another in a particular order, following a specific workflow. The first person may need to send his or her work to another designer, who in turn puts the different designs together into a complete design, and then sends it off to the team leader for analysis and approval.
This is where collaborative solutions are necessary in an organization. Collaborative solutions are also known as groupware, and allow groups of people to share data, and interact with one another. Collaborative solutions fall into one of two categories, with the first category being a requirement of the second:
· Collaborative solutions that allow users to share data with one another, or enhance data sharing
· Collaborative solutions that allow users to interact with one another from different computers
The first of these types of collaborative solutions allows two or more users to share data, and can include such things as linked worksheets or documents, or intranet and Internet applications that display dynamic Web pages. The second type of solution enables users to interact with one another, and could include such features as e-mail, Internet Relay Chat (IRC), or other methods of communication in an application. No matter which type of collaborative solution you design, it's important to remember that groups of people will be sharing data and interacting with one another. At no time should users feel isolated from other members of their team, and they should always be able to benefit from collaborating with others in their work.
In designing collaborative solutions, you need to determine how the flow of information needs to move through the company. This can be determined by creating workflow diagrams and usage scenarios. In addition, you need to establish whether work needs to be sent only through a specified workflow, or if users will need to interact with one another as well. This will enable you to determine what features to include in your collaborative solution.
Since no one wants a database application that doesn't store data properly, choosing a data storage architecture is an important issue. Data architecture addresses the flow of data through every stage of the data cycle: acquisition and verification, storage and maintenance, and retrieval. Not only does the data storage architecture address which database management system (DBMS) will be used, it also addresses issues dealing with the effective storage of data. This includes such things as volume considerations, the number of transactions that take place over a certain time period, how many simultaneous connections to the database will occur, and so forth. These and other elements of the data storage architecture will affect the overall performance and effectiveness of your database application.
An important consideration in determining data storage architecture is the amount of information to be saved, and how large your database will become. Different databases have different capacities, accepting greater or smaller amounts of data. For example, SQL Server has a capacity of over 1 terabyte per database. In contrast, Access or Jet databases (.mdb) have a capacity of 1 gigabyte per database, but because you can link tables in your database to other files, the total size is limited only by the available storage capacity. The volume of information to be saved to your database should be estimated before deciding on the type of database you'll use. If you fail to look at how much data may fill the database, you may find that the storage capacity of a particular database is insufficient for your needs.
If you’re creating a database application to replace an existing one, you should look at how much information is being stored in the database by each user. This will give you an effective measure of the volume of data to be stored in the new database. You should also take into account growth factors in the number of users who will be saving data. For example, if 100 users were each saving 1MB of data a month, and there are plans to hire 20 more people for data entry over the next three years, you should figure that 120 MB of data will be saved monthly (120 users x 1 MB / month). By looking at the current trends of data storage, you’re able to determine the volume of data to be stored in your future database.
If you’re creating a new database application, and information on the volume of data being stored isn’t available, it can be considerably more difficult to determine the storage needs of your customer. This is where it becomes important to look at the usage scenarios, and look at the kinds of data to be stored, as well as find out how many users will be accessing the database. By seeing what the database will be used for, the type of information to be stored, and the number of users, you can then determine the volume of information to be stored.
A transaction is a group of programming statements that is processed as a single action. To illustrate this, let’s say you were creating an application for an Automated Teller Machine (ATM). If a user were to make a withdrawal from his or her account, the transaction would probably start with checking to see if there was enough money to withdraw from the account. If there was enough money, then the transaction would continue to adjust the user’s account, and give the user the proper amount of cash. Each of these would be carried out as a single action, a transaction.
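Sketched in Visual Basic with ADO, such a withdrawal might look like the following. It assumes an open ADO Connection named cn (as in the earlier client-server sketch); the table and column names are hypothetical, and the balance check is simplified.

Private Sub WithdrawCash()
    On Error GoTo WithdrawalFailed
    cn.BeginTrans
    ' Both statements are processed as a single action...
    cn.Execute "UPDATE Accounts SET Balance = Balance - 100 " & _
               "WHERE AccountID = 1234 AND Balance >= 100"
    cn.Execute "INSERT INTO Withdrawals (AccountID, Amount) VALUES (1234, 100)"
    cn.CommitTrans     ' ...and committed together
    Exit Sub

WithdrawalFailed:
    cn.RollbackTrans   ' on any error, neither change takes effect
End Sub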
Different organizations will have a different number of transactions carried out over certain time periods. While the ATM of a small bank in the middle of nowhere may carry out a few dozen transactions in a day, a single machine in the middle of New York City could easily carry out thousands of transactions in the same time period. While the first of these could use a Visual FoxPro or an Access/Jet database to handle such activity, the second would be overwhelmed if such a database were used. However, SQL Server databases, or an application working with Microsoft Transaction Server, would be able to handle such heavy activity.
SQL Server databases can handle a high volume of transactions, and should always be considered when creating applications that use mission-critical transactions. An example of this is our ATM application. If the network were to go down in the middle of a transaction, you wouldn’t want the amount of a withdrawal deducted before the user got his or her money. This would cause problems for not only the user of this application but also for the bank. SQL Server logs every transaction, such that if the system fails, it automatically completes all committed changes while rolling back changes that are uncommitted. Visual FoxPro and Microsoft Access/Jet databases don’t have the ability to automatically recover after a system failure like this, meaning that data is lost if the system goes down.
It is important to determine the average number of transactions that will occur over a specific time increment for each project on which you work. This number will generally be different for each project, and for each place the solution is used. It's important not to reuse figures gathered from other projects, as these may be incorrect for what you can expect from the current one. Also, if there are several locations using a previous version of the database application, you should gather information on the number of transactions that occur over a specific time increment at each location, and then average them. This will give you an accurate picture of what to expect.
For your database application to work, it needs to connect to a specific database for a period of time. The period between the time that a user connects with a database and the time that the connection is released is called a session. In other words, when the user logs onto the database, or the application connects to the database on the user’s behalf, the session starts. When the user logs off, the session has ended, and the connection is released.
With desktop applications, you can assume that only one session will be used with a database, unless more than one instance of the application is open at that time. With network applications, it isn’t that simple. You may have dozens, hundreds, or even thousands of simultaneous connections to a single database at any given time. Therefore, you need to consider how many sessions will be required when designing your application.
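In code, a session maps directly onto opening and closing a connection object. Here is a minimal sketch in VB6 with ADO; the connection details are hypothetical.

```
Dim conn As ADODB.Connection
Set conn = New ADODB.Connection

' The session begins when the connection is opened on the user's behalf...
conn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;Initial Catalog=Rentals;User ID=clerk;Password="

' ...the application does its work here...

' ...and the session ends when the connection is released.
conn.Close
Set conn = Nothing
```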
Different databases allow a different number of sessions, or user connections, at the same time. For example, Access databases and those that use the Jet database engine (.mdb) allow 255 concurrent users to be connected to a single database. SQL Server 6.5, however, allows a maximum of 32,767 simultaneous connections. This is a theoretical limit; the actual number of connections depends on the resources (such as RAM) of the server running SQL Server. As you can see, more powerful databases provide a greater number of sessions.
In determining the number of sessions available, you should also consider the effect sessions will have on resources and performance, because each session open on a database server takes up memory. For example, while SQL Server’s theoretical maximum is 32,767 simultaneous connections, the number actually available may be lower, since it depends on available memory and application requirements. In SQL Server 6.5, each user connection takes up 37 KB, while in SQL Server 7 each incurs an overhead of about 40 KB. This overhead reduces the amount of memory that can be used for buffering data and caching procedures. Whatever computer you designate as the database server should have a considerable amount of memory, to ensure that enough connections can be made.
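As a hedged illustration, the connection ceiling can be inspected or raised with the sp_configure system procedure; the sketch below issues it from VB6/ADO. The server name and the value 500 are hypothetical, and note that on SQL Server 7 ‘user connections’ is an advanced option that is self-configuring by default.

```
Dim conn As ADODB.Connection
Set conn = New ADODB.Connection
conn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;Initial Catalog=master;User ID=sa;Password="

' Reserve up to 500 user connections; at roughly 40 KB each under
' SQL Server 7, this sets aside about 20 MB of server memory.
conn.Execute "EXEC sp_configure 'user connections', 500"
conn.Execute "RECONFIGURE"   ' SQL Server 6.5 also requires a server restart

conn.Close
```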
In choosing a data storage architecture for your solution, the scope of the business requirements must be given paramount consideration. The requirements of the business should always determine the data storage architecture. Because the application is driven by the business requirements as well, this also helps ensure that the data storage architecture meets the requirements of the application.
Extensibility is the ability to extend the capabilities that were originally implemented. This includes such things as extended feature sets, or support for ActiveX controls or Automation. The extensibility of the data architecture you use should always be considered, so that the database won’t need to be completely scrapped when new features are required in the future.
Microsoft Access has support for ActiveX controls, including controls that bind to a single row of data. It won’t, however, support controls that bind to more than one row of data, or ActiveX controls that act as containers for other objects.
Microsoft Access also has the ability to control Automation servers (formerly known as OLE Automation servers, or OLE servers). An Automation server is an application that exposes its functionality, allowing it to be used and reused by other applications through Automation. Access can control such servers because it’s an Automation server itself. This means that if your application, even an Internet application, needed the functionality of Access, you could control Access from your program through Automation. This extends the capabilities of Access to other programs.
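As a sketch of what this looks like from the controlling side, a VB6 program could drive Access through Automation as below; the database path and report name are hypothetical.

```
Dim objAccess As Object

' Access registers itself as an Automation server under this ProgID.
Set objAccess = CreateObject("Access.Application")

' Reuse Access's functionality, here its reporting engine, from outside Access.
objAccess.OpenCurrentDatabase "C:\Data\Rentals.mdb"
objAccess.DoCmd.OpenReport "MonthlyRentals"
objAccess.Quit
Set objAccess = Nothing
```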
Unlike Microsoft Access, Visual Basic (VB) and Visual FoxPro (VFP) allow you to create custom Automation servers. Both products provide a greater degree of programming flexibility, allowing you to create your own applications that can use, or be used by, other applications.
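A rough sketch of what such a custom server might look like in VB6 follows. The project name, class name, and business rule are entirely hypothetical, and the class would be compiled into an ActiveX DLL rather than run as a standalone snippet.

```
' Class module clsVendorRules in a hypothetical ActiveX DLL project
' named PurchasingLib. Public members are exposed to Automation clients.
Public Function ApprovalRequired(ByVal Amount As Currency) As Boolean
    ApprovalRequired = (Amount > 5000)   ' a reusable business rule
End Function

' Any Automation-capable client could then reuse the rule:
'   Dim objRules As Object
'   Set objRules = CreateObject("PurchasingLib.clsVendorRules")
'   MsgBox objRules.ApprovalRequired(12000)
```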
Visual Basic ships with the Jet database engine, the same engine that Microsoft Access and Microsoft Office use. This isn’t to say that VB comes with a free copy of Access, just that it uses the same database engine, and that Visual Basic provides the ability to create databases. Admittedly, these databases are less sophisticated than those created with Access or FoxPro, and there are major differences in the extensibility of database applications created with Access and VB. Like Access, Visual Basic 6 and Visual FoxPro 6 support ActiveX controls; unlike Access, however, each of them supports controls that act as containers, and controls that bind to one or more rows of data.
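For instance, here is a minimal sketch of creating a Jet database from VB6 through DAO. It assumes a reference to the Microsoft DAO 3.51 Object Library; the file path and table definition are hypothetical.

```
Dim db As DAO.Database

' Create a new Jet (.mdb) database file directly from Visual Basic.
Set db = DBEngine.CreateDatabase("C:\Data\Inventory.mdb", dbLangGeneral)

' Define a simple table using Jet SQL.
db.Execute "CREATE TABLE Products (ProductID LONG, ProductName TEXT(50))"
db.Close
```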
Reports are a common need of users, requiring data to be printed or displayed in a specific format. Microsoft Access and Visual FoxPro have offered wizards in current and previous versions to make the creation of reports easy. The Professional and Enterprise editions of Visual Basic 6 now include a report writer with a number of features, including drag-and-drop creation of reports from fields in the Data Environment designer. Each of these products can also create reports in HTML format, allowing you to post your reports to the Web.
SQL Server 7 has full integration with Microsoft Office 2000, which enables SQL Server users to use the reporting tools in that version of Microsoft Access. In addition, you can use Office Web Components to distribute any reports you create. Because SQL Server 7 is the first version of SQL Server with integrated Online Analytical Processing (OLAP), it is the best database to select if your customer requires this type of corporate reporting.
The number of users who will use the data storage architecture will have a dramatic effect on the type of database storage you choose. As mentioned earlier, Microsoft Access and Jet databases can handle up to 255 concurrent users, while SQL Server can handle up to 32,767 simultaneous user connections, with the resources available on the system running SQL Server determining the actual maximum. Visual FoxPro doesn’t have a specific limit; its ceiling is set by the system’s resources. Despite this, it shouldn’t be used for the large numbers of users that SQL Server is designed to serve.
If a large number of users are expected to be using the database, you should always consider using SQL Server. On a system with good hardware supporting a small number of concurrent users, it is doubtful that anyone would notice any performance issues; in such a case, unless the developer wanted to take advantage of SQL Server’s extended functionality, you could use Access, a Jet database created with Visual Basic, or a Visual FoxPro database. However, as the number of users grows and performance decreases, it is wise to migrate to SQL Server. If you expect a large number of users to begin with, then SQL Server is the data storage architecture of choice.
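One practical consequence: if the data access code is written against ADO, much of such a migration can come down to swapping the connection string. A hedged sketch follows; both connection strings are hypothetical.

```
Dim conn As ADODB.Connection
Set conn = New ADODB.Connection

' Small user base: a local Jet (.mdb) database.
conn.Open "Provider=Microsoft.Jet.OLEDB.3.51;Data Source=C:\Data\Sales.mdb"
conn.Close

' Growing user base: the same data migrated to SQL Server.
conn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;Initial Catalog=Sales;User ID=sa;Password="
conn.Close
```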
As we’ve seen, different database management programs have different capabilities and limitations that must be considered. Even though your design and application code may be flawless, if the wrong database is used, everything can fall apart. Because of this, it’s important to know the functionality offered by the database types available.
In determining the type of database to use, cost should be a consideration. Microsoft Access and Visual FoxPro are relatively inexpensive, as is Visual Basic, which allows you to create Jet databases for use with your applications. SQL Server, however, is considerably more expensive, and may not be affordable as part of a database application for a small business; for such businesses, Small Business Server would be a wiser choice. In addition, while SQL Server outperforms each of these other databases, little of that performance value will be noticed if only a small number of users access the database. In such cases, SQL Server would be overkill.
Though we’ve discussed how different types of databases should be used in different situations, Table 9-6 shows some of the important differences in the database products on the market. While these are just a few of the attributes of each database type, this table will enable you to view the differences quickly before going into the exam.
Attribute | Access/Jet (.mdb) | Visual FoxPro 6 | SQL Server 7
Capacity | 1 GB per database; because you can link tables in your database to other files, the total size is limited only by the available storage capacity | 2 GB per table | 1,048,516 TB per database
Number of concurrent users | 255 | Unlimited | 32,767 user connections

Table 9-6: Information on Different Databases
Exercise 9-2 tests your knowledge in identifying different types of solutions.
Exercise 9-2: Identifying different types of solutions
Supply the name of the type of environment in the descriptive passages below.
1. A publishing company hired a group of writers to write various sections of a book. This book reflects a wide range of technical expertise. Each writer is considered an expert in their professional area. They are physically located all over the world. Because the publishing company wants to ensure the text flows well from one topic to the next, it provides the applications that allow video conferencing, meeting tools such as electronic whiteboards, and a text delivery mechanism.
2. ABC, Inc., a publisher of children’s education books, has two Novell NetWare LANs and four NT domains. One of the NetWare LANs is in the sales department, which takes phone orders for books. The company also has an Internet presence with an on-line store. Their sales application consists of multiple user interfaces and three major components: customer, order, and product. SQL Server is the database server. The sales application is considered a mission-critical application.
3. A small government agency has a bridge to a mainframe computer that hosts its data and all of its applications. The applications are all written as a text-based or console application. Each worker has a personal computer on their desk. These computers are licensed to run an office application suite and a terminal emulation application. Most of the workers invoke the terminal emulation program to connect to the mainframe and run the mainframe applications.
4. Fun ‘n’ Games, Inc. manufactures toys for the preschool market. This organization has a multi-domain NT network with a major accounting and manufacturing application. It also has a mainframe that hosts its data. The purchasing department has an application that consists of a vendor object and a product object. Because the company has multiple factory locations across the continental United States, the purchasing department wanted to centralize the purchasing duties. The corporate offices have a user interface written in Visual Basic that captures data on the mainframe. The business rules for the vendor object run on a dedicated application server, while the product object runs on the server that also hosts the vendors’ electronic catalogues.
Once you’ve determined what your technical architecture should be, it is time to test whether that architecture is feasible. This is important because while a proposed technical architecture may seem well planned at face value, deeper analysis may show that it has the potential to cause your project to fail. In testing your architecture, you need to show that business requirements, usage scenarios (use case scenarios), and existing technology constraints are all met. If testing shows that these areas aren’t met, or fall short in some areas, then you must assess the potential damage of these shortfalls. In other words, will it make a difference or cause the project to fail?
Because the design of your project is conceptual at this point, there are no actual applications or databases to use in your testing. Prototypes or test applications can be developed, giving the testing team something concrete to work with. While prototypes don’t provide the functionality of the actual application, they give a visual demonstration of what the GUI will look like, and can be useful in seeing whether certain requirements have been met. In Chapter 11, we’ll go into prototypes in greater detail.
The primary method of testing whether requirements have been met is to go through the various documentation that has been generated to this point. This includes the business and user requirements that have been outlined in the vision/scope document, usage scenarios, and documentation on current technology. By comparing these to your technical architecture, you can determine whether the requirements have been met.
Exam Watch: On the exam you’ll be given a case study with proposed solutions, and a series of choices. This is not a memory test. You can refer to the case study as many times as you wish.
The requirements of the business are what drive application design and development, and it’s extremely important that these requirements are met in your design. The business requirements are the reason that you’ve been hired to design the solution in the first place. As such, you want to be able to demonstrate that these requirements have been given the attention they deserve, and that they are addressed in the design.
By going through each of the requirements outlined by the business, and documenting where they are addressed in your design, you are able to show that each requirement has been met, falls short, or hasn’t been addressed at all. This is simply a matter of working through the requirements and checking off each one that is addressed in the design, documenting where or how it has been met so team members can easily refer to it. As we’ll see later in this chapter, if requirements haven’t been addressed, it’s important to either revise the design so they are, or have a good reason why they can’t be included.
Use case scenarios, or usage scenarios, are a vital tool in the design of your application, as they show how the application will be used. By comparing the design of your technical architecture to these scenarios, you can determine whether the solution actually follows the way certain activities must be carried out. For example, if you’ve created a collaborative application, you can check whether the design follows the way work is actually performed in the office, and whether the solution you’ve planned will serve the way users do their work in the organization.
It’s important to determine whether existing technology constraints are met early in the design process, so that technologies aren’t incorporated into the design that won’t work in the current environment. For example, if an organization had a network that didn’t use TCP/IP, had no connection to the Internet, and no intention of implementing an intranet, then using Internet technologies as part of your design would be pointless. If every user in the organization ran Windows 3.1, then you’d be limited to 16-bit applications, as 32-bit applications wouldn’t run on this platform. As you can see, the existing technologies used in an organization have a great impact on how the application can be designed.
It’s important to know as much as possible about the current technology constraints, so that you can show that they have been addressed in your design. This allows you to go through the design of your application, and identify areas of the design that will fail to work or have poor performance with these constraints.
If certain requirements aren’t met, or fall short of what was previously expected, it is important to assess the impact this will have on the project’s success. The requirements for your solution will have varying degrees of importance. While some may be a minor inconvenience, or may be added to the solution in future versions, others may cripple your design and cause it to automatically fail.
While business requirements drive the design and development of a solution, some requirements may be more important than others to the customer. If a business requirement is mission-critical, the organization can’t function without that particular requirement being met; mission-critical requirements that aren’t met will cause your project to fail. Other requirements may be more flexible, and can be dropped from a feature set until a later time. For example, if the customer wanted the ability to connect to a corporate intranet through your solution, but no intranet currently existed, this could be included in a future version. Because of the varying impact of shortfalls in meeting business requirements, it’s important to assess the potential damage to your project early. By holding meetings with customers and users of your solution, you can often determine this quickly.
Because usage scenarios show the way that certain activities get done, failing to meet a usage scenario may keep users from performing their duties. Imagine designing a solution for a video store, and failing to meet the usage scenario that details the rental of movies. Needless to say, such a shortfall would be devastating. While this is an extreme example, such things as technology constraints may keep your design from fulfilling a usage scenario. There may be a need to provide a workaround, another method of performing the task, until it becomes technically possible to fulfill the scenario.
If a solution fails to meet the technology constraints of an application, you need to determine whether the technology needs to be updated, or your design needs to be revised. If a business requirement is mission-critical, and can’t be implemented without upgrading the current environment, then the organization will need to know that the solution will fail without these changes. If a business requirement isn’t mission-critical, then the customer will need to choose between dropping that particular feature, or upgrading a current technology.
The sooner you develop an appropriate deployment strategy, the better prepared you’ll be when the time comes to actually deploy your application. There are numerous ways available today to deploy an application. These include floppy disk, CD-ROM, over the local network from a server containing the necessary setup files, or over the Internet or a corporate intranet. How you choose to deploy your solution will depend on who is installing the solution, what methods are available for use by your users, and which are feasible for the organization.
In most cases, it is the end user who will obtain and install a copy of your application on his or her computer. While end users have a great deal of knowledge on the day-to-day functionality of applications, their experience with computers, networks, and installing applications can vary greatly. Therefore, it is important to offer instructions in simple, plain language, so no technical jargon will confuse them.
In many organizations, a network administrator or support staff will aid users with installing the solution. Either they will install the application on users’ computers for them, or they’ll provide assistance when called upon. In these cases, you should provide a knowledge base or additional information on how they can support users through the installation. These people are also usually responsible for installing server-side components or back-end applications that work with the end users’ front-end applications, so it is important to provide them with detailed instructions on how to perform these tasks.
Floppy disk deployment is the oldest method of deploying a solution, but has been overshadowed in popularity in the last few years by CD-ROMs. New developers will often consider floppy disk deployment a waste of time: large installations may take numerous disks, and significantly more time to install, as the user must labor through swapping one disk for another throughout the installation. However, this method of deployment shouldn’t be completely discounted. There are users who have no CD-ROM drive and no network connection, and can use only this method of deployment. Organizations with older computers still in use that don’t have CD-ROM drives rely on floppy deployment, either because they can’t afford to upgrade all of the computers in the organization with CD-ROM drives, or because they are slowly upgrading each of the systems over a period of time. In planning a deployment strategy, it’s important to consider these issues, and offer floppy disks as an alternative method of installation.
As mentioned, CD-ROMs have become the most popular non-network method of deploying a solution. Most off-the-shelf applications you buy today are on CD, and most computers sold today come with CD-ROM drives as part of the package. Despite this, CD-ROM deployment is a more expensive method of deployment, because you need to purchase a special device to create writeable CDs. While CD-ROM burners have dropped drastically in price over the last few years, they are still somewhat pricey, about the price of a large hard drive. In addition, writeable compact discs must be specially purchased so that you can write your installation files to the media. Despite these issues, which will be of greater concern to smaller developers than to larger ones or organizations with their own development staff, the benefits of CD-ROM deployment are great.
You need to consider what percentage of your intended users have CD-ROM drives on their systems before using this method of deployment. In some cases, CD-ROM deployment may be your only option: a number of organizations don’t allow users to have floppy drives on their machines, for fear a user may save critical information to a floppy and walk out with it. If those users don’t have a network connection either, then CD-ROM deployment will be your only available method.
If your network has a server that users of your application have access to, then you should consider network-based deployment. In this method of deployment, the installation files are saved in a network directory on a server. Users can then access the directory and begin installing from the files on this server. The drawback to this is that if numerous users are installing simultaneously, then the network can become bogged down from the increased traffic. It is, however, an extremely useful method of deployment, especially in cases where a specific person is given the duty of installing applications for the user. When the network traffic is low, such as after business hours, the installation can take place on the network with no disruptions.
Intranets and the Internet are another common method of deployment, similar to network-based deployment. In this method, the installation files are put on a Web server that users have access to, and users then download and/or install the files from the directory on that Web server. As with network-based deployment, the added traffic can slow the network down. While this isn’t an issue with Internet deployment, it is an issue for corporate intranets; therefore, you may wish to limit the times, or the number of connections allowed, for accessing these files.
It’s often wise to plan on setting up a test network of ten or so computers to test your deployment strategy before actually implementing it. This will allow you to find problems that users may encounter when trying to obtain and set up your solution on their computers. While this doesn’t become relevant until the application has been completed and is ready for deployment, it is worth planning for, as it may help to identify and solve problems before your users actually experience them.
Once usage scenarios have been created, or when you’re ready for feedback on your design of a user interface, it’s important to validate your design with the user. This means getting user feedback on your design, and ensuring that it suits their needs and expectations. In validating your conceptual design, you gather together the users you’ve obtained information from and solicit their opinions and input.
A common method of doing this is walking the user through the usage scenarios you’ve created. At points along the way, you can ask for their input directly, or invite them to jump in and comment at any point. Generally, you’ll get considerable feedback from the user, which allows you to determine whether you’re on the right track.
As we’ll see in great detail in Chapter 11, prototypes of the user interface are another method of validating your design. This entails creating a mock-up of the user interface, and inviting feedback from users. Using this method allows you to catch problems in your interface design early in the development process.
Once you’ve gotten the input you need from users, you then go back and refine or redesign the work you’ve done so far. While this seems like a lot of work, it is considerably easier to correct problems early in the design process than to refine or redesign your application once program code has been added. The rule here is to try to get it as close to perfect as possible early in the game.
Here are some scenario questions you may encounter, and their answers.
Which is the best language to use in creating an application?
The choice of language for development depends on the inherent strengths and weaknesses of the language, and the specifics of the project you’re working on. There is no overall “best” language; the programming language should be chosen on a project-by-project basis.

Based on the number of users, which type of database should I use for my application?
Visual FoxPro, Access, and Jet databases are useful for smaller numbers of users. If you expect a large number of users, in the high hundreds or even thousands, SQL Server should be considered.

Why is it important to validate the conceptual design?
It ensures that your vision of the product matches the customer’s and end users’ vision. This confirms you’re on the right track with your design, and helps to keep design problems from cropping up in later design stages.
Answer to Exercise 9-1: Usage Scenario for Video Store Rental Using Task Sequence Model

PRECONDITIONS:
Database of customers and videos exist.
Clerk has access to database.
Customer has selected a video to rent.
· If customer wants video insurance, clerk adds $0.25 fee to account.
· If customer doesn’t want video insurance, it’s recorded and no fee is added.
· Cash
· Credit
POSTCONDITIONS:
Receipt exists showing proof of rental.
Answers to Exercise 9-2: Identifying Different Types of Solutions

1. Collaborative solution type
2. Enterprise solution type
3. Centralized solution type
4. Distributed solution type
Conceptual design is the first step in designing your solution. Here, the activities performed by the users are identified and analyzed, as are tasks that must be performed to solve a business problem. Through the conceptual design process, you lay the groundwork for your solution to deal with these issues. This process is made up of several tasks, which include identifying users and their roles, gathering input from users, and validation of the design.
Because the design of the solution is driven by the requirements of the business, it’s important to gain a business perspective for your solution. It’s also important to gain a user perspective, and understand the requirements of the person who will actually work with the solution.
Through information gathered on the requirements of the business, customers, and end users, you are able to design a solution that meets these needs. Usage scenarios can be helpful for this purpose, showing how tasks will be performed by the end user. From these scenarios, you can build a conceptual design that maps to the needs of the organization and the end user.
In defining the technical architecture of a solution, you determine models and technologies that can be used for your solution’s design. Models, such as the OSI model, can be used to aid in understanding how your solution will communicate with applications on other computers. In addition, you will need to identify whether Internet technologies are required, and determine the language used to develop the solution. Each programming language and platform has inherent strengths and weaknesses. It’s important to look at how these will specifically relate to the project, and determine which is the best to use.
· Conceptual Design is the first part of the Application model. It is here that you look at activities performed by the users, tasks that must be performed to solve a business problem, and then lay the groundwork for your solution to deal with these issues.
· Conceptual design is where you determine what users actually need to do, and forge the basic concept of what your application will become.
· An application created for an enterprise is called a “business solution” because it’s built to solve specific business problems.
· Design flaws instituted in the conceptual phase can produce multiple logical or physical instances of application design or functionality that either do not address a business need or ignore user requirements.
· In gathering user requirements, the business and the units within it become independent entities. The organization, and its departments and units, are seen as individual users with their own needs and perspectives.
· Because organizations and the units that make up the business are viewed as individual entities, it is important to create user profiles and gather information from them as well. Each department and unit, in addition to the business as a whole, will have its own goals, needs, and interests.
· Once you’ve identified who you’re going to acquire information from, you’re ready to gather that information.
· Once you’ve determined the needs of end users, the business entity, and its departments, you need to combine the different perspectives into a single concept. This entails sifting through the information you’ve gathered to determine what will be included in the design.
· By going through the user profiles and gathered information, you see what common issues appeared, which end-user requirements conflict with the interests of the business or its departments, and what great ideas appeared that need to be incorporated into your application’s design.
· A scenario is an outline or plot of a sequence of events. As an outline, it provides a point-by-point, step-by-step analysis that depicts the procedure that occurs when something is done.
· Usage scenarios are used to construct a conceptual design for your application, which is one of the exam objectives. There are different methods of constructing such scenarios in the context of the business and users. While these are included to help you construct conceptual designs, the task sequence, workflow process, and physical environment models aren’t directly addressed in the exam.
· The workflow process model is used to create usage scenarios that show how specific jobs are routed through an organization.
· In designing an application that’s geared to the needs of the user, you need to understand the tasks he or she does to complete an activity. It’s for this reason that the task sequence model is used to create usage scenarios.
· Usage scenarios are also valuable for understanding the physical environment in which your application will be used.
· The physical environment model looks at the environment in which an application will be used.
· From the usage scenarios, you gather the information that allows you to identify business objects, as well as the appropriate solution type for your organization.
· Single-tier solutions are common in desktop applications, where the user may not even have access to a network.
· Remember that with single-tier solutions, everything is located on one machine.
· With two-tier solutions, the database doesn’t reside on the user’s computer. Instead, there is a database server that handles the management of data.
· n-Tier solutions are what developers are commonly referring to when they speak of three-tier solutions.
· While the technologies covered here are those directly covered on the exam, you should try to be familiar with as many as possible. The exam expects that you know at least one programming language before taking it, and have some experience with developing solutions.
· If the current technology environment doesn’t support the requirements of the application, you have one of two options: upgrade the technical resources so they do support the application, or change your design so that it works with what’s already in place.
· Since no one wants a database application that doesn’t store data, choosing a data storage architecture is an important issue.
· Once you’ve determined what your technical architecture should be, it is time to test whether that architecture is feasible.
· On the exam you’ll be given a case study with proposed solutions, and a series of choices. This is not a memory test. You can refer to the case study as many times as you wish.
· The sooner you develop an appropriate deployment strategy, the better prepared you’ll be when the time comes to actually deploy your application.
· Once usage scenarios have been created, or when you’re ready for feedback on your design of a user interface, it’s important to validate your design with the user.