Saturday, 29 December 2012

NGO Management System-3

Scheduling is the process of deciding how to commit resources between a variety of possible tasks. Time can be specified, or left floating as part of a sequence of events.
In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.
The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportionally fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.
In advanced packet radio wireless networks such as the HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favorable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, scheduling is combined with channel-dependent packet-by-packet dynamic channel allocation, or with the assignment of OFDMA multi-carriers or other frequency-domain equalization components to the users that can best utilize them.
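The round-robin discipline mentioned above can be sketched in a few lines; the flow names and quantum used here are illustrative, not taken from any particular network stack:

```python
from collections import deque

def round_robin(queues, quantum=1):
    """Serve packets from several flows one quantum at a time,
    cycling through the flows so no single flow starves the others."""
    flows = deque(deque(q) for q in queues)
    order = []
    while flows:
        flow = flows.popleft()
        for _ in range(quantum):
            if flow:
                order.append(flow.popleft())
        if flow:                      # flow still has packets: requeue it
            flows.append(flow)
    return order

# Three flows with differing backlogs; packets are labelled by flow.
print(round_robin([["a1", "a2", "a3"], ["b1"], ["c1", "c2"]]))
```

Each flow gets one quantum per cycle, so the single-packet flow finishes first while the longest flow is served last, which is exactly the fairness property that distinguishes round-robin from first-come first-served queuing.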



GANTT CHART


A Gantt chart is a type of bar chart that illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of a project. Terminal elements and summary elements comprise the work breakdown structure of the project. Some Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be used to show current schedule status.
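The start/finish bars a Gantt chart shows can be sketched as plain text; the tasks and durations below are hypothetical:

```python
def gantt(tasks, name_width=10):
    """Render a minimal text Gantt chart: one row per task, with '#'
    marking the periods between the task's start and its finish."""
    end = max(start + dur for _, start, dur in tasks)
    lines = []
    for name, start, dur in tasks:
        bar = " " * start + "#" * dur + " " * (end - start - dur)
        lines.append(f"{name:<{name_width}}|{bar}|")
    return "\n".join(lines)

# Hypothetical schedule: (task, start period, duration in periods).
print(gantt([("Design", 0, 3), ("Build", 3, 4), ("Test", 5, 2)]))
```

The overlap between "Build" and "Test" bars is immediately visible, which is the main thing a Gantt chart communicates at a glance.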


A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation and Review Technique.
The PERT chart is sometimes preferred over the Gantt chart, another popular project management charting method, because it clearly illustrates task dependencies. On the other hand, the PERT chart can be much more difficult to interpret, especially on complex projects. Frequently, project managers use both techniques.
  • A diagram which uses the symbols and notations of program evaluation and review technique to depict the flow of dependent tasks and other events in a project

  • A project management technique for determining how much time a project needs before it can be completed. Each activity is assigned a best, worst and most probable completion time estimate. These estimates are then used to determine the average completion time.

  • A chart that displays the sequence of tasks that form the project schedule. PERT charts are particularly useful for highlighting a project's critical path.

  • Stands for Program Evaluation and Review Technique. A project plan where activities are shown as boxes and dependent links between activities are shown as arrows. Also called an Arrow diagram or Network diagram. Originated in the US Navy in the 1950s.
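The best/worst/most-probable estimates described above are conventionally combined with the PERT weighted average (best + 4 × most probable + worst) / 6; a minimal sketch, with illustrative numbers:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT weighted average: the most probable estimate
    counts four times as much as either extreme."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# An activity estimated at best 2 days, most probably 4, worst 12:
print(pert_estimate(2, 4, 12))  # -> 5.0
```

Weighting the middle estimate keeps one pessimistic outlier from dominating the average, which is why PERT uses this formula rather than a plain mean.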

Risk management is the identification, assessment, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events[1] or to maximize the realization of opportunities. Risks can come from uncertainty in financial markets, project failures, legal liabilities, credit risk, accidents, natural causes and disasters, as well as deliberate attacks from an adversary. Several risk management standards have been developed, including those of the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and ISO.[2][3] Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety.
The strategies to manage risk include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.
Certain aspects of many of the risk management standards have come under criticism for producing no measurable improvement in risk, even though confidence in estimates and decisions increases.

Principles of risk management
The International Organization for Standardization (ISO) identifies the following principles of risk management:
Risk management should:
§  create value
§  be an integral part of organizational processes
§  be part of decision making
§  explicitly address uncertainty
§  be systematic and structured
§  be based on the best available information
§  be tailored
§  take into account human factors
§  be transparent and inclusive
§  be dynamic, iterative and responsive to change
§  be capable of continual improvement and enhancement

The process of risk management consists of several steps as follows:

Ø  Establishing the context
Establishing the context involves:
1.   Identification of risk in a selected domain of interest
2.   Planning the remainder of the process.
3.   Mapping out the following:
§  the social scope of risk management
§  the identity and objectives of stakeholders
§  the basis upon which risks will be evaluated, and any constraints.
4.   Defining a framework for the activity and an agenda for identification.
5.   Developing an analysis of risks involved in the process.
6.   Mitigation or Solution of risks using available technological, human and organizational resources.

Ø Identification
After establishing the context, the next step in the process of managing risk is to identify potential risks.
§  Source analysis: Risk sources may be internal or external to the system that is the target of risk management.
Examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport.
The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are:
§  Objectives-based risk identification: Organizations and project teams have objectives. Any event that may partly or completely endanger achieving an objective is identified as a risk.
§  Scenario-based risk identification: In scenario analysis, different scenarios are created. The scenarios may be alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. Any event that triggers an undesired scenario alternative is identified as a risk; see Futures Studies for the methodology used by futurists.
§  Taxonomy-based risk identification: The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled. The answers to the questions reveal risks.
§  Common-risk checking: In several industries, lists of known risks are available. Each risk in the list can be checked for applicability to a particular situation.
§  Risk charting: This method combines the above approaches by listing the resources at risk, the threats to those resources, the modifying factors which may increase or decrease the risk, and the consequences it is wished to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively, one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about.
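The resource/threat matrix described under risk charting can be sketched as a simple lookup; the resources and threats below are hypothetical, chosen only to show how the different starting points reduce to queries over the same data:

```python
# Hypothetical threats for a small project office, kept as a threat-first
# listing; the resource-first view is then a simple lookup over it.
threats = {"power failure": ["server", "database"],
           "data breach": ["database"],
           "staff illness": ["staff"]}

def threats_to(resource):
    """Resource-first view: which threats is this resource exposed to?"""
    return [t for t, affected in threats.items() if resource in affected]

print(threats_to("database"))
```

Starting from consequences instead would just be a third query over the same matrix, which is the flexibility the text describes.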
Ø Assessment
Once risks have been identified, they must then be assessed as to their potential severity of loss and to the probability of occurrence. These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan.
The fundamental difficulty in risk assessment is determining the rate of occurrence since statistical information is not available on all kinds of past incidents. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for immaterial assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should produce such information for the management of the organization that the primary risks are easy to understand and that the risk management decisions may be prioritized. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is:
Risk = rate of occurrence × impact of the event
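The formula above translates directly into the prioritization step the assessment process calls for; the risks, probabilities and impact scale below are illustrative:

```python
def risk_score(probability, impact):
    """Risk = rate (probability) of occurrence x impact of the event."""
    return probability * impact

# Hypothetical project risks: (name, probability, impact on a 1-10 scale).
risks = [("schedule slip", 0.6, 5),
         ("budget overrun", 0.3, 8),
         ("key staff leaving", 0.1, 9)]

# Prioritise the risk management plan: highest score first.
ranked = sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)
print([name for name, _, _ in ranked])
```

Note how a likely moderate risk outranks an unlikely severe one here, which is exactly the trade-off the rate-times-impact formula encodes.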
Categories of risks:

Schedule Risk:
Project schedules slip when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and ultimately the company's economy, and may lead to project failure.
Schedules often slip due to following reasons:
  • Wrong time estimation
  •  Resources (staff, systems, individual skills, etc.) are not tracked properly.
  •  Failure to identify complex functionalities and time required to develop those functionalities.
  •  Unexpected project scope expansions.
Budget Risk:
  •  Wrong budget estimation.
  •  Cost overruns
  •  Project scope expansion
Operational Risks:
Risks of loss due to improper process implementation, failed systems, or external events.
Causes of Operational risks:
  •  Failure to address priority conflicts
  •  Failure to resolve the responsibilities
  •  Insufficient resources
  •  No proper subject training
  •  No resource planning
  •  No communication in team.
Technical risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
  •  Continuous changing requirements
  •  No advanced technology is available, or the existing technology is in its initial stages.
  •  The product is complex to implement.
  •  Difficult integration of project modules.
Programmatic Risks:
These are external risks beyond the operational limits; they are uncertain and outside the control of the program.
These external events can be:
  •   Running out of funds.
  •   Market development
  •   Changing customer product strategy and priority
  •   Government rule changes.


                                                                                       
SYSTEM DESIGN

Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development", then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user.
Object-oriented analysis and design methods are becoming the most widely used methods for computer systems design. The UML has become the standard language in object-oriented analysis and design. It is widely used for modeling software systems and is increasingly used for designing non-software systems and organizations.
Characteristics of a System Design
1. Organization: The structure or order in which the system is built.
2. Interaction: Procedure in which the components interact.
3. Interdependence.
4. Integration
5. Central Objective
Elements of System Analysis
There are four basic elements of system analysis:
1. Outputs

2. Inputs: The essential elements of inputs are
§  Accuracy of data
§  Timeliness
§  Proper format
§  Economy

3. Files
4. Process
Types of Systems
1. Physical or abstract systems
2. Open or closed systems
3. Deterministic or probabilistic
4. Man-made systems
§  Formal systems – Organization representation
§  Informal systems – Employee based system
§  Computer based information systems – Computers handling business applications. These are collectively known as Computer Based Information Systems (CBIS).
a. Transaction Processing System (TPS)
b. Management Information System (MIS)
c. Decision Support System (DSS)
d. Office Automation System (OAS)

Physical or abstract systems:
Physical systems:
Physical systems are tangible entities that may be touched and have physical existence. The interfaces between the elements (or components) of a physical system enable the 'flow' of information, matter and energy.
Abstract systems:
These are conceptual or non-physical entities that may not be touched but can be expressed as relationships and formulas. For example, traffic system models and computer programs are both types of modeled systems. They can be the product of identification, design or invention.

Open or closed systems:
Open systems
Open systems continuously interact with their environment: they receive inputs from, and deliver outputs to, the outside. Open systems can adapt to the changing demands of the user. An open system has a discrete number of interfaces that allow the exchange of matter, energy or information with its surrounding environment.
Closed systems
Closed systems are isolated from environmental influences; in reality, completely closed systems are rare. A closed system is self-contained in such a way that outside events have no influence upon the system, and there is no possible exchange of matter, energy or information with the surrounding environment. After a period of time, all internal activity within a closed system will stop.

Deterministic or probabilistic system:
Deterministic System
A deterministic system is one in which the occurrence of all events is perfectly predictable.

Probabilistic systems
A probabilistic system is one in which the occurrence of events cannot be perfectly predicted.

Man-made systems: 
These systems are designed for interaction between users and the system. Some of the man-made systems are:
a. Transaction Processing System (TPS)
b. Management Information System (MIS)
c. Decision Support System (DSS)
d. Office Automation System (OAS)

Transaction Processing System (TPS): A TPS can be defined as a computer-based system that stores, maintains, updates, classifies and retrieves transaction data for record keeping. TPS are designed to improve the routine business activities on which all organizations depend. The most successful organizations perform transaction processing in a systematic way. TPS provide speed and accuracy to business activities.
Management Information System (MIS): MIS are more concerned with management functions. An MIS can be described as providing all levels of management with the information essential to the smooth running of the business. This information must be accurate, timely, complete and economically feasible.
Decision Support System (DSS): DSS assist managers who must make decisions that are not highly structured, often called unstructured, where no clear procedures exist and the factors to be considered in the decision cannot all be identified in advance. The judgment of the manager plays an important role in such decision making; the DSS supports that judgment but does not replace it.
Office Automation System (OAS): These are among the newest and most rapidly expanding computer-based information systems. They increase the productivity of office workers: typists, secretaries, managers, etc.
An OAS is a multi-function computer-based system that allows many office activities to be performed in an electronic mode. It reduces the paper work in an office.









PROGRAM STRUCTURE
 


Program structure is the overall form of a program, with particular emphasis on the individual components of the program and the interrelationships between these components. Programs are frequently referred to as either well structured or poorly structured. In a well-structured program, the division into components follows some recognized principle such as information hiding, and the interfaces between components are explicit and simple.

According to the flow chart we will implement the program structure:
1). First, the NGO will register itself to use the website, then apply for a scheme and make the payment.
After payment the NGO will receive an application id, date, scheme name and ministry name.
After this the NGO will be shortlisted by application id.

2). If an NGO wants to apply for another scheme, it must first check the expiry date to see whether the scheme is still available.
If available, it selects the scheme and then applies for it.

3). If an NGO does not know of any scheme, it can go to the site, download the schemes and then apply for them.
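The flows above can be sketched in code; all names, id formats and rules here are illustrative assumptions, not the project's actual implementation:

```python
from datetime import date

registered = set()      # NGOs that have registered on the site
applications = {}       # application id -> (NGO, scheme)

def register(ngo):
    registered.add(ngo)

def apply_scheme(ngo, scheme, expiry, today):
    """Flow 1: register -> apply -> pay, after which the NGO receives an
    application id used for shortlisting.  Flow 2's expiry check runs
    before any scheme can be selected."""
    if ngo not in registered:
        raise ValueError("NGO must register first")
    if today > expiry:
        raise ValueError("scheme has expired")
    app_id = f"APP-{len(applications) + 1}"
    applications[app_id] = (ngo, scheme)    # shortlisting is by this id
    return app_id

register("Helping Hands")
print(apply_scheme("Helping Hands", "Rural Health",
                   expiry=date(2013, 3, 31), today=date(2012, 12, 29)))
```

The two guard clauses mirror the flow chart's decision points: registration must precede any application, and an expired scheme cannot be selected.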














DATA INTEGRITY AND CONSTRAINTS


A constraint is a property assigned to a column or a set of columns in a table that prevents certain types of inconsistent data values from being placed in the column(s). Constraints are used to enforce data integrity, which ensures the accuracy and reliability of the data in the database. The following categories of data integrity exist:

Ø Entity Integrity
Ø Domain Integrity
Ø Referential integrity
Ø User-Defined Integrity

Entity Integrity ensures that there are no duplicate rows in a table.

Domain Integrity enforces valid entries for a given column by restricting the type, the format, or the range of possible values.

Referential integrity ensures that rows which are used by other records cannot be deleted, so that corresponding data values between tables remain consistent.

 User-Defined Integrity enforces some specific business rules that do not fall into entity, domain, or referential integrity categories.

Each of these categories of the data integrity can be enforced by the appropriate constraints. Microsoft SQL Server supports the following constraints:

Ø PRIMARY KEY
Ø UNIQUE
Ø FOREIGN KEY
Ø CHECK
Ø NOT NULL

A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row, and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.


A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity, as the primary key constraints do.


A FOREIGN KEY constraint prevents any action that would destroy the links between tables with corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values for which no matching primary key value exists. The foreign key constraints are used to enforce referential integrity.


A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.


A NOT NULL constraint enforces that the column will not accept null values. The not null constraints are used to enforce domain integrity, as the check constraints do.
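All five constraint types can be exercised with standard SQL; this sketch uses Python's built-in sqlite3 rather than SQL Server (note SQLite only enforces foreign keys after a PRAGMA), and the table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

# PRIMARY KEY -> entity integrity; NOT NULL, CHECK -> domain integrity;
# UNIQUE -> entity integrity; REFERENCES -> referential integrity.
con.execute("""CREATE TABLE ministry (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE)""")
con.execute("""CREATE TABLE scheme (
    id          INTEGER PRIMARY KEY,
    ministry_id INTEGER NOT NULL REFERENCES ministry(id),
    budget      INTEGER CHECK (budget > 0))""")

con.execute("INSERT INTO ministry VALUES (1, 'Health')")
con.execute("INSERT INTO scheme VALUES (1, 1, 5000)")

# Referential integrity in action: ministry 99 does not exist.
try:
    con.execute("INSERT INTO scheme VALUES (2, 99, 5000)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same violation raised as an error here is what SQL Server would also reject, just with its own error message and without needing the PRAGMA.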








USER INTERFACE DESIGN


User interface design or user interface engineering is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with the focus on the user's experience and interaction. The goal of user interface design is to make the user's interaction as simple and efficient as possible in terms of accomplishing user goals, an approach often called user-centered design. Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability. The design process must balance technical functionality and visual elements (e.g., the user's mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, web design, or industrial design.
Consider the following tips and techniques:
1.     Consistency, consistency, consistency. I believe the most important thing you can possibly do is ensure your user interface works consistently. If you can double-click on items in one list and have something happen, then you should be able to double-click on items in any other list and have the same sort of thing happen. Put your buttons in consistent places on all your windows, use the same wording in labels and messages, and use a consistent color scheme throughout. Consistency in your user interface enables your users to build an accurate mental model of the way it works, and accurate mental models lead to lower training and support costs.
2.     Set standards and stick to them. The only way you can ensure consistency within your application is to set user interface design standards, and then stick to them.  You should follow Agile Modeling (AM)’s Apply Modeling Standards practice in all aspects of software development, including user interface design.
3.     Be prepared to hold the line. When you are developing the user interface for your system you will discover that your stakeholders often have some unusual ideas as to how the user interface should be developed.  You should definitely listen to these ideas but you also need to make your stakeholders aware of your corporate UI standards and the need to conform to them. 
4.     Explain the rules. Your users need to know how to work with the application you built for them. When an application works consistently, it means you only have to explain the rules once. This is a lot easier than explaining in detail exactly how to use each feature in an application step-by-step.
5.     Navigation between major user interface items is important. If it is difficult to get from one screen to another, then your users will quickly become frustrated and give up. When the flow between screens matches the flow of the work the user is trying to accomplish, then your application will make sense to your users. Because different users work in different ways, your system needs to be flexible enough to support their various approaches. User interface-flow diagrams should optionally be developed to further your understanding of the flow of your user interface.
6.     Navigation within a screen is important. In Western societies, people read left to right and top to bottom. Because people are used to this, should you design screens that are also organized left to right and top to bottom when designing a user interface for people from this culture? You want to organize navigation between widgets on your screen in a manner users will find familiar to them.
7.     Word your messages and labels effectively. The text you display on your screens is a primary source of information for your users. If your text is worded poorly, then your interface will be perceived poorly by your users. Using full words and sentences, as opposed to abbreviations and codes, makes your text easier to understand.  Your messages should be worded positively, imply that the user is in control, and provide insight into how to use the application properly. For example, which message do you find more appealing “You have input the wrong information” or “An account number should be eight digits in length.” Furthermore, your messages should be worded consistently and displayed in a consistent place on the screen. Although the messages “The person’s first name must be input” and “An account number should be input” are separately worded well, together they are inconsistent. In light of the first message, a better wording of the second message would be “The account number must be input” to make the two messages consistent.
8.     Understand the UI widgets. You should use the right widget for the right task, helping to increase the consistency in your application and probably making it easier to build the application in the first place. The only way you can learn how to use widgets properly is to read and understand the user-interface standards and guidelines your organization has adopted.
9.     Look at other applications with a grain of salt. Unless you know another application has been verified to follow the user interface-standards and guidelines of your organization, don’t assume the application is doing things right. Although looking at the work of others to get ideas is always a good idea, until you know how to distinguish between good user interface design and bad user interface design, you must be careful. Too many developers make the mistake of imitating the user interface of poorly designed software.
10.   Use color appropriately. Color should be used sparingly in your applications and, if you do use it, you must also use a secondary indicator. The problem is that some of your users may be color blind and if you are using color to highlight something on a screen, then you need to do something else to make it stand out if you want these people to notice it. You also want to use colors in your application consistently, so you have a common look and feel throughout your application.
11.   Follow the contrast rule. If you are going to use color in your application, you need to ensure that your screens are still readable. The best way to do this is to follow the contrast rule: Use dark text on light backgrounds and light text on dark backgrounds. Reading blue text on a white background is easy, but reading blue text on a red background is difficult. The problem is not enough contrast exists between blue and red to make it easy to read, whereas there is a lot of contrast between blue and white.
12.   Align fields effectively. When a screen has more than one editing field, you want to organize the fields in a way that is both visually appealing and efficient. I have always found the best way to do so is to left-justify edit fields: in other words, make the left-hand side of each edit field line up in a straight line, one over the other. The corresponding labels should be right-justified and placed immediately beside the field. This is a clean and efficient way to organize the fields on a screen.
13.   Expect your users to make mistakes. How many times have you accidentally deleted some text in one of your files or deleted the file itself? Were you able to recover from these mistakes or were you forced to redo hours, or even days, of work? The reality is that to err is human, so you should design your user interface to recover from mistakes made by your users.
14.   Justify data appropriately. For columns of data, common practice is to right-justify integers, decimal-align floating-point numbers, and left-justify strings.
15.   Your design should be intuitable. In other words, if your users don't know how to use your software, they should be able to determine how to use it by making educated guesses. Even when the guesses are wrong, your system should provide reasonable results from which your users can readily understand and ideally learn.
16.   Don't create busy user interfaces. Crowded screens are difficult to understand and, hence, are difficult to use. Experimental results show that the overall density of the screen should not exceed 40 percent, whereas local density within groupings should not exceed 62 percent.
17.   Group things effectively. Items that are logically connected should be grouped together on the screen to communicate they are connected, whereas items that have nothing to do with each other should be separated. You can use white space between collections of items to group them and/or you can put boxes around them to accomplish the same thing.
18.   Take an evolutionary approach. Techniques such as user interface prototyping and Agile Model Driven Development (AMDD) are critical to your success as a developer.
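Tip 14's justification rules can be sketched with Python's format specifications; the data below is made up:

```python
# '<' left-justifies strings, '>' right-justifies integers, so columns
# of names and quantities line up cleanly in fixed-width output.
rows = [("Pens", 120), ("Notebooks", 3), ("Stamps", 1500)]

table = "\n".join(f"{name:<12}{qty:>6}" for name, qty in rows)
print(table)
```

Right-justifying the integers lines up their least significant digits, which is what makes columns of numbers easy to scan and compare.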
TESTING
 


Testing is basically a process to detect errors in the software product. Before going into the details of testing techniques one should know what errors are. In day-to-day life we say whenever something goes wrong there is an error. This definition is quite vast. When we apply this concept to software products then we say whenever there is difference between what is expected out of software and what is being achieved, there is an error.
Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs.
Software testing can also be stated as the process of validating and verifying that a software program/application/product:
1.   meets the business and technical requirements that guided its design and development;
2.   works as expected; and
3.   can be implemented with the same characteristics.
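The "works as expected" requirement above can be made concrete with a few executable checks; the function under test is a hypothetical stand-in, not part of the NGO project code:

```python
# The function under test is a hypothetical stand-in, not project code.
def application_fee(scheme_count):
    """Assumed business rule: a flat fee of 100 per scheme applied for."""
    if scheme_count < 0:
        raise ValueError("scheme count cannot be negative")
    return 100 * scheme_count

# Each check states the expected output for a given input: any difference
# between expected and achieved behaviour is, by the definition above, an error.
assert application_fee(0) == 0
assert application_fee(3) == 300
try:
    application_fee(-1)          # invalid input must be rejected
except ValueError:
    print("invalid input rejected as designed")
```

Checks like these encode the business and technical requirements so that deviations are detected mechanically rather than by inspection.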
Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted.
Need
For any company developing software, at some point pressure to meet the deadline and release the product on time will come into play. Under additional pressure from project stakeholders, the time planned for testing the software (i.e., ascertaining its quality) quite often gets reduced so as not to impact the release date. From a pure business perspective, this can be seen as a positive step, as the product reaches the intended customers on time. Careful consideration should be given, though, to the overall impact of a customer finding a 'bug' in the released product. Maybe the bug is buried deep within a very obscure functional area of the software product, and as the impact only results in a typo within a seldom-used report, the level of impact is very low; in this case, the effect on the business for this software company would probably be insignificant. Software testing (often treated as synonymous with Quality Assurance) can itself have many different purposes (quality assurance, validation, performance, etc.). This is a key decision when planning the QA/software testing, as not testing enough or testing in the wrong areas will inevitably result in missed bugs. The aim should be first to ascertain 'why' we are going to test, not simply 'what' we are going to test.

TEST PLAN



A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from Test Engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include one or more of the following:
• Design Verification or Compliance test - to be performed during the development or approval stages of the product, typically on a small sample of units.
• Manufacturing or Production test - to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.
• Acceptance or Commissioning test - to be performed at the time of delivery or installation of the product.
• Service and Repair test - to be performed as required over the service life of the product.
• Regression test - to be performed on an existing operational product, to verify that existing functionality didn't get broken when other aspects of the environment are changed (e.g., upgrading the platform on which an existing application runs).
Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: Test Coverage, Test Methods, and Test Responsibilities. These are also used in a formal test strategy.
Test coverage in the test plan states what requirements will be verified during what stages of the product life. Test Coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally will have one or more corresponding means of verification. Test coverage for different product life stages may overlap, but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during Design Verification test, but not repeated during Acceptance test. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access (see Design For Test).
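The idea of test coverage can be sketched as a simple requirements-to-tests traceability matrix. The requirement IDs and test names below are hypothetical, chosen only to illustrate how coverage gaps can be detected:

```python
# Minimal traceability matrix: requirement ID -> verification activities.
# All IDs and test names are illustrative, not from a real project.
coverage = {
    "REQ-001": ["design_verification_test", "acceptance_test"],
    "REQ-002": ["design_verification_test"],  # verified once, not repeated later
    "REQ-003": [],                            # a coverage gap
}

def uncovered(matrix):
    """Return requirement IDs that have no corresponding verification."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(coverage))  # ['REQ-003']
```

A matrix like this also makes the overlap between life-cycle stages visible: a requirement verified at Design Verification need not appear again under Acceptance.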
Test methods in the test plan state how test coverage will be implemented. Test methods may be determined by standards, regulatory agencies, or contractual agreement, or may have to be created new. Test methods also specify test equipment to be used in the performance of the tests and establish pass/fail criteria. Test methods used to verify hardware design requirements can range from very simple steps, such as visual inspection, to elaborate test procedures that are documented separately.
Test responsibilities state which organizations will perform the test methods, and at which stage of the product life. This allows test organizations to plan, acquire or develop the test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected, and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.






TEST FLOW INFORMATION


Testing is a complete process. For testing we need two types of inputs. First is software configuration. It includes software requirement specification, design specifications and source code of program. Second is test configuration. It is basically test plan and procedure.
Software configuration is required so that the testers know what is to be expected and tested, whereas the test configuration is the testing plan, that is, the way the testing will be conducted on the system. It specifies the test cases and their expected values, and also whether any testing tools are to be used. Test cases are required to know what specific situations need to be tested. When tests are evaluated, expected results are compared with actual results, and if there is an error, debugging is done to correct it. Testing is a way to learn about quality and reliability. The error rate, that is, the frequency of occurrence of errors, is evaluated; this data can be used to predict the occurrence of errors in the future.
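The comparison of expected and actual results described above can be sketched as a small evaluation step; the result data below is hypothetical:

```python
def evaluate(test_results):
    """Compare expected vs. actual outcomes; collect failing case IDs
    and compute the error rate over all executed cases."""
    errors = [case_id for case_id, expected, actual in test_results
              if expected != actual]
    error_rate = len(errors) / len(test_results)
    return errors, error_rate

# Hypothetical results: (test id, expected outcome, actual outcome)
results = [("TC-001", "form opens", "form opens"),
           ("TC-002", "record saved", "error shown")]

errors, rate = evaluate(results)
print(errors, rate)  # ['TC-002'] 0.5
```

The collected error list feeds debugging, while the error rate is the kind of figure that can be tracked over time to predict future defect occurrence.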



[Figure: the testing process]

TESTING USED



The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process are: 

• Unit Test
• System Test
• Integration Test
• Functional Test
• Performance Test
• Beta Test
• Acceptance Test
• Regression Test

Unit Test


The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units, each with a specific behavior. The test performed on these units of code is called a unit test. How unit tests are written depends on the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and contains clearly defined inputs and expected results.
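A minimal unit-test sketch using Python's standard `unittest` framework; `normalize_username` is a hypothetical unit under test, not a function from this project:

```python
import unittest

def normalize_username(raw):
    """Hypothetical unit under test: trim whitespace and lower-case a username."""
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    """Each test exercises one path with a defined input and expected result."""

    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Admin "), "admin")

    def test_empty_input(self):
        self.assertEqual(normalize_username(""), "")

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```

Note how each test case names the behavior it checks and states its expected result explicitly, mirroring the "clearly defined inputs and expected results" requirement above.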

System Test
Several modules constitute a project. If the project is a long-term one, several developers write the modules. Once all the modules are integrated, several errors may arise. The testing done at this stage is called the system test.


System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.


Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system, or any other system composed of disparate parts, where custom configurations and/or unique installations are the norm.
• Basic tests provide evidence that the system can be installed, configured, and brought to an operational state
• Functionality tests provide comprehensive testing over the full range of the requirements, within the capabilities of the system
• Robustness tests determine how well the system recovers from various input errors and other failure situations
• Inter-operability tests determine whether the system can inter-operate with other third-party products
• Performance tests measure the performance characteristics of the system, e.g., throughput and response time, under various conditions
• Scalability tests determine the scaling limits of the system, in terms of user scaling, geographic scaling, and resource scaling
• Stress tests put a system under stress in order to determine its limitations and, when it fails, the manner in which the failure occurs
• Load and stability tests provide evidence that the system remains stable for a long period of time under full load
• Reliability tests measure the ability of the system to keep operating for a long time without developing failures
• Regression tests determine that the system remains stable as it cycles through the integration of other subsystems and through maintenance tasks
• Documentation tests ensure that the system's user guides are accurate and usable
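The performance measurement mentioned above (throughput under given conditions) can be sketched minimally; `operation` is a hypothetical stand-in for one real system request:

```python
import time

def operation():
    """Hypothetical unit of work standing in for one system request."""
    sum(range(1000))

def measure_throughput(fn, duration=0.1):
    """Count how many calls to fn complete within the given window (seconds)
    and report the rate as calls per second."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        fn()
        count += 1
    return count / duration

print(f"{measure_throughput(operation):.0f} calls/sec")
```

A real performance test would vary load conditions and also record response-time percentiles, but the pattern of measuring a rate over a fixed window is the same.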


Functional Test
Functional testing can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that each module performs its intended functions as stated in the specification, and establishing confidence that the program does what it is supposed to do.

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.

Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
Regression Testing
If a piece of Software is modified for any reason testing needs to be done to ensure that it works as specified and that it has not negatively impacted any functionality that it offered previously. This is known as Regression Testing.

VERIFICATION AND VALIDATION
 


Verification is a Quality control process that is used to evaluate whether or not a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.
Validation is Quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.
It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building the thing right?" "Building the right thing" refers back to the user's needs, while "building it right" checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance.




Validation: Am I building the right product?
Verification: Am I building the product right?

Validation: Determining if the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs. It is traditional and is performed at the end of the project.
Verification: The review of interim work steps and interim deliverables during a project to ensure they are acceptable; to determine whether the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

Validation: Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification: Am I accessing the data right (in the right place, in the right way)?

Validation: A high-level activity.
Verification: A low-level activity.

Validation: Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
Verification: Performed during development on key artifacts, such as walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.

Validation: Determination of the correctness of the final software product by a development project with respect to the user needs and requirements.
Verification: Demonstration of the consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.



A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. In some settings, an oracle could be a requirement or use case, while in others it could be a heuristic. It may take many test cases to determine that a software program or system is functioning correctly. Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan. Figure 2.10 illustrates the point at which test case design occurs in the lab development and testing process.
Figure 2.10: Designing Test Cases

A test case includes:
  • The purpose of the test.
  • Special hardware requirements, such as a modem.
  • Special software requirements, such as a tool.
  • Specific setup or configuration requirements.
  • A description of how to perform the test.
  • The expected results or success criteria for the test.
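The fields listed above can be captured in a simple record structure; a sketch in Python (the class and field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class ManualTestCase:
    """One test case, mirroring the fields listed above."""
    case_id: str
    purpose: str
    steps: str       # a description of how to perform the test
    expected: str    # expected results or success criteria
    hardware: list = field(default_factory=list)  # special requirements, e.g. a modem
    software: list = field(default_factory=list)  # special requirements, e.g. a tool
    setup: str = ""  # specific setup or configuration requirements

tc = ManualTestCase("TC-001", "Check the Sign In button",
                    "Click Sign In", "Home page opens")
print(tc.case_id, "->", tc.expected)
```

Keeping the optional fields (hardware, software, setup) empty by default means most cases stay short while still having a place for special requirements.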
Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.
Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.
Most organizations prefer detailed test cases because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. Detailed test cases are more time-consuming to develop and maintain. On the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.
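Detailed test cases lend themselves to automation because the pass/fail decision is mechanical. A data-driven sketch, with a hypothetical `login` function standing in for the system under test:

```python
def login(username, password):
    """Hypothetical system under test."""
    return "home" if (username, password) == ("admin", "secret") else "error"

# Each detailed case carries exact inputs and the expected result,
# so the outcome is reproducible and comparable across test runs.
cases = [
    ("TC-004",  ("admin", "secret"), "home"),
    ("TC-004b", ("admin", "wrong"),  "error"),
]

report = {cid: ("Pass" if login(*args) == expected else "Fail")
          for cid, args, expected in cases}
print(report)  # {'TC-004': 'Pass', 'TC-004b': 'Pass'}
```

A descriptive test case, by contrast, would leave the inputs to the tester's judgment at run time, which is why it is harder to automate and to compare over time.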

TEST CASES DESIGN
 

| Test Id | Objective | Description | Expected Result | Actual Result | Status |
|---------|-----------|-------------|-----------------|---------------|--------|
| TC-001 | To check the functionality of link buttons | Run New User | New User form opens | New User form opens | Pass |
| TC-002 | To check the caption of the password label | Run login and check the caption | "Password" | "Password" | Pass |
| TC-003 | To check whether the text field is a password field | Run the login | Password <= 8 chars | Password <= 8 chars | Pass |
| TC-004 | To check the functionality of the Sign In button | Run the sign-in page | Moves to sign-in page | Moves to sign-in page | Pass |
| TC-005 | To check the functionality of the Sign Up button | Run the sign-up page | Moves to New User page | Moves to New User page | Pass |
| TC-006 | To check the functionality of the Home menu | Run the home page | "Home Page" | "Home Page" | Pass |
| TC-007 | To check the Back button | Run the back/home page | Previous page | Previous page | Pass |
| TC-008 | To check the functionality of the post Submit button | Run the post page | Next page | Next page | Pass |
| TC-009 | To check the Print button function | Print the current page | Print | Print | Pass |
| TC-010 | To check whether txtuser accepts both characters and numerics | Type both characters and numerics in txtuser | Accepted | Accepted | Pass |
| TC-011 | To check the functionality of the Discard button | Discard the form | Error message | Error message | Pass |
| TC-012 | txtpasswd should display asterisks (*) | Type in txtpasswd | Only asterisks (*) | Only asterisks (*) | Pass |

