
Software Quality and Reliability

  • Course name: Software Quality and Reliability
  • Course code: 000000517

Course characteristics

Key concepts of the class

  • Quality Models and Metrics
  • Verification Techniques and Testing
  • Adequacy Criteria
  • Reliability Engineering
  • Cost of quality

What is the purpose of this course?

Building high-quality software is of utmost importance; however, it is easier said than done. The course is an overview of software quality and software reliability engineering methods. It includes an introduction to software quality, an overview of static analysis methods, testing techniques, and reliability engineering. The students will put the methods into practice during laboratory classes and will dig deeper into the topics in a small, realistic project. The course balances traditional lectures, laboratory classes, and a small course project in which students apply the concepts and methods they learn to real artifacts. The course project consists of a quality analysis of an open-source project of the students' choice.

Course Objectives Based on Bloom’s Taxonomy

- What should a student remember at the end of the course?

By the end of the course, the students will remember:

  • Several views on software quality.
  • Trade-offs among quality attributes in quality models.
  • Major differences between verification techniques.
  • Adequacy criteria for verification.
  • Definition of reliability and the ways to calculate the necessary reliability.
  • Cost of quality.

- What should a student be able to understand at the end of the course?

By the end of the course, the students should be able to describe and explain (with examples):

  • The usage of quality models.
  • The cost of quality concept.
  • Strengths and weaknesses of specific verification techniques.
  • Reliability engineering.

- What should a student be able to apply at the end of the course?

By the end of the course, the students should be able to:

  • Define a quality model of a software project in a given context.
  • Select appropriate verification techniques and justify their adequacy.
  • Define the necessary reliability for a software project in a given context.
  • Justify quality-related decisions to different stakeholders based on the cost of quality concepts.

Course evaluation

Evaluation

Course grade breakdown            Default points    Proposed points
Labs/seminar classes              20                10
Interim performance assessment    30                50
Exams                             50                40

The students' performance will be evaluated as follows:

  • Mid-term exam (20%)
  • Final exam (20%)
  • Reading Questions (10%)
  • Project: mid-term presentation (20%)
  • Project: final report (20%)
  • Participation (10%)

Grades range

Course grading range    Default range    Proposed range
A. Excellent            90-100           80-100
B. Good                 75-89            65-79
C. Satisfactory         60-74            50-64
D. Poor                 0-59             0-49


If necessary, please indicate freely your course's grading features: The semester starts with the default range shown in the table above, but it may change slightly (usually it is reduced) depending on how the semester progresses.

Resources and reference material

  • Elena Dubrova, "Fault Tolerant Design"
  • Laura L. Pullum, "Software Fault Tolerance Techniques and Implementation"
  • Heiko Koziolek, "Operational Profiles for Software Reliability"
  • G. D. Everett and R. McLeod, Jr., "Software Testing: Testing Across the Entire Software Development Life Cycle"
  • Daniel Galin, "Costs of Software Quality"
  • Stefan Wagner, "Software Quality Economics for Combining Defect-Detection Techniques"

Course Sections

The main sections of the course and the approximate distribution of hours between them are as follows:

Course Sections
Section    Section Title               Teaching Hours
1          Defining quality            8
2          Verification and Testing    10
3          Reliability                 6

Section 1

Section title:

Defining quality

Topics covered in this section:

  • Introduction, Views on Quality
  • Quality Models
  • Measurements & Quality Metrics
  • Cost of quality

What forms of evaluation were used to test students’ performance in this section?

Form of evaluation                                          Yes/No
Development of individual parts of software product code   No
Homework and group projects                                 Yes
Midterm evaluation                                          Yes
Testing (written or computer based)                         No
Reports                                                     Yes
Essays                                                      Yes
Oral polls                                                  No
Discussions                                                 Yes


Typical questions for ongoing performance evaluation within this section

  1. What is the dominant quality view implicit in SCRUM and RUP?
  2. Explain in your own words, and in no more than three sentences, the main contribution of one of the quality gurus, such as Ishikawa.
  3. What is the difference between must-have attributes and delighters in Kano's concept?
  4. What is the main difference between a quality model like ISO 25010 and SAP Products Standard?
  5. Describe in your own words, with regard to ISO 25010, the following quality attributes: Security, Reliability and Maintainability.
  6. What is Kruchten’s definition and taxonomy of Technical Debt?
  7. According to Highsmith, what is the relation between Technical Debt and the Cost of Change?
  8. In McConnell's taxonomy, which type of Technical Debt can be positive?

Typical questions for seminar classes (labs) within this section

  1. Define the major quality focus of the customer in a given project.
  2. Using SONAR, evaluate the maintainability of a given project.
  3. Discuss your interpretation of the obtained quality level in a given project.
  4. Describe how and for what purposes quality models are useful. Provide an example from your studio project.
  5. Map the requirement "the system shall be easy to maintain" to the ISO 25010 quality model. Provide a definition down to the metric level for at least two sub-characteristics relevant to the requirement, and represent the mapping graphically.
  6. Give an example of possible appraisal costs for a given project.
  7. Present the quality model for the practicum project.

Test questions for final assessment in this section

  1. Explain the difference between product quality, quality in use and process quality. Provide 2-3 quality attributes of each category, briefly describing them.
  2. What quality view best encompasses the phrase "Quality consists of the extent to which a specimen [a product-brand-model-seller combination] possesses the service characteristics you desire"?
  3. Explain the difference between accuracy and precision of measurement methods.
  4. For each of the following quantities, indicate the scale (nominal, ordinal, interval, or ratio) of the data (just the scale, no justification required): a. Categories of defect types in a bug database. b. Branch coverage of a test suite. c. Severity of the defects in a bug database. d. Statement coverage of a test suite. e. Number of features delivered on a milestone.
  5. Explain the different types of technical debt that a project might incur.
  6. Give a definition of the constituent parts of the cost of quality.
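
One common summary of the constituent parts asked about in question 6, following the classical cost-of-quality model (Galin's software-specific model adds managerial cost categories on top of these):

    cost of quality = costs of control + costs of failure of control
                    = (prevention costs + appraisal costs)
                      + (internal failure costs + external failure costs)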

Section 2

Section title:

Verification overview and Testing

Topics covered in this section:

  • Verification Overview
  • Measuring Test Adequacy
  • Input Domain Testing
  • Random & Mutation Testing

What forms of evaluation were used to test students’ performance in this section?

Form of evaluation                                          Yes/No
Development of individual parts of software product code   No
Homework and group projects                                 Yes
Midterm evaluation                                          Yes
Testing (written or computer based)                         No
Reports                                                     Yes
Essays                                                      Yes
Oral polls                                                  No
Discussions                                                 Yes


Typical questions for ongoing performance evaluation within this section

  1. In the context of mutation testing: a. What is an equivalent mutant? b. What is the meaning of the terms killed and dead on arrival? c. What is the difference between the two?
  2. Develop BVA test cases for an application that implements the logic as defined in the exercise.
  3. What is the relation between branch coverage and mutation testing? (See the sketch after this list.)
  4. What is an infeasible path?
  5. What is fuzz testing? How is it different from random testing?
  6. What is the oracle problem?
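
A minimal sketch for questions 1 and 3 above; the method name isAdult and the age threshold are invented for illustration:

    public class MutationSketch {
        // Original method under test (hypothetical example).
        static boolean isAdult(int age) {
            return age >= 18;    // original boundary condition
        }

        // A mutant produced by relational-operator replacement: ">=" becomes ">".
        static boolean isAdultMutant(int age) {
            return age > 18;     // mutated boundary condition
        }

        public static void main(String[] args) {
            // The boundary input 18 kills the mutant: original and mutant disagree on it.
            System.out.println("original(18) = " + isAdult(18));       // true
            System.out.println("mutant(18)   = " + isAdultMutant(18)); // false
            // Inputs such as 30 and 10 already achieve full branch coverage of the
            // original method, yet they leave this mutant alive -- which is why
            // mutation testing is generally a stronger adequacy criterion than
            // branch coverage.
        }
    }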

Typical questions for seminar classes (labs) within this section

  1. Write a short code snippet that contains a possible null-pointer exception, and two different sets of test cases that achieve full branch coverage for the snippet. The first set of test cases should miss the defect; the second should trigger it. (See the sketch after this list.)
  2. Develop a classification tree covering all test-relevant aspects for a Merge method. The method accepts two ordered integer vectors with a maximum of 128 elements each and returns a single ordered vector with no duplicates, formed from the elements of the input vectors.
  3. Develop test cases for the logical function (A & B) | C -> D so that it achieves 100% MC/DC.
  4. Develop test cases to achieve 100% basis path coverage utilizing McCabe method for the program below. Include: control flow graph, basis paths, test cases.
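
A minimal sketch for question 1 above; the method normalize and its behaviour are invented for illustration:

    public class CoverageSketch {
        // Hypothetical method under test: returns "unknown" for an empty name,
        // otherwise the trimmed name. Defect: a null argument slips past the
        // check, so name.trim() throws a NullPointerException.
        static String normalize(String name) {
            if (name != null && name.isEmpty()) {
                return "unknown";              // decision true
            }
            return name.trim();                // decision false; possible null dereference
        }

        public static void main(String[] args) {
            // Test set A = {"", "  Bob "}: covers both branches but never passes null,
            // so the defect is missed.
            System.out.println(normalize(""));
            System.out.println(normalize("  Bob "));

            // Test set B = {"", null}: also covers both branches, and the null input
            // triggers the NullPointerException (the program terminates here).
            System.out.println(normalize(""));
            System.out.println(normalize(null));
        }
    }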

Test questions for final assessment in this section

  1. Identify equivalence classes using the Decision Table Method for a given problem.
  2. Calculate the number of test cases needed to achieve Basis Path coverage for a code sample. (See the worked formula after this list.)
  3. Provide a test set that achieves full Basis Path coverage for a code sample.
  4. Explain "dead on arrival" concept in the context of mutation testing.
  5. Give examples of several types of usage for the fuzz testing.
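
For questions 2 and 3, the number of basis paths in McCabe's method equals the cyclomatic complexity of the control flow graph. A worked sketch for an invented routine containing one if statement followed by one while loop (5 nodes, 6 edges):

    V(G) = E - N + 2 = 6 - 5 + 2 = 3
    V(G) = number of binary decisions + 1 = 2 + 1 = 3

    A basis path set for that routine therefore contains 3 linearly independent
    paths, so 3 test cases are sufficient for basis path coverage.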

Section 3

Section title:

Reliability

Topics covered in this section:

  • Reliability Introduction and Necessary Reliability
  • System Reliability and Reliability Strategies
  • Operational Profile and Performance Testing

What forms of evaluation were used to test students’ performance in this section?

Form of evaluation                                          Yes/No
Development of individual parts of software product code   No
Homework and group projects                                 Yes
Midterm evaluation                                          Yes
Testing (written or computer based)                         No
Reports                                                     Yes
Essays                                                      Yes
Oral polls                                                  No
Discussions                                                 Yes


Typical questions for ongoing performance evaluation within this section

  1. Assume that a software system will experience 150 failures in infinite time. The system has experienced 60 failures so far. The initial failure intensity at the beginning of system testing was 15 failures per CPU hour. What is the current failure intensity? (See the worked sketch after this list.)
  2. Explain in your own words the checkpoint recovery mechanism. Assuming that several communicating processes run in parallel, what will be a limitation of the checkpoint recovery mechanism?
  3. In your own words, define the Operational Profile.
  4. Explain the impact of Utilization on Response Time.
  5. Explain how Amdahl's law relates to performance improvements during the development process.
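
Question 1 above appears to assume Musa's basic execution time model; under that assumption, with total expected failures ν0 = 150, failures experienced so far μ = 60, and initial failure intensity λ0 = 15 failures per CPU hour:

    λ(μ) = λ0 × (1 - μ/ν0) = 15 × (1 - 60/150) = 15 × 0.6 = 9 failures per CPU hour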

Typical questions for seminar classes (labs) within this section

  1. For the course project, define the necessary reliability.
  2. Explain in your own words the difference between N-version programming and N-version self-checking programming.
  3. Explain how the operational profile concept can be applied to regression and load tests.
  4. From a pure performance point of view, is it better, the same, or worse to have a single server or two servers, each with half the speed?
  5. You execute a benchmark test twice and find that the performance of the system was 30 transactions/hour the first time and 20 transactions/hour the second time. What is the average throughput?
  6. You execute a load test for one hour, first during peak hours and again off-peak. During peak hours the system processes 20 transactions/hour. Off-peak it processes 30 transactions/hour. What is the average throughput?
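
One common reading of questions 5 and 6 is that the benchmark executes a fixed workload of N transactions per run, while each load-test run lasts a fixed hour; under those assumptions:

    Benchmark (fixed workload of N transactions per run):
        total time         = N/30 + N/20 = N/12 hours
        average throughput = 2N / (N/12) = 24 transactions/hour   (harmonic mean)

    Load test (two fixed one-hour runs):
        average throughput = (20 + 30) / 2 = 25 transactions/hour (arithmetic mean)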

Test questions for final assessment in this section

  1. Assume that a software system is undergoing system-level tests and the initial failure intensity is 15 failures per CPU hour. The failure intensity decay parameter has been found to be 0.025 per failure. So far the test engineers have observed 60 failures. What is the current failure intensity? (See the worked sketch after this list.)
  2. Explain in your own words the role of the voter in N-version methods.
  3. Describe in your own words the limitations of the Operational Profile.
  4. Give an example illustrating general relationships between response time, throughput, and resource utilization.
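
Question 1 above appears to assume Musa's logarithmic Poisson execution time model, in which θ is the failure intensity decay parameter; under that assumption, with λ0 = 15 failures per CPU hour, θ = 0.025 per failure, and μ = 60 failures observed:

    λ(μ) = λ0 × e^(-θμ) = 15 × e^(-0.025 × 60) = 15 × e^(-1.5) ≈ 3.35 failures per CPU hour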