
Analysis of Software Artifacts

  • Course name: Analysis of Software Artifacts
  • Course number: SE-01
  • Area of instruction: Computer Science and Engineering

Administrative details

  • Faculty: Computer Science and Engineering
  • Year of instruction: 1st year of MSc
  • Semester of instruction: 2nd semester
  • No. of Credits: 5 ECTS
  • Total workload: 180 hours on average
  • Frontal lecture hours: 2 hours per week
  • Frontal tutorial hours: 0 hours per week
  • Lab hours: 2 hours per week
  • Individual lab hours: 2 hours per week
  • Frequency: weekly throughout the semester
  • Grading mode: letter grades (A, B, C, D)

Course outline

Software quality is a key aspect of any IT solution, whether a few hundred lines of code for a smartphone app or a few million lines of code for enterprise resource planning software. The Analysis of Software Artifacts course provides techniques to develop confidence in the quality of the software being produced or acquired, regardless of its size and domain. The course adopts the view that software quality is not only the absence of defects but encompasses all the characteristics that bear on the ability of the software to satisfy stated and implied needs. Software quality is then defined from different perspectives (product quality, quality in use, and process quality) through the use of specific quality models. The course systematically explores different quality attributes and the techniques most appropriate to verify them. Specific topics include software testing, static analysis and model checking, inspections, technical debt, the cost of software quality, planning for quality, quantitative models, and defect classifications. The course balances traditional lectures with small projects in which students apply the ideas they are learning to real artifacts. The final project consists of preparing a quality plan for the project.
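
To give a flavor of the hands-on work (the assignments are in Java), the sketch below illustrates the mutation-testing idea covered later in the course: a small, deliberately seeded fault (the "mutant", shown here as a comment) is detected, or killed, only if the test suite checks the relevant boundary value. The class, method, and test names are hypothetical and not taken from the course materials; JUnit 5 is assumed.

  // Hypothetical example for illustration only (JUnit 5 assumed).
  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.*;

  class Grading {
      // Original code: a score of exactly 60 passes.
      static boolean passes(int score) {
          return score >= 60;
          // Mutant: "return score > 60;" would wrongly fail a score of 60.
      }
  }

  class GradingTest {
      @Test
      void boundaryScoreOfSixtyPasses() {
          // This boundary-value test kills the ">" mutant, which returns false here.
          assertTrue(Grading.passes(60));
      }

      @Test
      void scoreBelowBoundaryFails() {
          assertFalse(Grading.passes(59));
      }
  }

A test suite without the boundary case would let the mutant survive, which is exactly the kind of adequacy gap the course's measurement and testing topics are designed to expose.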

Expected learning outcomes

  • Define and execute a testing plan
  • Understand and use quality models and metrics
  • Perform static analysis of the code
  • Define a quality plan
  • Understand and use verification techniques

Required background knowledge

Discrete math and statistics knowledge at the undergraduate level is strongly recommended. Programming experience and familiarity with the Java language are mandatory for the completion of the assignments.

Prerequisite courses

Fall Term (MSD, Methods)

Detailed topics covered in the course

  • Views on Quality Models
  • Measurements & Quality Metrics
  • Technical Debt
  • Verification Overview
  • Measuring Test Adequacy
  • Black Box Testing
  • Modeling the Input Domain
  • Combinatorial Testing
  • Basis Path & Data Flow Testing (a brief worked example follows this list)
  • Random & Mutation Testing
  • Inspections
  • Quality Planning
  • Static Analysis
  • Model Checking
  • Performance Analysis & Verification
  • Maintainability Analysis & Verification
  • Security Analysis & Verification
  • Organizational Quality & Process Improvement
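
The basis path testing topic above can be made concrete with a small worked example; this is a sketch with hypothetical names (JUnit 5 assumed), not course material. For a method with two independent decisions, McCabe's cyclomatic complexity is V(G) = decisions + 1 = 3, so a basis set of three linearly independent paths suffices, one test per path:

  // Hypothetical example for illustration only (JUnit 5 assumed).
  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.assertEquals;

  class Discount {
      // Two independent decisions => cyclomatic complexity V(G) = 2 + 1 = 3.
      static double total(double amount, boolean loyalCustomer) {
          double result = amount;
          if (amount > 100.0) {        // decision 1
              result -= 10.0;
          }
          if (loyalCustomer) {         // decision 2
              result *= 0.95;
          }
          return result;
      }
  }

  class DiscountBasisPathTest {
      @Test
      void neitherDecisionTaken() {    // basis path 1: skip both branches
          assertEquals(50.0, Discount.total(50.0, false), 1e-9);
      }

      @Test
      void amountDecisionOnly() {      // basis path 2: first branch only
          assertEquals(190.0, Discount.total(200.0, false), 1e-9);
      }

      @Test
      void loyaltyDecisionOnly() {     // basis path 3: second branch only
          assertEquals(47.5, Discount.total(50.0, true), 1e-9);
      }
  }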

Textbook

Reference material

  • David A. Garvin, What Does "Product Quality" Really Mean?
  • Volker Kruger, Main Schools of TQM: "The Big Five"
  • Steve McConnell, Managing Technical Debt
  • Jean-Louis Letouzey, Michel Ilkiewicz, Managing Technical Debt with the SQALE Method
  • Stephen Chin, Erik Huddleston, Walter Bodwell, and Israel Gat, The Economics of Technical Debt
  • Douglas W. Hubbard, How to Measure Anything: Finding the Value of "Intangibles" in Business
  • SEI, Foundations of Measurement
  • Frank Buechner, Is 100% Code Coverage Enough?
  • Brian Marick, How to Misuse Code Coverage
  • Ben H. Smith, Laurie Williams, Should software testers use mutation analysis to augment a test set?
  • Frank Buechner, Test Case Design Using the Classification Tree Method
  • Thomas J. McCabe, Arthur H. Watson, Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric
  • Richard Hamlet, Random Testing
  • Matt Warnock, Look out! It’s the fuzz!
  • Karl E. Wiegers, Peer Reviews in Software: A Practical Guide
  • DoD, Formal Inspections
  • Jason Cohen, Questions for a Review Process
  • Cohen, Prepare to Succeed: A Guide to Effective Code Review
  • Michael A. Howard, A Process for Performing Security Code Reviews
  • Stefan Wagner, Software Quality Economics for Combining Defect-Detection Techniques, 2005
  • Ashok Shenvi, Defect Prevention with Orthogonal Defect Classification, 2009
  • Brian Chess, Jacob West, Secure Programming with Static Analysis, 2007
  • Mordechai Ben-Ari, A Primer on Model Checking, 2010
  • Dave Jewell, Performance Engineering and Management Method, 2008
  • Jane Hillston, Performance Modeling, Operational Laws
  • Craig Shallahamer, Forecasting Oracle Performance (Ch. 5, Practical Queuing Theory), 2007
  • Gerald Everett, Performance Testing, Chapter 9
  • IEEE Guide for Software Verification and Validation Plans, 1993
  • Rick D. Craig, Systematic Software Testing, 2002
  • Peter Mell, A Complete Guide to the Common Vulnerability Scoring System, 2007
  • NIST, Technical Guide to Information Security Testing and Assessment, 2008
  • Boris Mutafelija, Systematic Process Improvement Using ISO 9001:2000 and CMMI, 2003
  • Edward F. Weller, Practical Applications of Statistical Process Control, 2000
  • Larry Webber, Michael Wallace, Quality Control for Dummies, 2007
  • Mahesh S. Raisinghani, Six Sigma: concepts, tools, and applications, 2005
  • SEI, Practical Software Measurement: Measuring for Process Management and Improvement, 1997

Required computer resources

Laptop

Evaluation

  • Mid-term exam (20%)
  • Final exam (20%)
  • Quality plan (10%)
  • Group projects (20%)
  • Individual assignments (20%)
  • Participation (10%)