Open-source software assessment methodologies

Several methods have been created to define an assessment process for free/open-source software. Some focus on aspects such as the maturity, durability, and strategy of the organisation behind the open-source project itself; other methodologies add functional aspects to the assessment process.

Existing methodologies

There are more than 20 different OSS evaluation methods.[1]

  • Open Source Maturity Model (OSMM) from Capgemini
  • Open Source Maturity Model (OSMM) from Navica
  • Open Source Maturity Model (OSMM) by Woods and Guliani[2]
  • Methodology of Qualification and Selection of Open Source software (QSOS), from Atos Origin[7]
  • Open Business Readiness Rating (OpenBRR)
  • Open Business Quality Rating (OpenBQR)[3]
  • QualiPSo OpenSource Maturity Model (OMM)
  • QualiPSo Model for Open Source Software Trustworthiness (MOSST)[4]
  • QualOSS – Quality of Open Source
  • Evaluation Framework for Open Source Software[5]
  • A Quality Model for OSS Selection[6]
  • Observatory for Innovation and Technological transfer on Open Source software (OITOS)[8]
  • Framework for OS Critical Systems Evaluation (FOCSE)[9]

Comparison

Comparison criteria

Stol and Babar have proposed a comparison framework for OSS evaluation methods.[1] The comparison presented below is based on the following criteria:

  • Seniority: the year the methodology was created.
  • Original authors/sponsors: the original authors of the methodology and its sponsoring entity, if any.
  • License: the distribution and usage license covering the methodology and the resulting assessments.
  • Assessment model:
    • Detail levels: whether the methodology offers several levels of detail, i.e. assessment granularity.
    • Predefined criteria: whether the methodology provides predefined assessment criteria.
    • Technical/functional criteria: whether the methodology permits domain-specific criteria based on technical information or features.
  • Scoring model (illustrated by the sketch after this list):
    • Scoring scale by criterion: the scale on which each criterion is scored.
    • Iterative process: whether the assessment can be performed and refined in several passes, each improving the level of detail.
    • Criteria weighting: whether weights can be applied to the assessed criteria as part of the methodology's scoring model.
  • Comparison: whether the methodology defines a comparison process.
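
Most of the methodologies compared here share the same basic scoring pattern: each criterion receives a score on a fixed scale, and a weighted aggregation turns the per-criterion scores into an overall rating. The following minimal Python sketch illustrates that generic pattern; the criteria names, weights, and scores are invented for the example, and the 0–2 scale merely mirrors the one QSOS uses. It reproduces no particular methodology.

    # Minimal sketch of the weighted scoring step shared by most OSS
    # assessment methodologies: score each criterion on a fixed scale,
    # then aggregate the scores with per-criterion weights.
    # The criteria, weights, and scores are invented examples; the 0-2
    # scale mirrors QSOS's, but no specific methodology is reproduced.

    SCALE_MIN, SCALE_MAX = 0, 2  # e.g. QSOS scores on 0-2, OpenBRR on 1-5

    def weighted_score(scores: dict[str, int], weights: dict[str, float]) -> float:
        """Aggregate per-criterion scores into a single weighted rating."""
        for name, value in scores.items():
            if not SCALE_MIN <= value <= SCALE_MAX:
                raise ValueError(f"score for {name!r} is outside the scale: {value}")
        total_weight = sum(weights[name] for name in scores)
        # A weighted average keeps the result on the same scale as the inputs.
        return sum(scores[name] * weights[name] for name in scores) / total_weight

    # Weights express how much each criterion matters to the adopting
    # organisation (cf. "criteria weighting" above); an iterative
    # assessment would re-run this with refined criteria and scores.
    weights = {"maturity": 3.0, "documentation": 1.0, "community": 2.0}
    scores = {"maturity": 2, "documentation": 1, "community": 2}
    print(f"Overall rating: {weighted_score(scores, weights):.2f}")  # -> 1.83

Roughly, a "strict" scoring model such as QSOS's fixes the meaning of each score value in advance, whereas a "flexible" one leaves that interpretation to the evaluator.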

Comparison chart

OSMM Capgemini
  • Seniority: 2003
  • Original authors/sponsors: Capgemini
  • License: non-free license, but distribution is authorised
  • Assessment model: practical; 2 axes on 2 levels of detail; predefined criteria: yes; technical/functional criteria: no
  • Scoring model: flexible; scale of 1 to 5 per criterion; iterative process: no; criteria weighting: yes
  • Comparison: yes

OSMM Navica
  • Seniority: 2004
  • Original authors/sponsors: Navicasoft
  • License: assessment models licensed under the Academic Free License
  • Assessment model: practical; 3 levels of detail; predefined criteria: yes; technical/functional criteria: no
  • Scoring model: flexible; scale of 1 to 10 per criterion; iterative process: no; criteria weighting: yes
  • Comparison: no

QSOS
  • Seniority: 2004
  • Original authors/sponsors: Atos Origin
  • License: methodology and assessment results licensed under the GNU Free Documentation License
  • Assessment model: practical; 3 levels of detail or more (functional grids); predefined criteria: yes; technical/functional criteria: yes
  • Scoring model: strict; scale of 0 to 2 per criterion; iterative process: yes; criteria weighting: yes
  • Comparison: yes

OpenBRR
  • Seniority: 2005
  • Original authors/sponsors: Carnegie Mellon Silicon Valley, SpikeSource, O'Reilly, Intel
  • License: assessment results licensed under a Creative Commons license
  • Assessment model: scientific; 2 levels of detail; predefined criteria: yes; technical/functional criteria: yes
  • Scoring model: flexible; scale of 1 to 5 per criterion; iterative process: yes; criteria weighting: yes
  • Comparison: no

OMM
  • Seniority: 2008
  • Original authors/sponsors: QualiPSo project, European Commission
  • License: Creative Commons Attribution-Share Alike 3.0 License
  • Assessment model: scientific; 3 levels of detail; predefined criteria: yes; technical/functional criteria: yes
  • Scoring model: flexible; scale of 1 to 4 per criterion; iterative process: yes; criteria weighting: yes
  • Comparison: no

References

  1. Stol, K.-J., Babar, M.A.: A Comparison Framework for Open Source Software Evaluation Methods. In: Proc. OSS 2010. IFIP AICT, vol. 319, pp. 389–394 (2010)
  2. Woods, D., Guliani, G.: Open Source for the Enterprise: Managing Risks, Reaping Rewards. O'Reilly Media, Inc., Sebastopol (2005)
  3. Taibi, D., Lavazza, L., Morasca, S.: OpenBQR: A Framework for the Assessment of OSS. In: Proc. Third IFIP WG 2.13 International Conference on Open Source Systems (OSS 2007), Limerick, Ireland (2007)
  4. http://www.qualipso.org/mosst-champion
  5. Koponen, T., Hotti, V.: Evaluation Framework for Open Source Software. In: Proc. International Conference on Software Engineering Research and Practice (SERP), Las Vegas, Nevada, USA, June 21–24 (2004)
  6. Sung, W.J., Kim, J.H., Rhew, S.Y.: A Quality Model for Open Source Software Selection. In: Proc. Sixth International Conference on Advanced Language Processing and Web Information Technology, Luoyang, Henan, China, pp. 515–519 (2007)
  7. Atos Origin: Method for Qualification and Selection of Open Source software (QSOS), version 1.6. Technical Report (2006)
  8. Cabano, M., Monti, C., Piancastelli, G.: Context-Dependent Evaluation Methodology for Open Source Software. In: Proc. Third IFIP WG 2.13 International Conference on Open Source Systems (OSS 2007), Limerick, Ireland, pp. 301–306 (2007)
  9. Ardagna, C.A., Damiani, E., Frati, F.: FOCSE: An OWA-based Evaluation Framework for OS Adoption in Critical Environments. In: Proc. Third IFIP WG 2.13 International Conference on Open Source Systems (OSS 2007), Limerick, Ireland, pp. 3–16 (2007)