
Counterexample-guided abstraction refinement

From Wikipedia, the free encyclopedia


Counterexample-guided abstraction refinement (CEGAR) is a technique for symbolic model checking.[1][2] It is also applied in modal logic tableau calculi algorithms to optimise their efficiency.[3] In computer-aided verification and analysis of programs, models of computation often consist of states. The number of states needed to model even a small program, however, can be enormous, a difficulty known as the state explosion problem.[4] CEGAR addresses this problem with two stages: abstraction, which simplifies a model by grouping states, and refinement, which increases the precision of the abstraction so that it more closely resembles the original model.
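For illustration, the following sketch (in Python, not taken from the cited sources) shows one simple way of grouping states: concrete states are mapped onto abstract states by a grouping function h, and an abstract transition is added whenever some pair of grouped concrete states has a transition. The names h, concrete_states and concrete_transitions are illustrative placeholders.

    def abstract_model(concrete_states, concrete_transitions, h):
        """Build the abstract states and transitions induced by the grouping h."""
        abstract_states = {h(s) for s in concrete_states}
        abstract_transitions = {(h(s), h(t)) for (s, t) in concrete_transitions}
        return abstract_states, abstract_transitions

    # Example: states 0..5 with transitions i -> i+1, grouped by parity.
    states = range(6)
    transitions = {(i, i + 1) for i in range(5)}
    print(abstract_model(states, transitions, h=lambda s: s % 2))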

If a desired property for a program is not satisfied in the abstract model, a counterexample is generated. The CEGAR process then checks whether the counterexample is spurious, i.e., if the counterexample also applies for the actual program. If so, the process finds a bug in the program. Otherwise, the counterexample is due to inadequate precision of the abstraction. In this case, refinement is performed.[5] The iterative procedure terminates either if a bug is found or when the abstraction has been refined to the extent that it is equivalent to the original model.
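This loop can be sketched schematically as follows (in Python). The helper functions build_abstraction, model_check, is_spurious and refine stand for whichever abstraction, model-checking and refinement procedures a particular tool uses; they are illustrative placeholders rather than the API of any specific implementation.

    def cegar(program, prop, abstraction, build_abstraction,
              model_check, is_spurious, refine):
        while True:
            abstract_model = build_abstraction(program, abstraction)
            holds, counterexample = model_check(abstract_model, prop)
            if holds:
                return "property holds"               # no abstract counterexample
            if not is_spurious(counterexample, program):
                return ("bug found", counterexample)  # genuine counterexample
            # The counterexample is spurious: refine the abstraction so that
            # it is excluded, then re-check the more precise abstract model.
            abstraction = refine(abstraction, counterexample)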

References

  1. ^ Clarke, Edmund; Grumberg, Orna; Jha, Somesh; Lu, Yuan; Veith, Helmut (1 September 2003). "Counterexample-guided abstraction refinement for symbolic model checking". Journal of the ACM. 50 (5): 752–794. doi:10.1145/876638.876643.
  2. ^ Clarke, Edmund; Grumberg, Orna; Jha, Somesh; Lu, Yuan; Veith, Helmut (2000). Counterexample-Guided Abstraction Refinement. International Conference on Computer Aided Verification CAV 2000: Computer Aided Verification. Lecture Notes in Computer Science. Vol. 1855. Berlin, Heidelberg: Springer. pp. 154–169. doi:10.1007/10722167_15. ISBN 978-3-540-45047-4.
  3. ^ Goré, Rajeev; Kikkert, Cormac (6 September 2021). CEGAR-Tableaux: Improved Modal Satisfiability via Modal Clause-Learning and SAT. Automated Reasoning with Analytic Tableaux and Related Methods: 30th International Conference, TABLEAUX 2021, Birmingham, UK. Lecture Notes in Computer Science. Vol. 12842. Cham: Springer. pp. 74–91. doi:10.1007/978-3-030-86059-2_5. ISBN 978-3-030-86059-2.
  4. ^ Valmari, Antti (1998). The State Explosion Problem. Advanced Course on Petri Nets, ACPN 1996. Lecture Notes in Computer Science. Vol. 1491. Berlin, Heidelberg: Springer. pp. 429–528. ISBN 978-3-540-49442-3. Retrieved 27 December 2023.
  5. ^ Clarke, Edmund; Klieber, William; Nováček, Miloš; Zuliani, Paolo (2011). Model Checking and the State Explosion Problem. LASER Summer School on Software Engineering: LASER 2011. Lecture Notes in Computer Science. Vol. 7682. Berlin, Heidelberg: Springer. pp. 1–30. doi:10.1007/978-3-642-35746-6_1. ISBN 978-3-642-35746-6. Retrieved 27 December 2023.