Digitala Vetenskapliga Arkivet

Learning to Guide Automated Reasoning: A GNN-Based Framework
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems. ORCID iD: 0000-0002-4926-8089
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Description
Abstract [en]

Symbolic methods play a crucial role in reasoning tasks such as program verification, theorem proving, and constraint solving. These methods rely on heuristic-driven decision processes to efficiently explore large or infinite state spaces. Traditional heuristics, while effective, are manually designed and often struggle with generalization across different problem domains. This dissertation introduces a deep learning-based framework to replace or augment heuristic decision-making in symbolic methods, enabling adaptive and data-driven guidance.

The framework leverages Graph Neural Networks (GNNs) to learn structural patterns from symbolic expressions, formulating decision problems as classification or ranking tasks. We propose novel graph representations for Constrained Horn Clauses (CHCs) and word equations, capturing both syntactic and semantic properties to facilitate effective learning. Various GNN architectures, including Graph Convolutional Networks (GCNs) and Relational Hypergraph Neural Networks (R-HyGNNs), are evaluated for their suitability in different symbolic reasoning tasks.

We implement the framework in a CHC solver and a word equation solver, demonstrating that learning-based heuristics can improve solver efficiency by guiding key decision processes such as clause selection and branch selection. The dissertation also explores different strategies for integrating trained models into solvers, balancing computational overhead with performance gains through caching, hybrid heuristics, and selective model querying.
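The integration strategies named above (caching, hybrid heuristics, selective model querying) can be sketched in a few lines. The sketch below is illustrative only: `model_score`, `manual_heuristic`, and the clause-ID interface are hypothetical stand-ins, not the dissertation's actual solver API.

```python
from functools import lru_cache

def manual_heuristic(clause_id: int) -> float:
    """Cheap hand-written fallback score (hypothetical)."""
    return -float(clause_id)  # e.g. prefer earlier clauses

def model_score(clause_id: int) -> float:
    """Stand-in for an expensive GNN forward pass (hypothetical)."""
    return float(clause_id % 3)

@lru_cache(maxsize=None)
def cached_score(clause_id: int) -> float:
    """Caching: each clause is scored by the model at most once."""
    return model_score(clause_id)

def select_clause(candidates: list, query_model: bool) -> int:
    """Selective querying: consult the (cached) model only when the
    overhead is judged worthwhile; otherwise fall back to the manual
    heuristic. Returns the highest-scoring candidate."""
    score = cached_score if query_model else manual_heuristic
    return max(candidates, key=score)
```

The point of the design is that the model call sits behind a cache and a switch, so the solver pays the GNN's inference cost only on decisions where it is expected to help.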

Experimental results show that our framework consistently enhances solver performance, particularly in challenging problem domains. The findings suggest that deep learning can significantly improve symbolic reasoning by learning adaptive heuristics, paving the way for further integration of machine learning in formal methods.

Future research directions include extending the word equation solver, optimizing GNN architectures, and expanding training data sources.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2025, p. 62
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2510
Keywords [en]
Graph Neural Network, CHC Solver, Word Equation
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:uu:diva-551906
ISBN: 978-91-513-2414-2 (print)
OAI: oai:DiVA.org:uu-551906
DiVA, id: diva2:1944055
Public defence
2025-05-06, Häggsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, 13:15 (English)
Available from: 2025-04-08 Created: 2025-03-12 Last updated: 2025-04-08
List of papers
1. Exploring Representation of Horn clauses using GNNs
2022 (English). In: PAAR’22: 8th Workshop on Practical Aspects of Automated Reasoning / [ed] Boris Konev; Claudia Schon; Alexander Steen, Central Europe, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, the application of machine learning in program verification, and the embedding of programs to capture semantic information, has been recognised as an important tool by many research groups. Learning program semantics from raw source code is challenging due to the complexity of real-world programming language syntax and due to the difficulty of reconstructing long-distance relational information implicitly represented in programs using identifiers. Addressing the first point, we consider Constrained Horn Clauses (CHCs) as a standard representation of program verification problems, providing a simple and programming language-independent syntax. For the second challenge, we explore graph representations of CHCs, and propose a new Relational Hypergraph Neural Network (R-HyGNN) architecture to learn program features.

We introduce two different graph representations of CHCs. The first, the constraint graph (CG), emphasizes syntactic information of CHCs by translating the symbols in CHCs and their relations into typed nodes and binary edges, respectively, and constructing the constraints as abstract syntax trees. The second, the control- and data-flow hypergraph (CDHG), emphasizes semantic information of CHCs by representing control and data flow through ternary hyperedges.
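As a rough illustration of the CG idea, a constraint given as a nested-tuple abstract syntax tree can be flattened into a node list plus binary parent-child edges. The encoding below is a minimal sketch under that assumption, not the paper's exact node-typing scheme.

```python
def ast_to_graph(term):
    """Flatten a nested-tuple AST such as ('+', 'x', ('*', '2', 'y'))
    into (nodes, edges): one node per symbol, one directed edge per
    parent-child relation in the syntax tree."""
    nodes, edges = [], []

    def walk(t):
        head = t[0] if isinstance(t, tuple) else t
        idx = len(nodes)          # node index = position in the list
        nodes.append(head)
        if isinstance(t, tuple):
            for child in t[1:]:
                edges.append((idx, walk(child)))
        return idx

    walk(term)
    return nodes, edges
```

For example, `ast_to_graph(('>=', 'x', '0'))` yields three typed nodes and two edges from the operator to its operands; a GNN then propagates messages along exactly these edges.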

We then propose a new GNN architecture, R-HyGNN, extending Relational Graph Convolutional Networks, to handle hypergraphs. To evaluate the ability of R-HyGNN to extract semantic information from programs, we use R-HyGNNs to train models on the two graph representations, and on five proxy tasks with increasing difficulty, using benchmarks from CHC-COMP 2021 as training data. The most difficult proxy task requires the model to predict the occurrence of clauses in counter-examples, which subsumes satisfiability of CHCs. CDHG achieves 90.59% accuracy in this task. Furthermore, R-HyGNN has perfect predictions on one of the graphs consisting of more than 290 clauses. Overall, our experiments indicate that R-HyGNN can capture intricate program features for guiding verification problems.

Place, publisher, year, edition, pages
Central Europe, 2022
Series
CEUR workshop proceedings, ISSN 1613-0073 ; 3201
Keywords
Constraint Horn clauses, Graph Neural Networks, Automatic program verification
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-509886 (URN)
Conference
8th Workshop on Practical Aspects of Automated Reasoning, Haifa, Israel, August 11-12, 2022
Available from: 2023-08-23 Created: 2023-08-23 Last updated: 2025-03-12. Bibliographically approved
2. Boosting Constrained Horn Solving by Unsat Core Learning
2024 (English). In: Verification, Model Checking, and Abstract Interpretation / [ed] Rayna Dimitrova; Ori Lahav; Sebastian Wolff, Springer Nature, 2024, p. 280-302. Conference paper, Published paper (Refereed)
Abstract [en]

The Relational Hyper-Graph Neural Network (R-HyGNN) was introduced in [1] to learn domain-specific knowledge from program verification problems encoded in Constrained Horn Clauses (CHCs). It exhibits high accuracy in predicting the occurrence of CHCs in counterexamples. In this research, we present an R-HyGNN-based framework called MUSHyperNet. The goal is to predict the Minimal Unsatisfiable Subsets (MUSes) (i.e., the unsat core) of a set of CHCs to guide an abstract symbolic model checking algorithm. In MUSHyperNet, we can predict the MUSes once and use them in different instances of the abstract symbolic model checking algorithm. We demonstrate the efficacy of MUSHyperNet using two instances of the abstract symbolic model checking algorithm: Counter-Example Guided Abstraction Refinement (CEGAR) and symbolic model-checking-based (SymEx) algorithms. Our framework enhances performance on a uniform selection of benchmarks across all categories from CHC-COMP, solving more problems (a 6.1% increase for SymEx, 4.1% for CEGAR) and reducing average solving time (by 13.3% for SymEx, 7.1% for CEGAR).
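The predict-once, reuse-everywhere idea is simple to state in code: clause scores are computed once per problem and can then order the clauses for any downstream checking algorithm. The scores below are hard-coded placeholders; in MUSHyperNet they would come from the trained R-HyGNN.

```python
def prioritise(clauses, mus_scores):
    """Order clauses by predicted likelihood of belonging to an
    unsat core, so a model checker explores likely-core clauses
    first. mus_scores maps clause id -> predicted score in [0, 1]."""
    return sorted(clauses, key=lambda c: mus_scores[c], reverse=True)

# One prediction pass, shared by both algorithm instances (e.g. a
# CEGAR-style and a SymEx-style loop would each consume this order).
scores = {"c1": 0.2, "c2": 0.9, "c3": 0.5}   # placeholder scores
order = prioritise(["c1", "c2", "c3"], scores)
```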

Place, publisher, year, edition, pages
Springer Nature, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14499
Keywords
Automatic program verification, Constrained Horn clauses, Graph Neural Networks
National Category
Computer Sciences; Computer Systems
Identifiers
urn:nbn:se:uu:diva-524971 (URN)
10.1007/978-3-031-50524-9_13 (DOI)
001166801100013 ()
978-3-031-50523-2 (ISBN)
978-3-031-50524-9 (ISBN)
Conference
25th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), January 15-16, 2024, Inst Engn & Technol, London, England
Funder
Swedish Research Council, 2018-05973; Swedish Foundation for Strategic Research, RIT17-0011; Swedish National Infrastructure for Computing (SNIC); UPPMAX; Wallenberg Foundations, UPDATE
Available from: 2024-03-15 Created: 2024-03-15 Last updated: 2025-03-12. Bibliographically approved
3. Guiding Word Equation Solving using Graph Neural Networks
2024 (English). In: Automated Technology for Verification and Analysis: 22nd International Symposium on Automated Technology for Verification and Analysis (ATVA 2024), Kyoto, Japan, October 21–25, 2024 / [ed] S. Akshay; Aina Niemetz; Sriram Sankaranarayanan, Cham: Springer, 2024, p. 279-301. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a Graph Neural Network-guided algorithm for solving word equations, based on the well-known Nielsen transformation for splitting equations. The algorithm iteratively rewrites the first terms of each side of an equation, giving rise to a tree-like search space. The choice of path at each split point of the tree significantly impacts solving time, motivating the use of Graph Neural Networks (GNNs) for efficient split decision-making. Split decisions are encoded as multi-classification tasks, and five graph representations of word equations are introduced to encode their structural information for GNNs. The algorithm is implemented as a solver named DragonLi. Experiments are conducted on artificial and real-world benchmarks. The algorithm performs particularly well on satisfiable problems. For single word equations, DragonLi can solve significantly more problems than well-established string solvers. For the conjunction of multiple word equations, DragonLi is competitive with state-of-the-art string solvers.
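The Nielsen-style splitting that this search is built on can be sketched as a depth-bounded recursive solver. Uppercase characters stand for variables, lowercase for letters; the branch order is fixed here, whereas in the paper a GNN ranks the branches at each split point. This is an illustrative sketch, not DragonLi's implementation.

```python
def solve(lhs: str, rhs: str, depth: int = 25) -> bool:
    """Depth-bounded satisfiability check for the word equation
    lhs = rhs via Nielsen splitting. False means 'no solution found
    within the bound', not a proof of unsatisfiability."""
    if depth == 0:
        return False
    if not lhs and not rhs:
        return True
    if not lhs or not rhs:
        # The non-empty side is solvable iff it is all variables,
        # which can then all map to the empty word.
        return all(c.isupper() for c in (lhs or rhs))
    a, b = lhs[0], rhs[0]
    if a == b:                        # identical first symbols cancel
        return solve(lhs[1:], rhs[1:], depth - 1)
    if a.islower() and b.islower():   # two distinct letters clash
        return False
    branches = []
    if a.isupper():                   # split: X = empty  or  X -> bX
        branches += [(a, ""), (a, b + a)]
    if b.isupper():                   # split: Y = empty  or  Y -> aY
        branches += [(b, ""), (b, a + b)]
    return any(
        solve(lhs.replace(var, repl), rhs.replace(var, repl), depth - 1)
        for var, repl in branches
    )
```

Each split rewrites the first symbol of one side, so the search space is a tree whose branching points are exactly the decisions the paper hands to the GNN.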

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15054
Keywords
Word equation, Graph neural network, String theory
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-549607 (URN)
10.1007/978-3-031-78709-6_14 (DOI)
978-3-031-78708-9 (ISBN)
978-3-031-78709-6 (ISBN)
Conference
22nd International Symposium on Automated Technology for Verification and Analysis
Funder
Swedish Research Council, 022-06725; Swedish Research Council, 2021-06327; Swedish Foundation for Strategic Research, RIT17-0011; Knut and Alice Wallenberg Foundation
Available from: 2025-02-05 Created: 2025-02-05 Last updated: 2025-03-12. Bibliographically approved
4. When GNNs Met a Word Equations Solver: Learning to Rank Equations
2025 (English). Manuscript (preprint) (Other academic)
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-549609 (URN)
Available from: 2025-03-02 Created: 2025-03-02 Last updated: 2025-03-18

Open Access in DiVA

UUThesis_C-Liang-2025 (834 kB), 87 downloads
File information
File name: FULLTEXT02.pdf
File size: 834 kB
Checksum (SHA-512): ca010fe168c1dd9d28893231d47a99a55de6420a6fb9ebe785f447900ab45f2c7de424654fdf92bab896c723bc25154b98541b5c543e3fcb36d9f2581cfd9e72
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Liang, Chencheng
By organisation
Computer Systems
Computer Sciences

Total: 87 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
