Digitala Vetenskapliga Arkivet

An evidence-based neuro-symbolic framework for ambiguous image scene classification
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0009-0009-0072-7342
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0003-3128-191x
EluciDATA Lab, Sirris, Belgium.
2025 (English). In: Proceedings of Machine Learning Research / [ed] Gilpin L.H., Giunchiglia E., Hitzler P., van Krieken E., ML Research Press, 2025, Vol. 284. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we propose a novel neuro-symbolic approach to deal with the inherent ambiguity in image scene classification, combining pre-trained deep learning (DL) models with concepts from modal logic and evidence theory. The DL models are used to detect objects and estimate their depth in a set of labeled images. The obtained outputs are employed to form a dataset of instances characterizing the possible classes. Subsequently, a multi-valued mapping is defined between the data instances and the considered images, resulting in each image being represented by the set of instances associated with it. The obtained mapping is utilized to infer necessity and possibility conditions for each class, or equivalently its upper (plausibility) and lower (belief) probabilities. Based on these interval evaluations, a rule-based and a score-based classifier are built. The overall method is explainable, directly interpretable, and robust to data scarcity and data imbalance. The presented framework is studied and evaluated on an abandoned-bag detection use case.
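The belief/plausibility interval mentioned in the abstract follows the classical construction of upper and lower probabilities induced by a multi-valued mapping (Dempster-style): given a probability mass over source instances and a mapping from each instance to the set of classes it is compatible with, the belief of a hypothesis sums the masses of instances that necessarily support it, while the plausibility sums the masses of instances that possibly support it. The sketch below is purely illustrative — the function name, instance names, and class labels are hypothetical and not taken from the authors' implementation:

```python
def belief_plausibility(mass, mapping, hypothesis):
    """Lower (belief) and upper (plausibility) probability of a hypothesis
    induced by a multi-valued mapping.

    mass:       dict, source instance -> probability mass
    mapping:    dict, source instance -> set of compatible classes
    hypothesis: set of classes being evaluated
    """
    # Belief: mass of instances whose image is non-empty and entirely
    # contained in the hypothesis (they *necessarily* support it).
    bel = sum(m for w, m in mass.items()
              if mapping[w] and mapping[w] <= hypothesis)
    # Plausibility: mass of instances whose image intersects the
    # hypothesis (they *possibly* support it).
    pl = sum(m for w, m in mass.items() if mapping[w] & hypothesis)
    return bel, pl

# Toy example: three detected-object instances, each compatible with a
# subset of scene classes (class labels are invented for illustration).
mass = {"inst1": 0.5, "inst2": 0.25, "inst3": 0.25}
mapping = {
    "inst1": {"abandoned_bag"},                  # unambiguous
    "inst2": {"abandoned_bag", "attended_bag"},  # ambiguous
    "inst3": {"attended_bag"},
}
bel, pl = belief_plausibility(mass, mapping, {"abandoned_bag"})
# bel = 0.5 (only inst1 necessarily supports the class)
# pl  = 0.75 (inst1 and inst2 possibly support it)
```

A rule-based classifier in this spirit could, for instance, assert a class when its belief exceeds a threshold, and flag the image as ambiguous when the [belief, plausibility] interval is wide.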

Place, publisher, year, edition, pages
ML Research Press, 2025. Vol. 284
Series
Proceedings of Machine Learning Research (PMLR), E-ISSN 2640-3498
Keywords [en]
Computer Vision, Evidence Theory, Image Scene Classification, Modal Logic, Multi-valued Mapping, Classification (of information), Computer circuits, Deep learning, Image classification, Mapping, Condition, Evidence theories, Evidence-based, Labeled images, Learning models, Logic theory, Multivalued mappings, Rule based
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:bth-28865
Scopus ID: 2-s2.0-105020237323
ISBN: 9781713845065 (print)
OAI: oai:DiVA.org:bth-28865
DiVA, id: diva2:2012151
Conference
19th Conference on Neurosymbolic Learning and Reasoning, NeSy 2025, Santa Cruz, Sept 8-10, 2025
Part of project
HINTS - Human-Centered Intelligent Realities
Funder
Knowledge Foundation, 20220068
Available from: 2025-11-07 Created: 2025-11-07 Last updated: 2025-11-07
Bibliographically approved

Open Access in DiVA

fulltext (1457 kB), 22 downloads
File information
File name: FULLTEXT01.pdf
File size: 1457 kB
Checksum (SHA-512): 42bb19b78392af2db53290c298d949301e9a1cb95851ee9842c1a62f15141e4bb3b7e700d128d2988a0bae6ca4d333ca76f8a5eb5b8832b6d19f16d37b27991f
Type: fulltext
Mimetype: application/pdf
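The SHA-512 checksum above can be used to verify the integrity of a downloaded copy of the full text. A minimal sketch using Python's standard `hashlib` module (the local file name `FULLTEXT01.pdf` is assumed from the record; adjust the path to wherever the PDF was saved):

```python
import hashlib

def sha512_hex(path, chunk_size=1 << 20):
    """Compute the SHA-512 hex digest of a file, streaming in chunks
    so large PDFs do not need to fit in memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Checksum published in the DiVA record above.
expected = (
    "42bb19b78392af2db53290c298d949301e9a1cb95851ee9842c1a62f15141e4b"
    "b3b7e700d128d2988a0bae6ca4d333ca76f8a5eb5b8832b6d19f16d37b27991f"
)
# Uncomment after downloading the file:
# assert sha512_hex("FULLTEXT01.pdf") == expected
```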

Search in DiVA
By author/editor: Murtas, Giulia; Boeva, Veselka
By organisation: Department of Computer Science
National category: Computer graphics and computer vision
