Digitala Vetenskapliga Arkivet

251 - 300 of 3122
  • 251.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computerized Cell Image Processing in Healthcare, 2005. In: Proceedings of Healthcomm2005, 2005, p. 11-17. Conference paper (Refereed)
    Abstract [en]

    The visual interpretation of images is at the core of most medical diagnostic procedures, and the final decision for many diseases, including cancer, is based on microscopic examination of cells and tissues. Through screening of cell samples, the incidence and mortality of cervical cancer have been reduced significantly. The visual interpretation is, however, tedious and in many cases error-prone. Therefore, many attempts have been made to use computers to supplement or replace human visual inspection and to automate some of the more tedious visual screening tasks. Computers and computer networks have also been used to manage, store, transmit and display images of cells and tissues, making it possible to visually analyze cells from remote locations. In this presentation these developments are traced from their very beginning, through the present situation, and into the future.

  • 252.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Recognizing signs of malignancy: The quest for computer assisted cancer screening and diagnosis systems, 2010. In: International Conference on Computational Intelligence and Computing Research (ICCIC), 2010 IEEE, Coimbatore, India: IEEE Digital Library, 2010, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    Almost all cancers are diagnosed through visual examination of microscopic tissue samples. Visual screening of cell samples, so-called Pap smears, has drastically reduced the incidence of cervical cancers in countries that have implemented population-wide screening programs. But the visual examination is tedious, subjective and expensive. There has therefore been much research aiming at computer-assisted or automated cell image analysis systems for cancer detection and diagnosis. Progress has been made, but most of cytology and pathology is still done visually. In this presentation I will discuss some of the major issues involved, examine some of the proposed solutions and give some comments about the state of the art.

  • 253.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Curic, Vladimir
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wadelius, Lena
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wernersson, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Centre for Image Analysis Annual Report 2009, 2010. Collection (editor) (Other academic)
  • 254.
    Bengtsson, Ewert
    et al.
    Uppsala University.
    Dahlqvist, Bengt
    Uppsala University.
    Eriksson, Olle
    Uppsala University.
    Nordin, Bo
    Uppsala University.
    Jarkrans, Torsten
    Uppsala University.
    Stenkvist, Björn
    Computer-assisted Scanning Microscopy in Cytology, 1982. In: Proceedings of the IEEE International Symposium on Medical Imaging and Image Interpretation, 1982, p. 497-503. Conference paper (Refereed)
  • 255.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Norell, Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wadelius, Lena
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wernersson, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Annual Report 2008, 2009. Collection (editor) (Other academic)
  • 256.
    Bengtsson, Ewert
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Rodenacker, Karsten
    A feature set for cytometry on digitized microscopic images, 2003. In: Analytical Cellular Pathology, Vol. 24, no 1, p. 1-36. Article in journal (Refereed)
  • 257.
    Bengtsson, Ewert
    et al.
    Wählby, Carolina
    Lindblad, Joakim
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Robust Cell Image Segmentation Methods, 2003. Conference paper (Refereed)
  • 258.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Robust cell image segmentation methods, 2004. In: Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications, ISSN 1054-6618, Vol. 14, no 2, p. 157-167. Article in journal (Refereed)
    Abstract [en]

    Biomedical cell image analysis is one of the main application fields of computerized image analysis. This paper outlines the field and the different analysis steps related to it. Relative advantages of different approaches to the crucial step of image segmentation are discussed. Cell image segmentation can be seen as a modeling problem where different approaches are more or less explicitly based on cell models. For example, thresholding methods can be seen as being based on a model stating that cells have an intensity that is different from the surroundings. More robust segmentation can be obtained if a combination of features, such as intensity, edge gradients, and cellular shape, is used. The seeded watershed transform is proposed as the most useful tool for incorporating such features into the cell model. These concepts are illustrated by three real-world problems.
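
    As a hedged illustration of the seeded watershed idea discussed in this abstract, the sketch below combines an intensity threshold with a distance-transform shape cue using scikit-image; it is a generic example, not the authors' implementation, and the smoothing and peak-distance parameters are arbitrary assumptions.

```python
# Minimal seeded-watershed sketch for blob-like cells (illustrative only).
# Assumes a grayscale image in which cells are brighter than the background.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(image):
    smoothed = gaussian(image, sigma=2)            # suppress noise
    mask = smoothed > threshold_otsu(smoothed)     # intensity cue: cells differ from background
    distance = ndi.distance_transform_edt(mask)    # shape cue: one peak per roughly convex cell
    peaks = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)  # seeded watershed within the mask
```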

  • 259.
    Bengtsson, Lars
    et al.
    Halmstad University, School of Information Technology.
    Svensson, Bertil
    Halmstad University, School of Information Technology. Chalmers University of Technology, Gothenburg, Sweden.
    Wiberg, Per-Arne
    Halmstad University, School of Information Technology.
    Brains for Autonomous Robots: Hardware and Software Tools, 1994. In: Proceedings of PerAc '94. From Perception to Action / [ed] P. Gaussier; J-D. Nicoud, Los Alamitos: IEEE, 1994, p. 436-439. Conference paper (Refereed)
    Abstract [en]

    This paper presents a hardware architecture and a software tool needed for future autonomous robots. Specific attention is given to the execution of artificial neural networks and to the need for a good inspection and visualization tool when developing such systems. Achievable performance using state-of-the-art technology is estimated and module miniaturization issues are discussed. © 1994 IEEE.

  • 260.
    Bengtsson, Ola
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Baerveldt, Albert-Jan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Localization in changing environments by matching laser range scans, 1999. In: 1999 Third European Workshop on Advanced Mobile Robots (Eurobot'99): Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 1999, p. 169-176. Conference paper (Refereed)
    Abstract [en]

    We present a novel scan matching algorithm, IDC-S (Iterative Dual Correspondence-Sector), that matches range scans. The algorithm is based on the known Iterative Dual Correspondence (IDC) algorithm, which has shown good performance in real environments. The improvement is that IDC-S is able to deal with relatively large changes in the environment. It divides the scan into several sectors, detects and removes those sectors that have changed, and matches the scans using only the unchanged sectors. IDC-S and other variants of IDC are extensively simulated and evaluated. The simulations show that IDC-S is very robust and can localize in many different kinds of environments. We also show that it is possible to effectively combine the existing IDC algorithms with IDC-S, thus obtaining an algorithm that performs very well in rectilinear as well as nonrectilinear environments, even when the environment has changed by as much as 65%. © 1999 IEEE.
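
    The sector idea summarized above can be sketched briefly; the snippet below only illustrates splitting a scan into angular sectors and discarding changed ones, with an assumed per-sector threshold, and is not the authors' IDC-S implementation.

```python
# Illustrative sketch of the sector step in an IDC-S-style matcher.
import numpy as np

def unchanged_sectors(ref_ranges, cur_ranges, n_sectors=8, max_mean_diff=0.3):
    """Return a boolean mask over scan points that lie in unchanged sectors.

    ref_ranges, cur_ranges: 1-D arrays of equal length (ranges in metres).
    max_mean_diff: assumed per-sector change threshold, in metres.
    """
    n = len(ref_ranges)
    keep = np.zeros(n, dtype=bool)
    for s in range(n_sectors):
        idx = slice(s * n // n_sectors, (s + 1) * n // n_sectors)
        if np.mean(np.abs(ref_ranges[idx] - cur_ranges[idx])) < max_mean_diff:
            keep[idx] = True        # sector considered unchanged: keep its points
    return keep

# The surviving points would then be fed to an ordinary IDC-style point matcher.
```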

  • 261.
    Benhamza, Hiba
    et al.
    Mohamed Khider University, DZA.
    Djeffal, Abdelhamid
    Mohamed Khider University, DZA.
    Cheddad, Abbas
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Image forgery detection review, 2021. In: Proceedings - 2021 International Conference on Information Systems and Advanced Technologies, ICISAT 2021, Institute of Electrical and Electronics Engineers Inc., 2021. Conference paper (Refereed)
    Abstract [en]

    With the widespread use of digital documents in administrations, the fabrication and use of forged documents has become a serious problem. This paper presents a study and classification of the most important works on image and document forgery detection. The classification is based on document type, forgery type, detection method, validation dataset, evaluation metrics and obtained results. Most existing forgery detection works deal with images; few of them analyze administrative documents and go deeper to analyze their contents. © 2021 IEEE.

  • 262.
    Benkowski, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Technology, Department of Electrical Engineering, Electricity.
    Cutting Tool Container Inspection: Stereo vision and monocular artificial intelligence depth estimation at Sandvik Coromant, 2024. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This thesis explores and evaluates solutions for the inspection of cutting tool containers at Sandvik Coromant, focusing on the transition from current vision systems utilizing infrared (IR) light to new methods compatible with recycled polypropylene (PP) plastic containers.

    The primary goal is to evaluate the effectiveness of stereo vision and artificial intelligence (AI) for depth estimation, ensuring that the containers are properly populated with cutting tools. Various methods and algorithms are tested to determine their accuracy and speed, to meet the time requirements of the production line at Sandvik Coromant.

    The results indicate that, while traditional IR-based systems excel in processing speed and robustness, monocular artificial intelligence methods offer adaptability that could be utilized with the new container material. Future work will involve further optimization and real-world testing to confirm these findings. 

  • 263.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Detection and Tracking in Thermal Infrared Imagery, 2016. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Thermal cameras have historically been of interest mainly for military applications. Increasing image quality and resolution combined with decreasing price and size during recent years have, however, opened up new application areas. They are now widely used for civilian applications, e.g., within industry, to search for missing persons, in automotive safety, as well as for medical applications. Thermal cameras are useful as soon as it is possible to measure a temperature difference. Compared to cameras operating in the visual spectrum, they are advantageous due to their ability to see in total darkness, robustness to illumination variations, and less intrusion on privacy.

    This thesis addresses the problem of detection and tracking in thermal infrared imagery. Visual detection and tracking of objects in video are research areas that have been and currently are subject to extensive research. Indications of their popularity are recent benchmarks such as the annual Visual Object Tracking (VOT) challenges, the Object Tracking Benchmarks, the series of workshops on Performance Evaluation of Tracking and Surveillance (PETS), and the workshops on Change Detection. Benchmark results indicate that detection and tracking are still challenging problems.

    A common belief is that detection and tracking in thermal infrared imagery is identical to detection and tracking in grayscale visual imagery. This thesis argues that the preceding allegation is not true. The characteristics of thermal infrared radiation and imagery pose certain challenges to image analysis algorithms. The thesis describes these characteristics and challenges as well as presents evaluation results confirming the hypothesis.

    Detection and tracking are often treated as two separate problems. However, some tracking methods, e.g. template-based tracking methods, base their tracking on repeated specific detections. They learn a model of the object that is adaptively updated. That is, detection and tracking are performed jointly. The thesis includes a template-based tracking method designed specifically for thermal infrared imagery, describes a thermal infrared dataset for evaluation of template-based tracking methods, and provides an overview of the first challenge on short-term, single-object tracking in thermal infrared video. Finally, two applications employing detection and tracking methods are presented.

  • 264.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Learning to Analyze what is Beyond the Visible Spectrum, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Thermal cameras have historically been of interest mainly for military applications. Increasing image quality and resolution combined with decreasing camera price and size during recent years have, however, opened up new application areas. They are now widely used for civilian applications, e.g., within industry, to search for missing persons, in automotive safety, as well as for medical applications. Thermal cameras are useful as soon as there exists a measurable temperature difference. Compared to cameras operating in the visual spectrum, they are advantageous due to their ability to see in total darkness, robustness to illumination variations, and less intrusion on privacy.

    This thesis addresses the problem of automatic image analysis in thermal infrared images with a focus on machine learning methods. The main purpose of this thesis is to study the variations of processing required due to the thermal infrared data modality. In particular, three different problems are addressed: visual object tracking, anomaly detection, and modality transfer. All these are research areas that have been and currently are subject to extensive research. Furthermore, they are all highly relevant for a number of different real-world applications.

    The first addressed problem is visual object tracking, a problem for which no prior information other than the initial location of the object is given. The main contribution concerns benchmarking of short-term single-object (STSO) visual object tracking methods in thermal infrared images. The proposed dataset, LTIR (Linköping Thermal Infrared), was integrated into the VOT-TIR2015 challenge, introducing the first ever organized challenge on STSO tracking in thermal infrared video. Another contribution also related to benchmarking is a novel, recursive method for semi-automatic annotation of multi-modal video sequences. Based on only a few initial annotations, a video object segmentation (VOS) method proposes segmentations for all remaining frames, and difficult parts in need of additional manual annotation are automatically detected. The third contribution to the problem of visual object tracking is a template tracking method based on a non-parametric probability density model of the object's thermal radiation using channel representations.

    The second addressed problem is anomaly detection, i.e., detection of rare objects or events. The main contribution is a method for truly unsupervised anomaly detection based on Generative Adversarial Networks (GANs). The method employs joint training of the generator and an observation to latent space encoder, enabling stratification of the latent space and, thus, also separation of normal and anomalous samples. The second contribution is the previously unaddressed problem of obstacle detection in front of moving trains using a train-mounted thermal camera. Adaptive correlation filters are updated continuously and missed detections of background are treated as detections of anomalies, or obstacles. The third contribution to the problem of anomaly detection is a method for characterization and classification of automatically detected district heat leakages for the purpose of false alarm reduction.

    Finally, the thesis addresses the problem of modality transfer between thermal infrared and visual spectrum images, a previously unaddressed problem. The contribution is a method based on Convolutional Neural Networks (CNNs), enabling perceptually realistic transformations of thermal infrared to visual images. By careful design of the loss function the method becomes robust to image pair misalignments. The method exploits the lower acuity for color differences than for luminance possessed by the human visual system, separating the loss into a luminance and a chrominance part.
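
    The luminance/chrominance split of the loss mentioned in the last paragraph can be illustrated with a small sketch; the BT.601 conversion and the weights below are assumptions for the example, not the exact loss used in the thesis.

```python
# Sketch of a loss separated into luminance and chrominance terms.
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: array (..., 3) with values in [0, 1]; approximate BT.601 conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def luma_chroma_loss(pred_rgb, target_rgb, w_luma=1.0, w_chroma=0.25):
    yp, cbp, crp = rgb_to_ycbcr(pred_rgb)
    yt, cbt, crt = rgb_to_ycbcr(target_rgb)
    luma = np.abs(yp - yt).mean()
    chroma = np.abs(cbp - cbt).mean() + np.abs(crp - crt).mean()
    # The chrominance term is down-weighted, reflecting the eye's lower acuity
    # for colour differences than for luminance.
    return w_luma * luma + w_chroma * chroma
```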

  • 265.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Classifying district heating network leakages in aerial thermal imagery, 2014. Conference paper (Other academic)
    Abstract [en]

    In this paper we address the problem of automatically detecting leakages in underground pipes of district heating networks from images captured by an airborne thermal camera. The basic idea is to classify each relevant image region as a leakage if its temperature exceeds a threshold. This simple approach yields a significant number of false positives. We propose to address this issue by machine learning techniques and provide extensive experimental analysis on real-world data. The results show that this postprocessing step significantly improves the usefulness of the system.
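
    The two-stage idea above (temperature thresholding followed by a learned classifier) can be sketched generically; the region features and the random-forest classifier below are illustrative assumptions, not necessarily what the paper uses.

```python
# Generic sketch: threshold the thermal image into candidate regions, describe
# each region with simple statistics, and classify it as leak / false alarm.
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def candidate_regions(thermal, temp_threshold):
    labels, n_regions = ndi.label(thermal > temp_threshold)
    return labels, n_regions

def region_features(thermal, labels, region_id):
    values = thermal[labels == region_id]
    return [values.size, values.mean(), values.max(), values.std()]

clf = RandomForestClassifier(n_estimators=100)
# Training data (features of verified detections with true/false labels) would
# come from manually checked flights:
#   clf.fit(X_train, y_train)
# At run time, only candidates with clf.predict([features])[0] == 1 are kept.
```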

  • 266.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A thermal infrared dataset for evaluation of short-term tracking methods, 2015. Conference paper (Other academic)
    Abstract [en]

    During recent years, thermal cameras have decreased in both size and cost while improving image quality. The area of use for such cameras has expanded with many exciting applications, many of which require tracking of objects. While being subject to extensive research in the visual domain, tracking in thermal imagery has historically been of interest mainly for military purposes. The available thermal infrared datasets for evaluating methods addressing these problems are few, and the existing ones are not challenging enough for today’s tracking algorithms. Therefore, we hereby propose a thermal infrared dataset for evaluation of short-term tracking methods. The dataset consists of 20 sequences which have been collected from multiple sources, and the data format used is in accordance with the Visual Object Tracking (VOT) Challenge.

  • 267.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A Thermal Object Tracking Benchmark, 2015. Conference paper (Refereed)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different than in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differ between the visual and thermal benchmarks, confirming the need for the new benchmark.

  • 268.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Channel Coded Distribution Field Tracking for Thermal Infrared Imagery, 2016. In: Proceedings of 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2016), IEEE, 2016, p. 1248-1256. Conference paper (Refereed)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. The fast progress has been possible thanks to the development of new template-based tracking methods with online template updates, methods which have not been explored for TIR tracking. Instead, tracking methods used for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. In order to avoid background contamination of the object template, we propose to exploit background information for the online template update and to adaptively select the object region used for tracking. Moreover, we propose a novel method for estimating object scale change. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Further, the proposed tracker, ABCD, and the VOT-TIR2015 winner SRDCFir are evaluated on maritime data. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.
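
    As a rough illustration of the distribution-field idea referred to above, the sketch below uses hard intensity binning and Gaussian smoothing; the channel-coded version in the paper is smoother, and the bin count and smoothing width here are assumptions.

```python
# Simplified distribution field: explode a grayscale patch into intensity
# layers, smooth each layer spatially, and compare fields with an L1 distance.
import numpy as np
from scipy.ndimage import gaussian_filter

def distribution_field(patch, n_bins=8, sigma=2.0):
    """patch: 2-D float array with values in [0, 1]."""
    bins = np.clip((patch * n_bins).astype(int), 0, n_bins - 1)
    field = np.zeros((n_bins,) + patch.shape)
    for b in range(n_bins):
        field[b] = gaussian_filter((bins == b).astype(float), sigma=sigma)
    return field

def df_distance(template_field, candidate_field):
    return np.abs(template_field - candidate_field).mean()

# A tracker would slide the template over a search window, pick the location
# with the smallest df_distance, and update the template online.
```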

  • 269.
    Berg, Amanda
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Enhanced analysis of thermographic images for monitoring of district heat pipe networks, 2016. In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 83, no 2, p. 215-223. Article in journal (Refereed)
    Abstract [en]

    We address two problems related to large-scale aerial monitoring of district heating networks. First, we propose a classification scheme to reduce the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps: (a) using a building segmentation scheme to remove detections on buildings, and (b) using a machine learning approach to classify the remaining detections as true or false leakages. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system. Second, we propose a method for characterization of leakages over time, i.e., repeating the image acquisition one or a few years later and indicating areas that suffer from an increased energy loss. We address the problem of finding trends in the degradation of pipe networks in order to plan for long-term maintenance, and propose a visualization scheme exploiting the consecutive data collections.

  • 270.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Generating Visible Spectrum Images from Thermal Infrared, 2018. In: Proceedings 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops CVPRW 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 1224-1233. Conference paper (Refereed)
    Abstract [en]

    Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB) images, is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing so-called colorization methods for grayscale-to-RGB transformation cannot be applied to TIR images directly, since those methods only estimate the chrominance and not the luminance. In the absence of conventional colorization methods, we propose two fully automatic TIR to visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively.

  • 271.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Object Tracking in Thermal Infrared Imagery based on Channel Coded Distribution Fields, 2017. Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 272.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Unsupervised Adversarial Learning of Anomaly Detection in the Wild, 2020. In: Proceedings of the 24th European Conference on Artificial Intelligence (ECAI) / [ed] Giuseppe De Giacomo, Alejandro Catala, Bistra Dilkina, Michela Milano, Senén Barro, Alberto Bugarín, Jérôme Lang, Amsterdam: IOS Press, 2020, Vol. 325, p. 1002-1008. Conference paper (Refereed)
    Abstract [en]

    Unsupervised learning of anomaly detection in high-dimensional data, such as images, is a challenging problem recently subject to intense research. Through careful modelling of the data distribution of normal samples, it is possible to detect deviant samples, so called anomalies. Generative Adversarial Networks (GANs) can model the highly complex, high-dimensional data distribution of normal image samples, and have been shown to be a suitable approach to the problem. Previously published GAN-based anomaly detection methods often assume that anomaly-free data is available for training. However, this assumption is not valid in most real-life scenarios, a.k.a. in the wild. In this work, we evaluate the effects of anomaly contaminations in the training data on state-of-the-art GAN-based anomaly detection methods. As expected, detection performance deteriorates. To address this performance drop, we propose to add an additional encoder network already at training time and show that joint generator-encoder training stratifies the latent space, mitigating the problem with contaminated data. We show experimentally that the norm of a query image in this stratified latent space becomes a highly significant cue to discriminate anomalies from normal data. The proposed method achieves state-of-the-art performance on CIFAR-10 as well as on a large, previously untested dataset with cell images.
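
    The latent-norm cue described above reduces to a very small scoring step; the sketch below assumes some encoder trained jointly with the generator (hypothetical here) and an arbitrary decision threshold.

```python
# Minimal sketch of anomaly scoring by the norm of the encoded latent vector.
import numpy as np

def anomaly_score(encode, image, threshold=3.0):
    """encode: callable mapping an image to a 1-D latent vector (hypothetical);
    threshold: assumed decision threshold on the latent norm."""
    z = np.asarray(encode(image))
    score = float(np.linalg.norm(z))
    return score, score > threshold
```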

  • 273.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Visual Spectrum Image Generation from Thermal Infrared, 2019. Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 274.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    An Overview of the Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge, 2016. Conference paper (Other academic)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking (VOT-TIR2015) Challenge was organized in conjunction with ICCV2015. It was the first benchmark on short-term, single-target tracking in thermal infrared (TIR) sequences. The challenge aimed at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. It was based on the VOT2013 Challenge, but introduced the following novelties: (i) the utilization of the LTIR (Linköping TIR) dataset, (ii) adaptation of the VOT2013 attributes to thermal data, (iii) a similar evaluation to that of VOT2015. This paper provides an overview of the VOT-TIR2015 Challenge as well as the results of the 24 participating trackers.

  • 275.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Johnander, Joakim
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Zenuity AB, Göteborg, Sweden.
    Durand de Gevigney, Flavie
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Grenoble INP, France.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Semi-automatic Annotation of Objects in Visual-Thermal Video, 2019. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2019. Conference paper (Refereed)
    Abstract [en]

    Deep learning requires large amounts of annotated data. Manual annotation of objects in video is, regardless of annotation type, a tedious and time-consuming process. In particular, for scarcely used image modalities human annotation is hard to justify. In such cases, semi-automatic annotation provides an acceptable option.

    In this work, a recursive, semi-automatic annotation method for video is presented. The proposed method utilizes a state-of-the-art video object segmentation method to propose initial annotations for all frames in a video based on only a few manual object segmentations. In the case of a multi-modal dataset, the multi-modality is exploited to refine the proposed annotations even further. The final tentative annotations are presented to the user for manual correction.

    The method is evaluated on a subset of the RGBT-234 visual-thermal dataset, reducing the workload for a human annotator by approximately 78% compared to full manual annotation. Utilizing the proposed pipeline, sequences are annotated for the VOT-RGBT 2019 challenge.

  • 276.
    Berg, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Pose Recognition for Tracker Initialization Using 3D Models, 2008. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis it is examined whether the pose of an object can be determined by a system trained with a synthetic 3D model of said object. A number of variations of methods using the P-channel representation are examined. Reference images are rendered from the 3D model; features such as gradient orientation and color information are extracted and encoded into P-channels. The P-channel representation is then used to estimate an overlapping channel representation, using B1-spline functions, which yields a density estimate for the feature set. Experiments were conducted with this representation as well as the raw P-channel representation in conjunction with a number of distance measures and estimation methods.

    It is shown that, with correct preprocessing and choice of parameters, the pose can be detected with some accuracy and, if not in real-time, fast enough to be useful in a tracker initialization scenario. It is also concluded that the success rate of the estimation depends heavily on the nature of the object.

  • 277.
    Bergekrans, William
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Automatic Man Overboard Detection with an RGB Camera: Using convolutional neural networks, 2022. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Man overboard is one of the most common and dangerous accidents that can occur when traveling on a boat. Available research on man overboard systems with cameras has focused on man overboard situations taking place from larger ships, which involve a fall from a height. Recreational boat manufacturers often use cord-based kill switches that turn off the engine if the wearer falls overboard. The aim of this thesis is to create a man overboard warning system based on state-of-the-art object detection models that can detect man overboard situations through inputs from a camera. A well performing warning system would allow boat manufacturers to comply with safety regulations and expand the kill-switch coverage to all passengers on the boat. Furthermore, the aim is also to create two new datasets: one dedicated to human detection and one with man overboard fall sequences. YOLOv5 achieved the highest performance on a new human detection dataset, with an average precision of 97%. A Mobilenet-SSD-v1 network based on weights from training on the PASCAL VOC dataset and additional training on the new man overboard dataset is used as the detection model in the final warning system. The man overboard warning system achieves an accuracy of 50% at best, with a precision of 58% and recall of 78%.

  • 278. Bergenhem, Carl
    et al.
    Pettersson, Henrik
    Coelingh, Erik
    Englund, Cristofer
    RISE, Swedish ICT, Viktoria.
    Shladover, Steven
    Tsugawa, Sadayuki
    Adolfsson, Magnus
    Overview of platooning systems, 2012. In: Proceedings of the 19th ITS World Congress, 2012, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of current projects that deal with vehicle platooning. The platooning concept can be defined as a collection of vehicles that travel together, actively coordinated in formation. Some expected advantages of platooning include increased fuel and traffic efficiency, safety and driver comfort. There are many variations of the details of the concept, such as: the goals of platooning, how it is implemented, the mix of vehicles, the requirements on infrastructure, and what is automated (longitudinal and lateral control) and to what level. The following projects are presented: SARTRE – a European platooning project; PATH – a California traffic automation program that includes platooning; GCDC – a cooperative driving initiative; SCANIA platooning; and Energy ITS – a Japanese truck platooning project.

  • 279.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Colour perception graph for characters segmentation, 2014. In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, p. 598-608. Conference paper (Refereed)
    Abstract [en]

    Character recognition in natural images is a challenging problem, as it involves segmenting characters of various colours on various backgrounds. In this article, we present a method for segmenting images that uses a colour perception graph. Our algorithm is inspired by graph cut segmentation techniques; it uses an edge detection technique for filtering the graph before the graph cut, as well as merging segments as a final step. We also present both qualitative and quantitative results, which show that our algorithm performs slightly better and faster than a state-of-the-art algorithm.

  • 280.
    Berger, Cyrille
    Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS), l'Université Toulouse, France.
    Perception de la géométrie de l'environment pour la navigation autonome, 2009. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The goal of mobile robotics research is to give robots the capability to accomplish missions in an environment that might be unknown. To accomplish its mission, the robot needs to execute a given set of elementary actions (movement, manipulation of objects...) which require an accurate localisation of the robot, as well as the construction of a good geometric model of the environment. Thus, a robot will need to make the most of its own sensors, of external sensors, of information coming from another robot, and of existing models coming from a Geographic Information System. The common information is the geometry of the environment. The first part of the presentation will be about the different methods to extract geometric information. The second part will be about the creation of the geometric model using a graph structure, along with a method to retrieve information in the graph to allow the robot to localise itself in the environment.

  • 281.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Strokes detection for skeletonisation of characters shapes, 2014. In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, p. 510-520. Conference paper (Refereed)
    Abstract [en]

    Skeletonisation is a key process in character recognition in natural images. Under the assumption that a character is made of a stroke of uniform colour, with small variation in thickness, the process of recognising characters can be decomposed into three steps. First the image is segmented, then each segment is transformed into a set of connected strokes (skeletonisation), which are then abstracted into a descriptor that can be used to recognise the character. The main issue with skeletonisation is its sensitivity to noise, and especially, the presence of holes in the masks. In this article, a new method for the extraction of strokes is presented, which addresses the problem of holes in the mask and does not use any parameters.

  • 282.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Toward rich geometric map for SLAM: Online Detection of Planes in 2D LIDAR, 2012. In: Proceedings of the International Workshop on Perception for Mobile Robots Autonomy (PEMRA), 2012. Conference paper (Refereed)
    Abstract [en]

    Rich geometric models of the environment are needed for robots to accomplish their missions. However, a robot operating in a large environment would require a compact representation.

    In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame, it is possible to estimate the parameters of that plane. The method is therefore divided into three steps: fitting line segments to the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm.
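
    The first step of the pipeline above, fitting line segments to the scan points, can be sketched with a generic recursive split on the farthest point; this is a standard split-and-fit scheme shown for illustration, not necessarily the paper's exact fitting method, and the thresholds are assumptions.

```python
# Recursive split of an ordered 2-D scan into approximately straight segments.
import numpy as np

def split_into_segments(points, max_dist=0.05, min_points=5):
    """points: (N, 2) array of scan points in angular order; returns (i, j) index pairs."""
    segments = []

    def recurse(i, j):
        if j - i + 1 < min_points:
            return
        p0, p1 = points[i], points[j]
        d = p1 - p0
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # unit normal of the chord
        dist = np.abs((points[i:j + 1] - p0) @ n)                  # point-to-chord distances
        k = int(np.argmax(dist))
        if dist[k] > max_dist:          # too far from a straight line: split there
            recurse(i, i + k)
            recurse(i + k, j)
        else:
            segments.append((i, j))

    recurse(0, len(points) - 1)
    return segments
```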

  • 283.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Lacroix, Simon
    LAAS, France.
    DSeg: Direct Line Segments Detection, 2023. Report (Other academic)
    Abstract [en]

    This paper presents a model-driven approach to detect image line segments. The approach incrementally detects segments on the gradient image using a linear Kalman filter that estimates the supporting line parameters and their associated variances. The algorithm is fast and robust with respect to image noise and illumination variations; it allows the detection of longer line segments than data-driven approaches, and does not require any tedious parameter tuning. An extension of the algorithm that exploits a pyramidal approach to enhance the quality of results is proposed. Results with varying scene illumination and comparisons to classic existing approaches are presented.
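
    The incremental Kalman idea summarized above can be reduced to a toy update over line parameters; the measurement model and noise value below are assumptions for illustration, whereas DSeg itself works on gradient-image pixels.

```python
# Toy Kalman update of line parameters (slope a, intercept b) for y = a*x + b,
# given one new supporting point (x, y) with an assumed measurement variance.
import numpy as np

def kalman_line_update(state, cov, x, y, meas_var=1.0):
    """state: length-2 array [a, b]; cov: 2x2 covariance of the estimate."""
    H = np.array([[x, 1.0]])                 # measurement model: y = a*x + b
    S = H @ cov @ H.T + meas_var             # innovation variance (1x1)
    K = cov @ H.T / S                        # Kalman gain (2x1)
    innovation = y - H @ state
    new_state = state + (K * innovation).ravel()
    new_cov = (np.eye(2) - K @ H) @ cov
    return new_state, new_cov

# Starting from a seed with large covariance, each accepted pixel along the
# gradient ridge tightens the estimate of the supporting line and its variance.
```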

  • 284.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Lacroix, Simon
    LAAS.
    DSeg: Détection directe de segments dans une image, 2010. In: 17ème congrès francophone AFRIF-AFIA Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2010. Conference paper (Refereed)
    Abstract [en]

    This paper presents a model-driven approach to detect image line segments. The approach incrementally detects segments on the gradient image using a linear Kalman filter that estimates the supporting line parameters and their associated variances. The algorithms are fast and robust with respect to image noise and illumination variations, they allow the detection of longer line segments than data-driven approaches, and do not require any tedious parameter tuning. Results with varying scene illumination and comparisons to classic existing approaches are presented.

  • 285.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Lacroix, Simon
    LAAS.
    Modélisation de l'environnement par facettes planes pour la Cartographie et la Localisation Simultanées par stéréovision, 2008. In: Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2008. Conference paper (Refereed)
  • 286.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Lacroix, Simon
    LAAS/CNRS, Univ. of Toulouse, Toulouse, France.
    Using planar facets for stereovision SLAM, 2008. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2008, p. 1606-1611. Conference paper (Refereed)
    Abstract [en]

    In the context of stereovision SLAM, we propose a way to enrich the landmark models. Vision-based SLAM approaches usually rely on interest points associated to a point in the Cartesian space: by adjoining oriented planar patches (if they are present in the environment), we augment the landmark description with an oriented frame. Thanks to this additional information, the robot pose is fully observable with the perception of a single landmark, and the knowledge of the patches orientation helps the matching of landmarks. The paper depicts the chosen landmark model, the way to extract and match them, and presents some SLAM results obtained with such landmarks.

  • 287.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Rudol, Piotr
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Kleiner, Alexander
    iRobot, Pasadena, CA, USA.
    Evaluation of Reactive Obstacle Avoidance Algorithms for a Quadcopter, 2016. In: Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision 2016 (ICARCV), IEEE conference proceedings, 2016, article id Tu31.3. Conference paper (Refereed)
    Abstract [en]

    In this work we investigate reactive avoidance techniques that can be used on board a small quadcopter and that do not require absolute localisation. We propose a local map representation that can be updated with proprioceptive sensors. The local map is centred on the robot and uses spherical coordinates to represent a point cloud; it is updated using a depth sensor, the Inertial Measurement Unit and a registration algorithm. We propose an extension of the Dynamic Window Approach that computes a velocity vector from the current local map, as well as an alternative that uses an OctoMap structure and a two-pass A* search to produce a path, which is then converted to a velocity vector. Both approaches are reactive, as they only make use of local information. The algorithms were evaluated in a simulator that offers a realistic environment, both in terms of control and sensors, and the results were also validated by running the algorithms on a real platform.
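    The local map mentioned above can be sketched as a robot-centred spherical grid: depth points are binned by azimuth and elevation, keeping the nearest range per bin. The bin resolutions and names below are illustrative assumptions, not the representation used in the paper.

    import numpy as np

    def spherical_local_map(points, az_bins=72, el_bins=36, max_range=10.0):
        """Bin body-frame points (x, y, z) into an azimuth/elevation grid of nearest ranges."""
        grid = np.full((az_bins, el_bins), max_range)   # unobserved = max range
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x**2 + y**2 + z**2)
        az = np.arctan2(y, x)                           # [-pi, pi]
        el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
        ai = ((az + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
        ei = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)
        for a, e, rr in zip(ai, ei, r):
            if rr < grid[a, e]:
                grid[a, e] = rr                         # keep the nearest obstacle
        return grid

    # Usage: a candidate velocity (as in a Dynamic Window style selection) can be
    # vetoed or penalised when the bins along its direction hold short ranges.
    cloud = np.random.rand(1000, 3) * 8.0 - 4.0
    print(spherical_local_map(cloud).min())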

  • 288.
    Bergholm, Fredrik
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    The Plenoscope Concept and Image Formation2002In: Proceedings of SSAB 2002, 2002, p. 75-78Conference paper (Other scientific)
  • 289.
    Bergholm, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Adler, Jeremy
    Parmryd, Ingela
    Analysis of Bias in the Apparent Correlation Coefficient Between Image Pairs Corrupted by Severe Noise2010In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 37, no 3, p. 204-219Article in journal (Refereed)
    Abstract [en]

    The correlation coefficient r is a measure of similarity used to compare regions of interest in image pairs. In fluorescence microscopy there is a basic tradeoff between the degree of image noise and the frequency with which images can be acquired, and therefore the ability to follow dynamic events. The correlation coefficient r is commonly used in fluorescence microscopy for colocalization measurements, when the relative distributions of two fluorophores are of interest. Unfortunately, r is known to be biased, understating the true correlation when noise is present, so a better measure of correlation is needed. This article analyses the expected value of r and derives expected-value formulas that yield a procedure for evaluating the bias of r. A Taylor series of so-called invariant factors is analyzed in detail. These formulas indicate ways to correct r and thereby obtain a corrected value that is, on average, free from the influence of noise (unbiased). One possible correction is the attenuated corrected correlation coefficient R, introduced heuristically by Spearman (in Am. J. Psychol. 15:72-101, 1904). An ideal correction formula in terms of expected values is derived; for large samples R tends towards the ideal correction formula and the true noise-free correlation. Correlation measurements using simulation, based on the types of noise found in fluorescence microscopy images, illustrate both the power of the method and the variance of R. We conclude that the correction formula is valid and is particularly useful for making correct analyses of very noisy datasets.
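    A minimal sketch of the Spearman-style attenuation correction mentioned above, assuming two replicate acquisitions are available per channel so that the reliabilities can be estimated; the article's exact expected-value formulas are not reproduced here.

    import numpy as np

    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    rng = np.random.default_rng(0)
    truth_a = rng.random((64, 64))
    truth_b = 0.8 * truth_a + 0.2 * rng.random((64, 64))    # truly correlated signals

    def noisy(img):
        return img + rng.normal(scale=0.5, size=img.shape)  # severe additive noise

    a1, a2 = noisy(truth_a), noisy(truth_a)    # two replicates of channel A
    b1, b2 = noisy(truth_b), noisy(truth_b)    # two replicates of channel B

    r_obs = corr(a1, b1)                       # biased towards zero by the noise
    rel_a, rel_b = corr(a1, a2), corr(b1, b2)  # per-channel reliabilities
    R = r_obs / np.sqrt(rel_a * rel_b)         # attenuation-corrected coefficient

    print(f"observed r = {r_obs:.3f}, corrected R = {R:.3f}, "
          f"true = {corr(truth_a, truth_b):.3f}")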

  • 290.
    Berglund, Christian
    et al.
    Växjö University, Faculty of Humanities and Social Sciences, School of Education.
    Hjelm, Anna
    Växjö University, Faculty of Humanities and Social Sciences, School of Education.
    Integrering av elevers visuella intressen i skolundervisningen2008Student thesis
  • 291.
    Bergman, David
    Swedish Defence University, Institutionen för ledarskap och ledning, Leadership and Command & Control Division Stockholm.
    Mjölken spillde ut sig: artificiell intelligens, etik och autonoma vapensystem2023In: Drönare/UAS: Teknik och förmågor / [ed] Stig Rydell; Mats Olofsson, Stockholm: Kungl Krigsvetenskapsakademien , 2023, p. 61-74Chapter in book (Other academic)
  • 292.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Intelligent Monitoring of the Offset Printing Process2004In: Proceedings of the IASTED International Conference on Neural Networks and Computational Intelligence, ACTA Press, 2004, p. 173-178Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an approach based on neural networks and image analysis for assessing colour deviations in an offset printing process from direct measurements on halftone multicoloured pictures; no measuring areas are printed solely to assess the deviations. A committee of neural networks is trained to assess the ink proportions in a small image area. From only one measurement, the trained committee is capable of estimating the actual amount of printing inks dispersed on paper in the measuring area. To match the measured area of the printed picture with the corresponding area of the original image, when comparing the actual ink proportions with the targeted ones, properties of the 2-D Fourier transform are exploited.
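    The abstract notes that properties of the 2-D Fourier transform are used to match the measured area with the original. One standard way of doing this is phase correlation, sketched below under the assumption of a pure translation between the two patches; whether this is exactly how the authors exploit the transform is an assumption.

    import numpy as np

    def phase_correlation(reference, measured):
        """Estimate the integer translation that maps reference onto measured."""
        F1, F2 = np.fft.fft2(reference), np.fft.fft2(measured)
        cross = np.conj(F1) * F2
        cross /= np.maximum(np.abs(cross), 1e-12)             # keep phase only
        surface = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
        if dy > reference.shape[0] // 2:                      # wrap large shifts
            dy -= reference.shape[0]
        if dx > reference.shape[1] // 2:
            dx -= reference.shape[1]
        return dy, dx

    ref = np.random.rand(128, 128)
    shifted = np.roll(np.roll(ref, 5, axis=0), -7, axis=1)    # known shift (5, -7)
    print(phase_correlation(ref, shifted))                    # -> (5, -7)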

  • 293.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bacauskiene, M.
    Department of Applied Electronics, Kaunas University of Technology.
    Unsupervised colour image segmentation applied to printing quality assessment2005In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 23, no 4, p. 417-425Article in journal (Refereed)
    Abstract [en]

    We present a method for colour image segmentation applied to printing quality assessment in offset lithographic printing, where quality is assessed by measuring the average ink dot size in halftone pictures. The segmentation is accomplished in two stages through classification of image pixels. In the first stage, a rough image segmentation is performed. The results of the first stage are then used to collect a balanced training data set for learning refined parameters of the decision rules. The developed software is successfully used in a printing shop to assess the ink dot size on paper and printing plates.
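    The dot-size measurement itself can be sketched very simply: once pixels have been classified into ink and paper, the average halftone dot size is the mean area of the connected ink components. The plain threshold below stands in for the paper's two-stage classification and is only an assumption for illustration.

    import numpy as np
    from scipy import ndimage

    def average_dot_size(grey, ink_threshold=0.5):
        """Mean area (in pixels) of connected ink regions in a grey-level image."""
        ink = grey < ink_threshold                  # dark pixels taken as ink
        labels, n = ndimage.label(ink)              # 4-connected components
        if n == 0:
            return 0.0
        areas = ndimage.sum(ink, labels, index=range(1, n + 1))
        return float(np.mean(areas))

    # Usage on a synthetic halftone-like pattern:
    yy, xx = np.mgrid[0:100, 0:100]
    pattern = 0.5 + 0.5 * np.sin(0.8 * xx) * np.sin(0.8 * yy)
    print(average_dot_size(pattern))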

  • 294.
    Bergnéhr, Leo
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Segmentation and Alignment of 3-D Transaxial Myocardial Perfusion Images and Automatic Dopamin Transporter Quantification2008Independent thesis Basic level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Nuclear medical imaging such as SPECT (Single Photon Emission Computed Tomography) is an imaging modality that is readily used in many applications for measuring physiological properties of the human body. One very common type of examination using SPECT is the measurement of myocardial perfusion (blood flow in the heart tissue), which is often used to examine, for example, a possible myocardial infarction (heart attack). In order for doctors to give a qualitative diagnosis based on these images, the images must first be segmented and rotated by a medical technologist. This is necessary because the heart is not positioned and oriented identically for different patients, or for the same patient at different examinations, whereas a consistent orientation is an essential assumption for the doctor when examining the images. Consequently, as different technologists with different amounts of experience and expertise will rotate images differently, variability between operators arises and can become a problem in the diagnostic process.

    Another type of nuclear medical examination is the quantification of dopamine transporters in the basal ganglia of the brain, commonly performed for patients showing symptoms of Parkinson's disease or similar disorders. In order to specify the severity of the disease, a scheme for calculating different ratios between parts of the dopamine transporter area is often used. This is tedious work for the person performing the quantification, and despite the acquired three-dimensional images, quantification is too often performed on only one or a few slices of the image volume. As with myocardial perfusion examinations, variability between different operators is a possible source of errors.

    In this thesis, a novel method for automatically segmenting the left ventricle of the heart in SPECT images is presented. The segmentation is based on an intensity-invariant, local-phase-based approach, which removes the difficulty posed by the commonly varying intensity in myocardial perfusion images. Additionally, the method is used to estimate the orientation angle of the left ventricle. Furthermore, the method is slightly adjusted, and a new approach to automatically quantifying dopamine transporters in the basal ganglia using the DaTSCAN radiotracer is proposed.
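    For the quantification part, the kind of ratio referred to above can be illustrated with a specific binding ratio comparing a striatal region of interest with a non-specific reference region; the exact fractions used in the thesis are not reproduced here, and the region masks are assumed to come from the segmentation step.

    import numpy as np

    def specific_binding_ratio(volume, striatum_mask, reference_mask):
        """(mean striatal counts - mean reference counts) / mean reference counts."""
        striatum = volume[striatum_mask].mean()
        reference = volume[reference_mask].mean()
        return (striatum - reference) / reference

    # Usage on synthetic data with elevated uptake inside the "striatum":
    vol = np.random.poisson(lam=20, size=(64, 64, 64)).astype(float)
    striatum = np.zeros(vol.shape, dtype=bool); striatum[30:34, 28:36, 28:36] = True
    reference = np.zeros(vol.shape, dtype=bool); reference[10:14, 10:20, 10:20] = True
    vol[striatum] += 40.0                            # simulated specific uptake
    print(round(specific_binding_ratio(vol, striatum, reference), 2))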

    Download full text (pdf)
    FULLTEXT01
  • 295.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Interactive Perception: From Scenes to Objects2012Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis builds on the observation that robots, like humans, do not have enough experience to handle all situations from the start. Therefore they need tools to cope with new situations, unknown scenes and unknown objects. In particular, this thesis addresses objects. How can a robot realize what objects are if it looks at a scene and has no knowledge about objects? How can it recover from situations where its hypotheses about what it sees are wrong? Even if it has built up experience in the form of learned objects, there will be situations where it will be uncertain or mistaken, and it will therefore still need the ability to correct errors. Much of our daily lives involves interactions with objects, and the same will be true for robots existing among us. Apart from being able to identify individual objects, the robot will therefore need to manipulate them.

    Throughout the thesis, different aspects of how to deal with these questions are addressed. The focus is on the problem of a robot automatically partitioning a scene into its constituent objects. It is assumed that the robot does not know about specific objects and is therefore considered inexperienced. Instead, a method is proposed that generates object hypotheses given visual input, and then enables the robot to recover from erroneous hypotheses. This is done by the robot drawing on a human's experience, as well as by enabling it to interact with the scene itself and monitoring whether the observed changes are in line with its current beliefs about the scene's structure.

    Furthermore, the task of manipulating unknown objects is explored. This also serves as a motivation for why the scene partitioning problem is essential to solve. Finally, aspects of monitoring the outcome of a manipulation are investigated by observing the evolution of flexible objects in both static and dynamic scenes. All methods developed for this thesis have been tested and evaluated on real robotic platforms. These evaluations show the importance of having a system capable of recovering from errors, and that the robot can take advantage of human experience using just simple commands.

    Download full text (pdf)
    thesis_niklas_bergstrom
  • 296.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Roberson-Johnson, Matthew
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Scene Analysis2010Conference paper (Refereed)
  • 297.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Senoo, Taku
    Ishikawa, Masatoshi
    On-line learning of temporal state models for flexible objects2012In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE , 2012, p. 712-718Conference paper (Refereed)
    Abstract [en]

    State estimation and control are intimately related processes in robot handling of flexible and articulated objects. While for rigid objects we can generate a CAD model beforehand and state estimation boils down to estimating the pose or velocity of the object, in the case of flexible and articulated objects, such as a cloth, the representation of the object's state is heavily dependent on the task and its execution. For example, when folding a cloth, the representation will mainly depend on the way the folding is executed.

    Download full text (pdf)
    bergstrom12humanoids
  • 298.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Tokyo University.
    Senoo, Taku
    Tokyo University.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ishikawa, Masatoshi
    Tokyo University.
    State Recognition of Deformable Objects Using Shape Context2011In: The 29th Annual Conference of the Robotics Society of Japan, 2011Conference paper (Other academic)
  • 299.
    Bernander, Karl B.
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Gustavsson, Kenneth
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Selig, Bettina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sintorn, Ida-Maria
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Luengo Hendriks, Cris L.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Improving the stochastic watershed2013In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 34, no 9, p. 993-1000Article in journal (Refereed)
    Abstract [en]

    The stochastic watershed is an unsupervised segmentation tool recently proposed by Angulo and Jeulin. By repeated application of the seeded watershed with randomly placed markers, a probability density function for object boundaries is created. In a second step, the algorithm then generates a meaningful segmentation of the image using this probability density function. The method performs best when the image contains regions of similar size, since it tends to break up larger regions and merge smaller ones. We propose two simple modifications that greatly improve the properties of the stochastic watershed: (1) add noise to the input image at every iteration, and (2) distribute the markers using a randomly placed grid. The noise strength is a new parameter to be set, but the output of the algorithm is not very sensitive to this value. In return, the output becomes less sensitive to the two parameters of the standard algorithm. The improved algorithm does not break up larger regions, effectively making the algorithm useful for a larger class of segmentation problems.
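    The first stage of the improved algorithm can be sketched as follows: at every iteration, noise is added to the input, seeds are placed on a randomly shifted grid, and boundary occurrences are accumulated into a probability map. The parameter values below are illustrative assumptions, and the second step (deriving the final segmentation from this map) is not shown.

    import numpy as np
    from skimage.segmentation import watershed, find_boundaries

    def boundary_pdf(image, iterations=50, grid_step=16, noise_sigma=0.05, seed=0):
        """Accumulate a boundary probability map with noisy inputs and grid markers."""
        rng = np.random.default_rng(seed)
        pdf = np.zeros(image.shape, dtype=float)
        for _ in range(iterations):
            noisy = image + rng.normal(scale=noise_sigma, size=image.shape)
            markers = np.zeros(image.shape, dtype=int)
            oy, ox = rng.integers(0, grid_step, size=2)   # random grid offset
            label = 1
            for y in range(oy, image.shape[0], grid_step):
                for x in range(ox, image.shape[1], grid_step):
                    markers[y, x] = label
                    label += 1
            seg = watershed(noisy, markers)
            pdf += find_boundaries(seg, mode='inner')
        return pdf / iterations

    img = np.random.rand(128, 128)
    print(boundary_pdf(img).max())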

    Download full text (pdf)
    fulltext
  • 300.
    Bernard, Florian
    et al.
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Thunberg, Johan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Goncalves, Jorge
    LCSB, University of Luxembourg, Esch-sur-Alzette, Luxembourg.
    Theobalt, Christian
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Synchronisation of partial multi-matchings via non-negative factorisations2019In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 92, p. 146-155Article in journal (Refereed)
    Abstract [en]

    In this work we study permutation synchronisation for the challenging case of partial permutations, which plays an important role in the problem of matching multiple objects (e.g. images or shapes). The term synchronisation refers to the property that the set of pairwise matchings is cycle-consistent, i.e. in the full matching case all compositions of pairwise matchings over cycles must be equal to the identity. Motivated by clustering and matrix factorisation perspectives of cycle-consistency, we derive an algorithm to tackle the permutation synchronisation problem based on non-negative factorisations. In order to deal with the inherent non-convexity of the permutation synchronisation problem, we use an initialisation procedure based on a novel rotation scheme applied to the solution of the spectral relaxation. Moreover, this rotation scheme facilitates a convenient Euclidean projection to obtain a binary solution after solving our relaxed problem. In contrast to state-of-the-art methods, our approach is guaranteed to produce cycle-consistent results. We experimentally demonstrate the efficacy of our method and show that it achieves better results compared to existing methods. © 2019 Elsevier Ltd
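    The cycle-consistency property exploited above can be illustrated with a small sketch: if each object's points are assigned to a common universe through (partial) assignment matrices U_i, the pairwise matchings P_ij = U_i U_j^T are cycle-consistent by construction. The paper's non-negative factorisation, spectral initialisation and rotation scheme are not reproduced; the encoding below is an assumption for illustration.

    import numpy as np

    def assignment(universe_ids, universe_size):
        """Rows are one-hot universe assignments; -1 marks an unmatched point."""
        U = np.zeros((len(universe_ids), universe_size))
        for k, u in enumerate(universe_ids):
            if u >= 0:
                U[k, u] = 1.0
        return U

    d = 4                                      # universe size
    U1 = assignment([0, 1, 2], d)              # object 1
    U2 = assignment([1, 2, -1, 3], d)          # object 2 (one unmatched point)
    U3 = assignment([2, 0, 3], d)              # object 3

    P12, P23, P13 = U1 @ U2.T, U2 @ U3.T, U1 @ U3.T
    # Composing matchings over the cycle 1 -> 2 -> 3 never exceeds the direct
    # matching 1 -> 3 (with equality in the full-matching case).
    print(np.all(P12 @ P23 <= P13 + 1e-9))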
