151 - 200 of 1726
  • 151.
    Bengtsson, E.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    The technical development in the ICT-field (2000). In: IT at school between vision and practice - a research overview, 2000, p. 39-55. Chapter in book (Other scientific)
  • 152.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of 3D images of molecules, cells, tissues and organs (2007). In: Medicinteknikdagarna 2007, 2007, p. 1-. Conference paper (Other scientific)
    Abstract [en]

    Our world is three dimensional. With our eyes we mainly see the surfaces of 3D objects, and in conventional imaging we see projections of parts of the 3D world down to 2D. But over the last decades new imaging techniques such as tomography and confocal microscopy have evolved that make true 3D volume images available. These images can reveal information about the inner properties and conditions of objects, e.g. our bodies, that can be of immense value to science and medicine. But to really explore the information in these images we need computer support.

    At the Centre for Image Analysis in Uppsala we are developing methods for the analysis and visualisation of volume images. A nice aspect of image processing methods is that they in most cases are independent of the scale in the images. In this presentation we will give examples of how images of widely different scales can be analysed and visualised.

    - At the highest resolution we have images of protein molecules created by cryo-electron tomography with voxels of a few nanometers.

    - Using confocal microscopy we can also image single molecules, but then only seeing them as bright spots that need to be localized at micrometer scales in the cells.

    - The cells build up tissue and using conventional pathology stains or micro CT we can image the tissue in 2D and 3D. We are using such images to develop methods for studying tissue integration of implants.

    - Finally conventional X-ray tomography and magnetic resonance tomography provide images on the organ level with voxels in the millimetre range. We are developing methods for liver segmentation in CT data and visualising the contrast uptake over time in MR angiography images of breasts.

  • 153.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computerized Cell Image Analysis: Past, Present and Future (2003). Conference paper (Refereed)
  • 154.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computerized Cell Image Processing in Healthcare (2005). In: Proceedings of Healthcomm2005, 2005, p. 11-17. Conference paper (Refereed)
    Abstract [en]

    The visual interpretation of images is at the core of most medical diagnostic procedures, and the final decision for many diseases, including cancer, is based on microscopic examination of cells and tissues. Through screening of cell samples the incidence and mortality of cervical cancer have been reduced significantly. The visual interpretation is, however, tedious and in many cases error-prone. Many attempts have therefore been made to supplement or replace human visual inspection with computer analysis and to automate some of the more tedious visual screening tasks. Computers and computer networks have also been used to manage, store, transmit and display images of cells and tissues, making it possible to visually analyze cells from remote locations. In this presentation these developments are traced from their very beginning through the present situation and into the future.

  • 155.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Recognizing signs of malignancy: The quest for computer assisted cancer screening and diagnosis systems (2010). In: International Conference on Computational Intelligence and Computing Research (ICCIC), 2010 IEEE, Coimbatore, India: IEEE Digital Library, 2010, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    Almost all cancers are diagnosed through visual examination of microscopic tissue samples. Visual screening of cell samples, so-called PAP-smears, has drastically reduced the incidence of cervical cancer in countries that have implemented population-wide screening programs. But the visual examination is tedious, subjective and expensive. There has therefore been much research aiming for computer-assisted or automated cell image analysis systems for cancer detection and diagnosis. Progress has been made, but most of cytology and pathology is still done visually. In this presentation I will discuss some of the major issues involved, examine some of the proposed solutions and give some comments about the state of the art.

  • 156.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Curic, Vladimir
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wadelius, Lena
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wernersson, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Centre for Image Analysis Annual Report 2009 (2010). Collection (editor) (Other academic)
  • 157.
    Bengtsson, Ewert
    et al.
    Uppsala University.
    Dahlqvist, Bengt
    Uppsala University.
    Eriksson, Olle
    Uppsala University.
    Nordin, Bo
    Uppsala University.
    Jarkrans, Torsten
    Uppsala University.
    Stenkvist, Björn
    Computer-assisted Scanning Microscopy in Cytology (1982). In: Proceedings of the IEEE International Symposium on Medical Imaging and Image Interpretation, 1982, p. 497-503. Conference paper (Refereed)
  • 158.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Norell, Kristin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wadelius, Lena
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Wernersson, Erik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Annual Report 2008 (2009). Collection (editor) (Other academic)
  • 159.
    Bengtsson, Ewert
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Rodenacker, Karsten
    A feature set for cytometry on digitized microscopic images (2003). In: Analytical Cellular Pathology, Vol. 24, no 1, p. 1-36. Article in journal (Refereed)
  • 160.
    Bengtsson, Ewert; Wählby, Carolina; Lindblad, Joakim
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Robust Cell Image Segmentation Methods (2003). Conference paper (Refereed)
  • 161.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Robust cell image segmentation methods (2004). In: Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications, ISSN 1054-6618, Vol. 14, no 2, p. 157-167. Article in journal (Refereed)
    Abstract [en]

    Biomedical cell image analysis is one of the main application fields of computerized image analysis. This paper outlines the field and the different analysis steps related to it. Relative advantages of different approaches to the crucial step of image segmentation are discussed. Cell image segmentation can be seen as a modeling problem where different approaches are more or less explicitly based on cell models. For example, thresholding methods can be seen as being based on a model stating that cells have an intensity that is different from the surroundings. More robust segmentation can be obtained if a combination of features, such as intensity, edge gradients, and cellular shape, is used. The seeded watershed transform is proposed as the most useful tool for incorporating such features into the cell model. These concepts are illustrated by three real-world problems.
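    The seeded-watershed idea sketched in this abstract can be illustrated in a few lines of Python; the pre-processing, thresholds and scikit-image calls below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: seeded watershed segmentation of bright cells on a dark background.
from skimage import filters, measure, segmentation

def segment_cells(image):
    # Smooth, then take conservative seeds well above the Otsu threshold
    smooth = filters.gaussian(image, sigma=2)
    threshold = filters.threshold_otsu(smooth)
    seeds = measure.label(smooth > 1.2 * threshold)

    # Add a single background seed from clearly dark pixels
    background = smooth < 0.5 * threshold
    seeds[background] = seeds.max() + 1

    # Watershed on the gradient image combines intensity and edge cues,
    # in line with the cell model discussed in the abstract
    gradient = filters.sobel(smooth)
    return segmentation.watershed(gradient, markers=seeds)
```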

  • 162.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Detection and Tracking in Thermal Infrared Imagery (2016). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Thermal cameras have historically been of interest mainly for military applications. Increasing image quality and resolution combined with decreasing price and size during recent years have, however, opened up new application areas. They are now widely used for civilian applications, e.g., within industry, to search for missing persons, in automotive safety, as well as for medical applications. Thermal cameras are useful as soon as it is possible to measure a temperature difference. Compared to cameras operating in the visual spectrum, they are advantageous due to their ability to see in total darkness, robustness to illumination variations, and less intrusion on privacy.

    This thesis addresses the problem of detection and tracking in thermal infrared imagery. Visual detection and tracking of objects in video are research areas that have been and currently are subject to extensive research. Indications of their popularity are recent benchmarks such as the annual Visual Object Tracking (VOT) challenges, the Object Tracking Benchmarks, the series of workshops on Performance Evaluation of Tracking and Surveillance (PETS), and the workshops on Change Detection. Benchmark results indicate that detection and tracking are still challenging problems.

    A common belief is that detection and tracking in thermal infrared imagery is identical to detection and tracking in grayscale visual imagery. This thesis argues that the preceding allegation is not true. The characteristics of thermal infrared radiation and imagery pose certain challenges to image analysis algorithms. The thesis describes these characteristics and challenges as well as presents evaluation results confirming the hypothesis.

    Detection and tracking are often treated as two separate problems. However, some tracking methods, e.g. template-based tracking methods, base their tracking on repeated specific detections. They learn a model of the object that is adaptively updated. That is, detection and tracking are performed jointly. The thesis includes a template-based tracking method designed specifically for thermal infrared imagery, describes a thermal infrared dataset for evaluation of template-based tracking methods, and provides an overview of the first challenge on short-term, single-object tracking in thermal infrared video. Finally, two applications employing detection and tracking methods are presented.

  • 163.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB Linköping, Sweden.
    Classifying district heating network leakages in aerial thermal imagery (2014). Conference paper (Other academic)
    Abstract [en]

    In this paper we address the problem of automatically detecting leakages in underground pipes of district heating networks from images captured by an airborne thermal camera. The basic idea is to classify each relevant image region as a leakage if its temperature exceeds a threshold. This simple approach yields a significant number of false positives. We propose to address this issue by machine learning techniques and provide extensive experimental analysis on real-world data. The results show that this postprocessing step significantly improves the usefulness of the system.
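    A hedged sketch of the two-step idea described above (temperature thresholding followed by machine-learning classification of the candidate regions); the feature set and the random-forest classifier are illustrative assumptions, not necessarily the classifier used in the paper.

```python
# Sketch: reduce false positives among thresholded hot regions by
# classifying simple region features. Classifier choice is an assumption.
from skimage import measure
from sklearn.ensemble import RandomForestClassifier

def candidate_regions(thermal_image, temp_threshold):
    """Connected regions whose temperature exceeds the threshold."""
    labels = measure.label(thermal_image > temp_threshold)
    return measure.regionprops(labels, intensity_image=thermal_image)

def region_features(region):
    """Simple geometric and intensity features for one candidate."""
    return [region.area, region.eccentricity,
            region.mean_intensity, region.max_intensity]

# Training data: candidate regions with manual true/false leakage labels.
# X = [region_features(r) for r in labelled_regions]; y = [0, 1, ...]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X, y)

# At detection time, keep only regions the classifier accepts:
# kept = [r for r in candidate_regions(image, temp_threshold=35.0)
#         if clf.predict([region_features(r)])[0] == 1]
```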

  • 164.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A thermal infrared dataset for evaluation of short-term tracking methods (2015). Conference paper (Other academic)
    Abstract [en]

    During recent years, thermal cameras have decreased in both size and cost while improving image quality. The area of use for such cameras has expanded with many exciting applications, many of which require tracking of objects. While being subject to extensive research in the visual domain, tracking in thermal imagery has historically been of interest mainly for military purposes. The available thermal infrared datasets for evaluating methods addressing these problems are few, and those that do exist are not challenging enough for today's tracking algorithms. Therefore, we hereby propose a thermal infrared dataset for evaluation of short-term tracking methods. The dataset consists of 20 sequences which have been collected from multiple sources, and the data format used is in accordance with the Visual Object Tracking (VOT) Challenge.

  • 165.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    A Thermal Object Tracking Benchmark (2015). Conference paper (Refereed)
    Abstract [en]

    Short-term single-object (STSO) tracking in thermal images is a challenging problem relevant in a growing number of applications. In order to evaluate STSO tracking algorithms on visual imagery, there are de facto standard benchmarks. However, we argue that tracking in thermal imagery is different than in visual imagery, and that a separate benchmark is needed. The available thermal infrared datasets are few and the existing ones are not challenging for modern tracking algorithms. Therefore, we hereby propose a thermal infrared benchmark according to the Visual Object Tracking (VOT) protocol for evaluation of STSO tracking methods. The benchmark includes the new LTIR dataset containing 20 thermal image sequences which have been collected from multiple sources and annotated in the format used in the VOT Challenge. In addition, we show that the ranking of different tracking principles differ between the visual and thermal benchmarks, confirming the need for the new benchmark.

  • 166.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering.
    Channel Coded Distribution Field Tracking for Thermal Infrared Imagery (2016). In: Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2016), IEEE, 2016, p. 1248-1256. Conference paper (Refereed)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. The fast progress has been possible thanks to the development of new template-based tracking methods with online template updates, methods which have not been explored for TIR tracking. Instead, tracking methods used for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a template-based tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. In order to avoid background contamination of the object template, we propose to exploit background information for the online template update and to adaptively select the object region used for tracking. Moreover, we propose a novel method for estimating object scale change. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Further, the proposed tracker, ABCD, and the VOT-TIR2015 winner SRDCFir are evaluated on maritime data. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.
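    A minimal numpy sketch of the distribution-field patch representation that the abstract argues for; the bin count, smoothing widths and the exponential template update are illustrative assumptions and do not reproduce the ABCD tracker itself.

```python
# Sketch: distribution field of a grayscale patch. Each intensity bin becomes
# a spatially smoothed indicator image, giving a soft per-pixel histogram.
import numpy as np
from scipy.ndimage import gaussian_filter

def distribution_field(patch, n_bins=16, spatial_sigma=2.0, bin_sigma=1.0):
    patch = patch.astype(float)
    bins = np.floor(patch / (patch.max() + 1e-12) * (n_bins - 1)).astype(int)
    field = np.zeros((n_bins,) + patch.shape)
    for b in range(n_bins):
        indicator = (bins == b).astype(float)                 # where bin b occurs
        field[b] = gaussian_filter(indicator, spatial_sigma)  # smooth in space
    # Smooth across the bin dimension as well
    return gaussian_filter(field, sigma=(bin_sigma, 0, 0))

def df_distance(field_a, field_b):
    """L1 distance between two distribution fields, used for matching."""
    return np.abs(field_a - field_b).sum()

# Assumed template update with exponential forgetting:
# template = (1 - alpha) * template + alpha * distribution_field(new_patch)
```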

  • 167.
    Berg, Amanda
    et al.
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Electrical Engineering, Computer Vision. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Syst Tekn AB, Diskettgatan 11 B, SE-58335 Linkoping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Enhanced analysis of thermographic images for monitoring of district heat pipe networks (2016). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 83, no 2, p. 215-223. Article in journal (Refereed)
    Abstract [en]

    We address two problems related to large-scale aerial monitoring of district heating networks. First, we propose a classification scheme to reduce the number of false alarms among automatically detected leakages in district heating networks. The leakages are detected in images captured by an airborne thermal camera, and each detection corresponds to an image region with abnormally high temperature. This approach yields a significant number of false positives, and we propose to reduce this number in two steps; by (a) using a building segmentation scheme in order to remove detections on buildings, and (b) to use a machine learning approach to classify the remaining detections as true or false leakages. We provide extensive experimental analysis on real-world data, showing that this post-processing step significantly improves the usefulness of the system. Second, we propose a method for characterization of leakages over time, i.e., repeating the image acquisition one or a few years later and indicate areas that suffer from an increased energy loss. We address the problem of finding trends in the degradation of pipe networks in order to plan for long-term maintenance, and propose a visualization scheme exploiting the consecutive data collections.

  • 168.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Generating Visible Spectrum Images from Thermal Infrared (2018). Conference paper (Refereed)
    Abstract [en]

    Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB) images, is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing grayscale to RGB, so called, colorization methods cannot be applied to TIR images directly since those methods only estimate the chrominance and not the luminance. In the absence of conventional colorization methods, we propose two fully automatic TIR to visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively.

  • 169.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Object Tracking in Thermal Infrared Imagery based on Channel Coded Distribution Fields (2017). Conference paper (Other academic)
    Abstract [en]

    We address short-term, single-object tracking, a topic that is currently seeing fast progress for visual video, for the case of thermal infrared (TIR) imagery. Tracking methods designed for TIR are often subject to a number of constraints, e.g., warm objects, low spatial resolution, and static camera. As TIR cameras become less noisy and get higher resolution these constraints are less relevant, and for emerging civilian applications, e.g., surveillance and automotive safety, new tracking methods are needed. Due to the special characteristics of TIR imagery, we argue that template-based trackers based on distribution fields should have an advantage over trackers based on spatial structure features. In this paper, we propose a templatebased tracking method (ABCD) designed specifically for TIR and not being restricted by any of the constraints above. The proposed tracker is evaluated on the VOT-TIR2015 and VOT2015 datasets using the VOT evaluation toolkit and a comparison of relative ranking of all common participating trackers in the challenges is provided. Experimental results show that the ABCD tracker performs particularly well on thermal infrared sequences.

  • 170.
    Berg, Amanda
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Häger, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    An Overview of the Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge (2016). Conference paper (Other academic)
    Abstract [en]

    The Thermal Infrared Visual Object Tracking (VOT-TIR2015) Challenge was organized in conjunction with ICCV2015. It was the first benchmark on short-term, single-target tracking in thermal infrared (TIR) sequences. The challenge aimed at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. It was based on the VOT2013 Challenge, but introduced the following novelties: (i) the utilization of the LTIR (Linköping TIR) dataset, (ii) adaptation of the VOT2013 attributes to thermal data, (iii) a similar evaluation to that of VOT2015. This paper provides an overview of the VOT-TIR2015 Challenge as well as the results of the 24 participating trackers.

  • 171.
    Berg, Martin
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Pose Recognition for Tracker Initialization Using 3D Models (2008). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis it is examined whether the pose of an object can be determined by a system trained with a synthetic 3D model of said object. A number of variations of methods using the P-channel representation are examined. Reference images are rendered from the 3D model; features such as gradient orientation and color information are extracted and encoded into P-channels. The P-channel representation is then used to estimate an overlapping channel representation, using B1-spline functions, to estimate a density function for the feature set. Experiments were conducted with this representation as well as the raw P-channel representation in conjunction with a number of distance measures and estimation methods.

    It is shown that, with correct preprocessing and choice of parameters, the pose can be detected with some accuracy and, if not in real-time, fast enough to be useful in a tracker initialization scenario. It is also concluded that the success rate of the estimation depends heavily on the nature of the object.

  • 172. Bergenhem, Carl
    et al.
    Pettersson, Henrik
    Coelingh, Erik
    Englund, Cristofer
    RISE, Swedish ICT, Viktoria.
    Shladover, Steven
    Tsugawa, Sadayuki
    Adolfsson, Magnus
    Overview of platooning systems (2012). In: Proceedings of the 19th ITS World Congress, 2012, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of current projects that deal with vehicle platooning. The platooning concept can be defined as a collection of vehicles that travel together, actively coordinated in formation. Some expected advantages of platooning include increased fuel and traffic efficiency, safety and driver comfort. There are many variations of the details of the concept, such as: the goals of platooning, how it is implemented, the mix of vehicles, the requirements on infrastructure, what is automated (longitudinal and lateral control) and to what level. The following projects are presented: SARTRE – a European platooning project; PATH – a California traffic automation program that includes platooning; GCDC – a cooperative driving initiative; SCANIA platooning; and Energy ITS – a Japanese truck platooning project.

  • 173.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Colour perception graph for characters segmentation (2014). In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, p. 598-608. Conference paper (Refereed)
    Abstract [en]

    Character recognition in natural images is a challenging problem, as it involves segmenting characters of various colours on various backgrounds. In this article, we present a method for segmenting images that uses a colour perception graph. Our algorithm is inspired by graph-cut segmentation techniques; it uses an edge detection technique for filtering the graph before the graph cut, as well as merging segments as a final step. We also present both qualitative and quantitative results, which show that our algorithm performs slightly better and faster than a state-of-the-art algorithm.

  • 174.
    Berger, Cyrille
    Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS), l'Université Toulouse, France.
    Perception de la géométrie de l'environment pour la navigation autonome [Perception of environment geometry for autonomous navigation] (2009). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The goal of mobile robotics research is to give robots the capability to accomplish missions in an environment that might be unknown. To accomplish its mission, the robot needs to execute a given set of elementary actions (movement, manipulation of objects, etc.) which require an accurate localisation of the robot, as well as the construction of a good geometric model of the environment. Thus, a robot will need to make the most of its own sensors, of external sensors, of information coming from another robot, and of existing models coming from a Geographic Information System. The common information is the geometry of the environment. The first part of the presentation covers the different methods to extract geometric information. The second part covers the creation of the geometric model using a graph structure, along with a method to retrieve information from the graph to allow the robot to localise itself in the environment.

  • 175.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Strokes detection for skeletonisation of characters shapes (2014). In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part II / [ed] George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Ryan McMahan, Jason Jerald, Hui Zhang, Steven M. Drucker, Chandra Kambhamettu, Maha El Choubassi, Zhigang Deng, Mark Carlson, Springer, 2014, p. 510-520. Conference paper (Refereed)
    Abstract [en]

    Skeletonisation is a key process in character recognition in natural images. Under the assumption that a character is made of a stroke of uniform colour, with small variation in thickness, the process of recognising characters can be decomposed into three steps. First the image is segmented, then each segment is transformed into a set of connected strokes (skeletonisation), which are then abstracted into a descriptor that can be used to recognise the character. The main issue with skeletonisation is its sensitivity to noise, and especially to the presence of holes in the masks. In this article, a new method for the extraction of strokes is presented which addresses the problem of holes in the mask and does not use any parameters.

  • 176.
    Berger, Cyrille
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Toward rich geometric map for SLAM: Online Detection of Planes in 2D LIDAR (2012). In: Proceedings of the International Workshop on Perception for Mobile Robots Autonomy (PEMRA), 2012. Conference paper (Refereed)
    Abstract [en]

    Rich geometric models of the environment are needed for robots to accomplish their missions. However, a robot operating in a large environment would require a compact representation.

    In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame, it is possible to estimate the parameters of that plane. The method is therefore divided into three steps: fitting line segments on the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm.
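    The first step (fitting a line segment to a run of 2D scan points) can be sketched with a total-least-squares fit; the residual threshold and the plane bookkeeping that follow are assumptions, not the paper's algorithm.

```python
# Sketch: total-least-squares line fit to consecutive 2D laser scan points.
# A plane intersected by the scan plane shows up as such a line segment.
import numpy as np

def fit_line_segment(points):
    """points: (N, 2) array. Returns unit normal n, offset d (n.x = d),
    and the worst point-to-line residual."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Direction of least variance of the centered points is the line normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    d = float(normal @ centroid)
    residuals = np.abs(centered @ normal)
    return normal, d, residuals.max()

# Tracking (normal, d) over consecutive scans, together with the sensor poses,
# is what constrains the 3D plane parameters in the graph-based SLAM back end.
```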

  • 177.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab. Linköping University, The Institute of Technology.
    Lacroix, Simon
    LAAS.
    DSeg: Détection directe de segments dans une image [DSeg: direct detection of segments in an image] (2010). In: 17ème congrès francophone AFRIF-AFIA Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2010. Conference paper (Refereed)
    Abstract [en]

    This paper presents a model-driven approach to detect image line segments. The approach incrementally detects segments on the gradient image using a linear Kalman filter that estimates the supporting line parameters and their associated variances. The algorithms are fast and robust with respect to image noise and illumination variations, they allow the detection of longer line segments than data-driven approaches, and they do not require any tedious parameter tuning. Results with varying scene illumination and comparisons to classic existing approaches are presented.

  • 178.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Lacroix, Simon
    LAAS.
    Modélisation de l'environnement par facettes planes pour la Cartographie et la Localisation Simultanées par stéréovision [Environment modelling with planar facets for simultaneous localisation and mapping by stereovision] (2008). In: Reconnaissance des Formes et Intelligence Artificielle (RFIA), 2008. Conference paper (Refereed)
  • 179.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, KPLAB - Knowledge Processing Lab.
    Lacroix, Simon
    LAAS/CNRS, Univ. of Toulouse, Toulouse, France.
    Using planar facets for stereovision SLAM (2008). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2008, p. 1606-1611. Conference paper (Refereed)
    Abstract [en]

    In the context of stereovision SLAM, we propose a way to enrich the landmark models. Vision-based SLAM approaches usually rely on interest points associated to a point in the Cartesian space: by adjoining oriented planar patches (if they are present in the environment), we augment the landmark description with an oriented frame. Thanks to this additional information, the robot pose is fully observable with the perception of a single landmark, and the knowledge of the patches orientation helps the matching of landmarks. The paper depicts the chosen landmark model, the way to extract and match them, and presents some SLAM results obtained with such landmarks.

  • 180.
    Berger, Cyrille
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Rudol, Piotr
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Kleiner, Alexander
    iRobot, Pasadena, CA, USA.
    Evaluation of Reactive Obstacle Avoidance Algorithms for a Quadcopter (2016). In: Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision 2016 (ICARCV), IEEE conference proceedings, 2016, article id Tu31.3. Conference paper (Refereed)
    Abstract [en]

    In this work we investigate reactive avoidance techniques which can be used on board a small quadcopter and which do not require absolute localisation. We propose a local map representation which can be updated with proprioceptive sensors. The local map is centred around the robot and uses spherical coordinates to represent a point cloud. The local map is updated using a depth sensor, the Inertial Measurement Unit and a registration algorithm. We propose an extension of the Dynamic Window Approach to compute a velocity vector based on the current local map. We also propose to use an OctoMap structure to compute a 2-pass A* which provides a path that is converted to a velocity vector. Both approaches are reactive as they only make use of local information. The algorithms were evaluated in a simulator which offers a realistic environment, both in terms of control and sensors. The results obtained were also validated by running the algorithms on a real platform.
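    A hedged sketch of the Dynamic Window Approach scoring loop that the abstract extends, reduced here to a 2D velocity search; the cost terms, weights and the local-map clearance query are illustrative assumptions.

```python
# Sketch: score candidate velocities over a short horizon and pick the best.
# clearance(p) is assumed to query the robot-centred local map for the
# distance from point p to the nearest obstacle.
import numpy as np

def dwa_velocity(goal_dir, clearance, v_max=1.0, n_samples=15, horizon=1.0):
    """goal_dir: unit 2D vector towards the goal, in the robot frame."""
    best, best_score = np.zeros(2), -np.inf
    for vx in np.linspace(-v_max, v_max, n_samples):
        for vy in np.linspace(-v_max, v_max, n_samples):
            v = np.array([vx, vy])
            endpoint = v * horizon            # straight-line rollout
            margin = clearance(endpoint)
            if margin <= 0.0:                 # would collide, reject
                continue
            heading = float(v @ goal_dir)     # progress towards the goal
            score = heading + 0.5 * margin    # assumed weighting
            if score > best_score:
                best, best_score = v, score
    return best
```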

  • 181.
    Bergholm, Fredrik
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    The Plenoscope Concept and Image Formation (2002). In: Proceedings of SSAB 2002, 2002, p. 75-78. Conference paper (Other scientific)
  • 182.
    Bergholm, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Adler, Jeremy
    Parmryd, Ingela
    Analysis of Bias in the Apparent Correlation Coefficient Between Image Pairs Corrupted by Severe Noise (2010). In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 37, no 3, p. 204-219. Article in journal (Refereed)
    Abstract [en]

    The correlation coefficient r is a measure of similarity used to compare regions of interest in image pairs. In fluorescence microscopy there is a basic tradeoff between the degree of image noise and the frequency with which images can be acquired, and therefore the ability to follow dynamic events. The correlation coefficient r is commonly used in fluorescence microscopy for colocalization measurements, when the relative distributions of two fluorophores are of interest. Unfortunately, r is known to be biased, understating the true correlation when noise is present. A better measure of correlation is needed. This article analyses the expected value of r and arrives at a procedure for evaluating the bias of r via expected-value formulas. A Taylor series of so-called invariant factors is analyzed in detail. These formulas indicate ways to correct r and thereby obtain a corrected value, free from the influence of noise, that is on average accurate (unbiased). One possible correction is the attenuated corrected correlation coefficient R, introduced heuristically by Spearman (in Am. J. Psychol. 15:72-101, 1904). An ideal correction formula in terms of expected values is derived. For large samples R tends towards the ideal correction formula and the true noise-free correlation. Correlation measurements using simulation based on the types of noise found in fluorescence microscopy images illustrate both the power of the method and the variance of R. We conclude that the correction formula is valid and is particularly useful for making correct analyses from very noisy datasets.
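    For orientation, the attenuation correction mentioned above (Spearman, 1904) is commonly written as below when two independent noisy acquisitions of each channel are available; this is the standard form rather than the paper's exact derivation.

```latex
% Spearman's correction for attenuation: the observed correlation r_{xy}
% between the noisy images x and y is divided by the geometric mean of the
% replicate correlations, which estimate how reproducible each channel is.
\[
  R \;=\; \frac{r_{xy}}{\sqrt{\, r_{x_1 x_2} \; r_{y_1 y_2} \,}}
\]
```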

  • 183.
    Berglund, Christian
    et al.
    Växjö University, Faculty of Humanities and Social Sciences, School of Education.
    Hjelm, Anna
    Växjö University, Faculty of Humanities and Social Sciences, School of Education.
    Integrering av elevers visuella intressen i skolundervisningen [Integrating pupils' visual interests into classroom teaching] (2008). Student thesis
  • 184.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Intelligent Monitoring of the Offset Printing Process (2004). In: Proceedings of the IASTED International Conference on Neural Networks and Computational Intelligence, ACTA Press, 2004, p. 173-178. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a neural-network and image-analysis based approach to assessing colour deviations in an offset printing process from direct measurements on halftone multicoloured pictures -- there are no measuring areas printed solely to assess the deviations. A committee of neural networks is trained to assess the ink proportions in a small image area. From only one measurement the trained committee is capable of estimating the actual amount of printing inks dispersed on paper in the measuring area. To match the measured image area of the printed picture with the corresponding area of the original image, when comparing the actual ink proportions with the targeted ones, properties of the 2-D Fourier transform are exploited.
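    The Fourier-based matching step can be illustrated with standard phase correlation; this is a generic sketch of the idea, not necessarily the exact property of the 2-D Fourier transform that the authors exploit.

```python
# Sketch: phase correlation to find the translation aligning the measured
# print area with the corresponding area of the original image.
import numpy as np

def phase_correlation(reference, measured):
    """Equal-sized grayscale arrays; returns the (dy, dx) shift estimate."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(measured))
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = reference.shape
    if dy > h // 2:                                   # wrap to signed offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```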

  • 185.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bacauskiene, M.
    Department of Applied Electronics, Kaunas University of Technology.
    Unsupervised colour image segmentation applied to printing quality assessment (2005). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 23, no 4, p. 417-425. Article in journal (Refereed)
    Abstract [en]

    We present an option for colour image segmentation applied to printing quality assessment in offset lithographic printing by measuring an average ink dot size in halftone pictures. The segmentation is accomplished in two stages through classification of image pixels. In the first stage, rough image segmentation is performed. The results of the first segmentation stage are then utilized to collect a balanced training data set for learning refined parameters of the decision rules. The developed software is successfully used in a printing shop to assess the ink dot size on paper and printing plates.

  • 186.
    Bergnéhr, Leo
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology.
    Segmentation and Alignment of 3-D Transaxial Myocardial Perfusion Images and Automatic Dopamin Transporter Quantification (2008). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Nuclear medical imaging such as SPECT (Single Photon Emission Computed Tomography) is an imaging modality which is readily used in many applications for measuring physiological properties of the human body. One very common type of examination using SPECT is the measurement of myocardial perfusion (blood flow in the heart tissue), which is often used to examine, e.g., a possible myocardial infarction (heart attack). In order for doctors to give a qualitative diagnosis based on these images, the images must first be segmented and rotated by a medical technologist. This is done because the heart is not positioned and oriented identically for different patients, or for the same patient at different examinations, which is an essential assumption for the doctor when examining the images. Consequently, as different technologists with different amounts of experience and expertise will rotate images differently, variability between operators arises and can often become a problem in the process of diagnosing.

    Another type of nuclear medical examination is the quantification of dopamine transporters in the basal ganglia in the brain. This is commonly done for patients showing symptoms of Parkinson's disease or similar diseases. In order to specify the severity of the disease, a scheme for calculating different fractions between parts of the dopamine transporter area is often used. This is tedious work for the person performing the quantification, and despite the acquired three-dimensional images, quantification is too often performed on one or more slices of the image volume. As with myocardial perfusion examinations, variability between different operators can here, too, present a possible source of errors.

    In this thesis, a novel method for automatically segmenting the left ventricle of the heart in SPECT images is presented. The segmentation is based on an intensity-invariant, local-phase based approach, thus removing the difficulty of the commonly varying intensity in myocardial perfusion images. Additionally, the method is used to estimate the angle of the left ventricle of the heart. Furthermore, the method is slightly adjusted, and a new approach to automatically quantifying dopamine transporters in the basal ganglia using the DaTSCAN radiotracer is proposed.

  • 187.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Interactive Perception: From Scenes to Objects (2012). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis builds on the observation that robots, like humans, do not have enough experience to handle all situations from the start. Therefore they need tools to cope with new situations, unknown scenes and unknown objects. In particular, this thesis addresses objects. How can a robot realize what objects are if it looks at a scene and has no knowledge about objects? How can it recover from situations where its hypotheses about what it sees are wrong? Even if it has built up experience in the form of learned objects, there will be situations where it will be uncertain or mistaken, and it will therefore still need the ability to correct errors. Much of our daily lives involves interactions with objects, and the same will be true for robots existing among us. Apart from being able to identify individual objects, the robot will therefore need to manipulate them.

    Throughout the thesis, different aspects of how to deal with these questions are addressed. The focus is on the problem of a robot automatically partitioning a scene into its constituting objects. It is assumed that the robot does not know about specific objects, and it is therefore considered inexperienced. Instead, a method is proposed that generates object hypotheses given visual input, and then enables the robot to recover from erroneous hypotheses. This is done by the robot drawing from a human's experience, as well as by enabling it to interact with the scene itself and monitoring whether the observed changes are in line with its current beliefs about the scene's structure.

    Furthermore, the task of object manipulation for unknown objects is explored. This is also used as a motivation for why the scene partitioning problem is essential to solve. Finally, aspects of monitoring the outcome of a manipulation are investigated by observing the evolution of flexible objects in both static and dynamic scenes. All methods that were developed for this thesis have been tested and evaluated on real robotic platforms. These evaluations show the importance of having a system capable of recovering from errors and that the robot can take advantage of human experience using just simple commands.

  • 188.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Roberson-Johnson, Matthew
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Scene Analysis (2010). Conference paper (Refereed)
  • 189.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Senoo, Taku
    Ishikawa, Masatoshi
    On-line learning of temporal state models for flexible objects (2012). In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE, 2012, p. 712-718. Conference paper (Refereed)
    Abstract [en]

    State estimation and control are intimately related processes in robot handling of flexible and articulated objects. While for rigid objects we can generate a CAD model beforehand and state estimation boils down to estimation of the pose or velocity of the object, in the case of flexible and articulated objects, such as a cloth, the representation of the object's state is heavily dependent on the task and execution. For example, when folding a cloth, the representation will mainly depend on the way the folding is executed.

  • 190.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Tokyo University.
    Senoo, Taku
    Tokyo University.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ishikawa, Masatoshi
    Tokyo University.
    State Recognition of Deformable Objects Using Shape Context (2011). In: The 29th Annual Conference of the Robotics Society of Japan, 2011. Conference paper (Other academic)
  • 191. Bernander, Karl B.
    et al.
    Gustavsson, Kenneth
    Selig, Bettina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sintorn, Ida-Maria
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Luengo Hendriks, Cris L.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Improving the stochastic watershed2013In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 34, no 9, p. 993-1000Article in journal (Refereed)
    Abstract [en]

    The stochastic watershed is an unsupervised segmentation tool recently proposed by Angulo and Jeulin. By repeated application of the seeded watershed with randomly placed markers, a probability density function for object boundaries is created. In a second step, the algorithm then generates a meaningful segmentation of the image using this probability density function. The method performs best when the image contains regions of similar size, since it tends to break up larger regions and merge smaller ones. We propose two simple modifications that greatly improve the properties of the stochastic watershed: (1) add noise to the input image at every iteration, and (2) distribute the markers using a randomly placed grid. The noise strength is a new parameter to be set, but the output of the algorithm is not very sensitive to this value. In return, the output becomes less sensitive to the two parameters of the standard algorithm. The improved algorithm does not break up larger regions, effectively making the algorithm useful for a larger class of segmentation problems.
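
    A minimal sketch of the boundary-probability step of this improved stochastic watershed, in Python with scikit-image, is given below. The function and parameter names (n_iterations, grid_step, noise_sigma) and their defaults are illustrative assumptions, not the authors' implementation, and the second step that derives the final segmentation from the probability map is omitted.

    # Sketch of the improved stochastic watershed described above:
    # (1) fresh noise is added to the input image at every iteration,
    # (2) markers are placed on a regular grid with a random offset.
    # Parameter names and defaults are illustrative assumptions.
    import numpy as np
    from skimage.segmentation import watershed, find_boundaries

    def boundary_probability(image, n_iterations=50, grid_step=20,
                             noise_sigma=0.05, rng=None):
        """Accumulate a boundary probability map by repeated seeded watersheds."""
        rng = np.random.default_rng() if rng is None else rng
        acc = np.zeros(image.shape, dtype=float)
        for _ in range(n_iterations):
            # (1) perturb the input with independent noise each iteration
            noisy = image + rng.normal(0.0, noise_sigma, size=image.shape)
            # (2) markers on a randomly offset regular grid
            off_r, off_c = rng.integers(0, grid_step, size=2)
            markers = np.zeros(image.shape, dtype=int)
            rows = np.arange(off_r, image.shape[0], grid_step)
            cols = np.arange(off_c, image.shape[1], grid_step)
            rr, cc = np.meshgrid(rows, cols, indexing="ij")
            markers[rr, cc] = np.arange(1, rr.size + 1).reshape(rr.shape)
            labels = watershed(noisy, markers)
            acc += find_boundaries(labels, mode="thick")
        return acc / n_iterations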

  • 192. Berrada, Dounia
    et al.
    Romero, Mario
    Georgia Institute of Technology, US.
    Abowd, Gregory
    Blount, Marion
    Davis, John
    Automatic Administration of the Get Up and Go Test2007In: HealthNet'07: Proceedings of the 1st ACM SIGMOBILE International Workshop on Systems and Networking Support for Healthcare and Assisted Living Environments, ACM Digital Library, 2007, p. 73-75Conference paper (Refereed)
    Abstract [en]

    In-home monitoring using sensors has the potential to improve the lives of elderly and chronically ill persons, assist their family and friends in supervising their status, and provide early warning signs to the person's clinicians. The Get Up and Go test is a clinical test used to assess the balance and gait of a patient. We propose a way to automatically apply an abbreviated version of this test to patients in their residence, using video data without body-worn sensors or markers.

  • 193.
    Bevilacqua, Fernando
    et al.
    Federal University of Fronteira Sul, Chapecó, Brazil.
    Backlund, Per
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Engström, Henrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proposal for Non-contact Analysis of Multimodal Inputs to Measure Stress Level in Serious Games2015In: VS-Games 2015: 7th International Conference on Games and Virtual Worlds for Serious Applications / [ed] Per Backlund, Henrik Engström & Fotis Liarokapis, Red Hook, NY: IEEE Computer Society, 2015, p. 171-174Conference paper (Refereed)
    Abstract [en]

    The process of monitoring user emotions in serious games or human-computer interaction is usually obtrusive. The workflow is typically based on sensors that are physically attached to the user, and sometimes those sensors disturb the user experience entirely, such as finger sensors that prevent the use of a keyboard or mouse. This short paper presents techniques used to remotely measure different signals produced by a person, e.g. heart rate, through the use of a camera and computer vision techniques. The analysis of a combination of such signals (multimodal input) can be used in a variety of applications such as emotion assessment and measurement of cognitive stress. We present a research proposal for measuring a player's stress level based on a non-contact analysis of multimodal user inputs. Our main contribution is a survey of commonly used methods to remotely measure user input signals related to stress assessment.
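
    One of the non-contact signals the abstract refers to is heart rate estimated from subtle colour variations in facial video (remote photoplethysmography). The Python sketch below shows how such a pulse estimate can be obtained from a pre-extracted trace of per-frame mean green-channel intensities of the face region; it illustrates the general technique only, not the authors' method, and all names and parameter values are assumptions.

    # Illustrative camera-based pulse estimate from a 1-D trace of per-frame
    # mean green intensities; parameters are assumptions, not from the paper.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_bpm(green_means, fps, low_hz=0.7, high_hz=3.0):
        """Estimate pulse rate (beats/minute) from mean green values sampled at fps."""
        x = np.asarray(green_means, dtype=float)
        x = x - x.mean()                               # remove the DC component
        # band-pass to the plausible heart-rate band (about 42-180 bpm)
        b, a = butter(3, [low_hz, high_hz], btype="band", fs=fps)
        x = filtfilt(b, a, x)
        # pick the dominant frequency within the band
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs >= low_hz) & (freqs <= high_hz)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]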

  • 194.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems2009In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no 3, p. 235-244Article in journal (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial to serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 195.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Germany.
    Loke, Seng
    Department of Computer Science, La Trobe University, Germany.
    Modelling Dynamic Spatial Systems in the Situation Calculus2008In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no 1-2, p. 86-130Article in journal (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.
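
    For readers unfamiliar with the situation calculus machinery the abstract builds on, a generic Reiter-style successor state axiom has the form below; the fluent $F$ and the positive and negative effect conditions $\gamma^{+}_{F}$, $\gamma^{-}_{F}$ are placeholders, not the spatial fluents axiomatised in the paper.

    % Generic successor state axiom (Reiter-style); symbols are placeholders.
    \[
      F(\vec{x}, do(a, s)) \;\equiv\; \gamma^{+}_{F}(\vec{x}, a, s) \;\lor\; \bigl( F(\vec{x}, s) \land \lnot\, \gamma^{-}_{F}(\vec{x}, a, s) \bigr)
    \]

    Read: the fluent F holds after performing action a in situation s exactly when a makes it true, or it already held and a does not make it false; reasoning tasks such as projection then follow by regression over such axioms, which is how this style of axiomatisation addresses the frame problem.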

  • 196. Bi, Yin
    et al.
    Lv, Mingsong
    Wei, Yangjie
    Guan, Nan
    Yi, Wang
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computer Systems.
    Multi-feature fusion for thermal face recognition2016In: Infrared physics & technology, ISSN 1350-4495, E-ISSN 1879-0275, Vol. 77, p. 366-374Article in journal (Refereed)
  • 197.
    Biedermann, Daniel
    et al.
    Goethe University, Germany.
    Ochs, Matthias
    Goethe University, Germany.
    Mester, Rudolf
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Goethe University, Germany.
    Evaluating visual ADAS components on the COnGRATS dataset2016In: 2016 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), IEEE , 2016, p. 986-991Conference paper (Refereed)
    Abstract [en]

    We present a framework that supports the development and evaluation of vision algorithms in the context of driver assistance applications and traffic surveillance. This framework allows the creation of highly realistic image sequences featuring traffic scenarios. The sequences are created with a realistic, state-of-the-art vehicle physics model, and different kinds of environments are featured, providing a wide range of testing scenarios. Due to the physically-based rendering technique and the variable camera models employed for the image rendering process, we can simulate different sensor setups and provide appropriate and fully accurate ground truth data.

  • 198.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Gustavsson, Tomas
    Chalmers University of Technology, Department of Signals and Systems, Gothenburg, Sweden.
    Image analysis: 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, June 29-July 2, 2003, Proceedings2003Conference proceedings (editor) (Other academic)
    Abstract [en]

    This book constitutes the refereed proceedings of the 13th Scandinavian Conference on Image Analysis, SCIA 2003, held in Halmstad, Sweden, in June/July 2003. The 148 revised full papers presented together with 6 invited contributions were carefully reviewed and selected for presentation. The papers are organized in topical sections on feature extraction, depth and surface, shape analysis, coding and representation, motion analysis, medical image processing, color analysis, texture analysis, indexing and categorization, and segmentation and spatial grouping.

  • 199.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Malmqvist, Kerstin
    Proceedings: Symposium on image analysis, Halmstad March 7-8, 20002000Conference proceedings (editor) (Other academic)
  • 200.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 20092009Conference proceedings (editor) (Other academic)