Search results 1 - 50 of 1473
  • 1. Abela, D
    et al.
    Ritchie, H
    Ababneh, D
    Gavin, C
    Nilsson, Mats F
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Pharmacy, Department of Pharmaceutical Biosciences.
    Niazi, M Khalid Khan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Carlsson, K
    Webster, WS
    The effect of drugs with ion channel-blocking activity on the early embryonic rat heart (2010). In: Birth Defects Research Part B: Developmental and Reproductive Toxicology, ISSN 1542-9733, E-ISSN 1542-9741, Vol. 89, no. 5, pp. 429-440. Article in journal (Refereed)
    Abstract [en]

    This study investigated the effects of a range of pharmaceutical drugs with ion channel-blocking activity on the heart of gestation day 13 rat embryos in vitro. The general hypothesis was that the blockade of the IKr/hERG channel, that is highly important for the normal functioning of the embryonic rat heart, would cause bradycardia and arrhythmia. Concomitant blockade of other channels was expected to modify the effects of hERG blockade. Fourteen drugs with varying degrees of specificity and affinity toward potassium, sodium, and calcium channels were tested over a range of concentrations. The rat embryos were maintained for 2 hr in culture, 1 hr to acclimatize, and 1 hr to test the effect of the drug. All the drugs caused a concentration-dependent bradycardia except nifedipine, which primarily caused a negative inotropic effect eventually stopping the heart. A number of drugs induced arrhythmias and these appeared to be related to either sodium channel blockade, which resulted in a double atrial beat for each ventricular beat, or IKr/hERG blockade, which caused irregular atrial and ventricular beats. However, it is difficult to make a precise prediction of the effect of a drug on the embryonic heart just by looking at the polypharmacological action on ion channels. The results indicate that the use of the tested drugs during pregnancy could potentially damage the embryo by causing periods of hypoxia. In general, the effects on the embryonic heart were only seen at concentrations greater than those likely to occur with normal therapeutic dosing.

  • 2. Abeywardena, D.
    et al.
    Wang, Zhan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dissanayake, G.
    Waslander, S. L.
    Kodagoda, S.
    Model-aided state estimation for quadrotor micro air vehicles amidst wind disturbances (2014). Conference paper (Refereed)
    Abstract [en]

    This paper extends the recently developed Model-Aided Visual-Inertial Fusion (MA-VIF) technique for quadrotor Micro Air Vehicles (MAV) to deal with wind disturbances. The wind effects are explicitly modelled in the quadrotor dynamic equations, excluding the unobservable wind velocity component. This is achieved through a nonlinear observability analysis of the dynamic system with wind effects. We show that, using the developed model, the vehicle pose and two components of the wind velocity vector can be simultaneously estimated with a monocular camera and an inertial measurement unit. We also show that the MA-VIF is reasonably tolerant to wind disturbances even without explicit modelling of wind effects, and explain the reasons for this behaviour. Experimental results using a Vicon motion capture system are presented to demonstrate the effectiveness of the proposed method and validate our claims.

  • 3.
    Abrate, Matteo
    et al.
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Bacciu, Clara
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Hast, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Marchetti, Andrea
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Minutoli, Salvatore
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Tesconi, Maurizio
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Geomemories - A Platform for Visualizing Historical, Environmental and Geospatial Changes of the Italian Landscape (2013). In: ISPRS International Journal of Geo-Information, Special Issue: Geospatial Monitoring and Modelling of Environmental Change, ISSN 2220-9964, Vol. 2, no. 2, pp. 432-455. Article in journal (Refereed)
    Abstract [en]

    The GeoMemories project aims at publishing on the Web and digitally preserving historical aerial photographs that are currently stored in physical form within the archives of the Aerofototeca Nazionale in Rome. We describe a system, available at http://www.geomemories.org, that lets users visualize the evolution of the Italian landscape throughout the last century. The Web portal allows comparison of recent satellite imagery with several layers of historical maps, obtained from the aerial photos through a complex workflow that merges them together. We present several case studies carried out in collaboration with geologists, historians and archaeologists, that illustrate the great potential of our system in different research fields. Experiments and advances in image processing technologies are envisaged as a key factor in solving the inherent issue of vast amounts of manual work, from georeferencing to mosaicking to analysis.

  • 4. Adinugroho, Sigit
    et al.
    Vallot, Dorothée
    Uppsala University, Disciplinary Domain of Science and Technology, Earth Sciences, Department of Earth Sciences, LUVAL.
    Westrin, Pontus
    Uppsala University, Disciplinary Domain of Science and Technology, Earth Sciences, Department of Earth Sciences.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Calving events detection and quantification from time-lapse images in Tunabreen glacier (2015). In: Proc. 9th International Conference on Information & Communication Technology and Systems, Piscataway, NJ: IEEE, 2015, pp. 61-65. Conference paper (Refereed)
  • 5.
    Aghazadeh, Omid
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Data Driven Visual Recognition (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one which models categories and one which is not category based. We are interested in data driven solutions for both kinds of problems.

    In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data driven models, we demonstrate that based on a few reference exemplars, our methods are able to detect novelties in ego-motions of people, and changes in the static environments surrounding them.

    In the category level part, we study object recognition. We consider both object category classification and localization, and propose scalable data driven approaches for both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines such as the same model initialized with less subtly designed procedures. A nonparametric large margin classifier is introduced and demonstrated to have a multitude of advantages in comparison to its competitors: better training and testing time costs, the ability to make use of indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model.

    We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performances. Based on data-describing measures which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and correlate significantly to the observed recognition performances. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate.

    We discuss various aspects of visual recognition problems: what is the interplay between representations and classification tasks, how can different models better adapt to the training data, etc. We describe and analyze the aforementioned methods that are designed to tackle different visual recognition problems, but share one common characteristic: being data driven.

  • 6.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Azizpour, Hossein
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mixture component identification and learning for visual recognition (2012). In: Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VI, Springer, 2012, pp. 115-128. Conference paper (Refereed)
    Abstract [en]

    The non-linear decision boundary between object and background classes - due to large intra-class variations - needs to be modelled by any classifier wishing to achieve good results. While a mixture of linear classifiers is capable of modelling this non-linearity, learning this mixture from weakly annotated data is non-trivial and is the paper's focus. Our approach is to identify the modes in the distribution of our positive examples by clustering, and to utilize this clustering in a latent SVM formulation to learn the mixture model. The clustering relies on a robust measure of visual similarity which suppresses uninformative clutter by using a novel representation based on the exemplar SVM. This subtle clustering of the data leads to learning better mixture models, as is demonstrated via extensive evaluations on Pascal VOC 2007. The final classifier, using a HOG representation of the global image patch, achieves performance comparable to the state-of-the-art while being more efficient at detection time.
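
    As a rough illustration of the clustering idea described above (not the authors' code), the sketch below groups positive examples from a precomputed pairwise similarity matrix, such as exemplar-classifier scores, to initialize mixture components. The similarity values, linkage choice and component count are assumed placeholders.

# Illustrative sketch: cluster positive examples from a precomputed pairwise
# similarity matrix (e.g. exemplar-classifier scores) to initialize mixture
# components. All details here are assumptions, not the paper's implementation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def init_mixture_components(similarity, n_components):
    """similarity: (n, n) symmetric matrix, larger = more similar."""
    dist = similarity.max() - similarity          # turn similarities into distances
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    tree = linkage(condensed, method="average")   # average-linkage clustering
    return fcluster(tree, t=n_components, criterion="maxclust")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(40, 16))
    sim = feats @ feats.T                         # stand-in for exemplar-SVM scores
    sim = (sim + sim.T) / 2                       # enforce symmetry
    print(init_mixture_components(sim, n_components=3))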

  • 7.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Properties of Datasets Predict the Performance of Classifiers (2013). Manuscript (preprint) (Other academic)
  • 8.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Large Scale, Large Margin Classification using Indefinite Similarity Measures. Manuscript (preprint) (Other academic)
  • 9.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Properties of Datasets Predict the Performance of Classifiers (2013). In: BMVC 2013 - Electronic Proceedings of the British Machine Vision Conference 2013, British Machine Vision Association, BMVA, 2013. Conference paper (Refereed)
    Abstract [en]

    It has been shown that the performance of classifiers depends not only on the number of training samples, but also on the quality of the training set [10, 12]. The purpose of this paper is to 1) provide quantitative measures that determine the quality of the training set and 2) provide the relation between the test performance and the proposed measures. The measures are derived from pairwise affinities between training exemplars of the positive class and they have a generative nature. We show that the performance of the state of the art methods, on the test set, can be reasonably predicted based on the values of the proposed measures on the training set. These measures open up a wide range of applications to the recognition community enabling us to analyze the behavior of the learning algorithms w.r.t the properties of the training data. This will in turn enable us to devise rules for the automatic selection of training data that maximize the quantified quality of the training set and thereby improve recognition performance.
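
    As a loose illustration of what "aggregates of pairwise affinities of the training exemplars" can look like in practice, the sketch below computes a few summary statistics from an RBF affinity matrix; both the kernel and the statistics are assumptions, not the measures defined in the paper.

# Sketch: simple dataset "quality" measures computed from pairwise affinities
# of the positive training exemplars. The RBF kernel and the chosen statistics
# are illustrative assumptions.
import numpy as np

def affinity_measures(features, gamma=0.1):
    """features: (n, d) array of positive-class training exemplars."""
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-gamma * sq_dists)                 # pairwise RBF affinities
    off_diag = affinity[~np.eye(len(features), dtype=bool)]
    return {
        "mean_affinity": off_diag.mean(),                # overall class compactness
        "std_affinity": off_diag.std(),                  # heterogeneity of the class
        "min_affinity": off_diag.min(),                  # worst-case outlier-ness
    }

# Such per-dataset measures could then be correlated with observed test
# performance, e.g. via numpy.corrcoef(measure_values, test_scores).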

  • 10.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Multi view registration for novelty/background separation (2012). In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE Computer Society, 2012, pp. 757-764. Conference paper (Refereed)
    Abstract [en]

    We propose a system for the automatic segmentation of novelties from the background in scenarios where multiple images of the same environment are available, e.g. obtained by wearable visual cameras. Our method finds the pixels in a query image corresponding to the underlying background environment by comparing it to reference images of the same scene. This is achieved despite the fact that all the images may have different viewpoints, significantly different illumination conditions, and contain different objects (cars, people, bicycles, etc.) occluding the background. We estimate the probability of each pixel in the query image belonging to the background by computing its appearance inconsistency with respect to the multiple reference images. We then produce multiple segmentations of the query image using an iterated graph cuts algorithm, initializing from these estimated probabilities, and consecutively combine these segmentations to arrive at a final segmentation of the background. Detection of the background in turn highlights the novel pixels. We demonstrate the effectiveness of our approach on a challenging outdoor data set.
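
    A minimal sketch of the per-pixel appearance-inconsistency step, assuming registered reference images and a simple colour-distance model; the paper's iterated graph-cuts refinement is replaced here by plain thresholding.

# Sketch: per-pixel background probability for a query image from its
# consistency with several registered reference images of the same scene.
# The colour-distance model and the thresholding (instead of iterated graph
# cuts) are simplifying assumptions.
import numpy as np

def background_probability(query, references, sigma=20.0):
    """query: (H, W, 3) float image; references: list of registered (H, W, 3) images."""
    dists = [np.linalg.norm(query - ref, axis=-1) for ref in references]
    best_match = np.min(dists, axis=0)        # distance to the closest reference
    return np.exp(-(best_match ** 2) / (2 * sigma ** 2))

def novelty_mask(query, references, threshold=0.5):
    # Pixels inconsistent with every reference image are flagged as novel.
    return background_probability(query, references) < threshold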

  • 11.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arsic, Dejan
    Munich University of Technology, Germany.
    Ganchev, Todor
    University of Patras, Greece.
    Linderhed, Anna
    FOI Swedish Defence Research Agency.
    Menezes, Paolo
    University of Coimbra, Portugal.
    Ntalampiras, Stavros
    University of Patras, Greece.
    Olma, Tadeusz
    MARAC S.A., Greece.
    Potamitis, Ilyas
    Technological Educational Institute of Crete, Greece.
    Ros, Julien
    Probayes SAS, France.
    Prometheus: Prediction and interpretation of human behaviour based on probabilistic structures and heterogeneous sensors (2008). Conference paper (Refereed)
    Abstract [en]

    The on-going EU funded project Prometheus (FP7-214901) aims at establishing a general framework which links fundamental sensing tasks to automated cognition processes enabling interpretation and short-term prediction of individual and collective human behaviours in unrestricted environments as well as complex human interactions. To achieve the aforementioned goals, the Prometheus consortium works on the following core scientific and technological objectives:

    1. sensor modeling and information fusion from multiple, heterogeneous perceptual modalities;

    2. modeling, localization, and tracking of multiple people;

    3. modeling, recognition, and short-term prediction of continuous complex human behavior.

  • 12.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Evaluating Template Rescaling in Short-Term Single-Object Tracking (2015). Conference paper (Refereed)
    Abstract [en]

    In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to not only find the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately – for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.

  • 13.
    Ahlberg, Jörgen
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Computer Vision Center, Universitat Autonoma de Barcelona, Bellaterra, Spain.
    Parametric Face Modeling and Tracking (2005). In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, Springer-Verlag New York, 2005, pp. 65-87. Chapter in book (Other academic)
  • 14.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Efficient active appearance model for real-time head and facial feature tracking (2003). In: Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, IEEE conference proceedings, 2003, pp. 173-180. Conference paper (Refereed)
    Abstract [en]

    We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and out-performance of the developed framework.

  • 15.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology. Div. of Sensor Technology, Swedish Defence Research Agency, Linköping, Sweden.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Face tracking for model-based coding and face animation (2003). In: International Journal of Imaging Systems and Technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 13, no. 1, pp. 8-22. Article in journal (Refereed)
    Abstract [en]

    We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.

  • 16.
    Ahlberg, Jörgen
    et al.
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Klasén, Lena
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Surveillance Systems for Urban Crisis Management (2005). Conference paper (Other academic)
    Abstract [en]

    We present a concept for combining 3D models and multiple heterogeneous sensors into a surveillance system enabling superior situation awareness. The concept has many military as well as civilian applications. A key issue is the use of a 3D environment model of the area to be surveyed, typically an urban area. In addition to the 3D model, the area of interest is monitored over time using multiple heterogeneous sensors, such as optical, acoustic, and/or seismic sensors. Data and analysis results from the sensors are visualized in the 3D model, thus putting them in a common reference frame and making their spatial and temporal relations obvious. The result is highlighted by an example where data from different sensor systems are integrated in a 3D model of a Swedish urban area.

  • 17.
    Ahlberg, Jörgen
    et al.
    Termisk Systemteknik AB, Linköping, Sweden; Visage Technologies AB, Linköping, Sweden.
    Markuš, Nenad
    Human-Oriented Technologies Laboratory, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia.
    Berg, Amanda
    Termisk Systemteknik AB, Linköping, Sweden.
    Multi-person fever screening using a thermal and a visual camera (2015). Conference paper (Other academic)
    Abstract [en]

    We propose a system to automatically measure the body temperature of persons as they pass. In contrast to existing systems, the persons do not need to stop and look into a camera one-by-one. Instead, their eye corners are automatically detected and the temperatures therein measured using a thermal camera. The system handles multiple simultaneous persons and can thus be used where a flow of people pass, such as at airport gates.
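
    A minimal sketch of the temperature read-out step, assuming the eye-corner positions have already been detected (in the paper this uses the visual camera) and that the thermal image is radiometrically calibrated in degrees Celsius.

# Sketch: read out body temperature at detected eye-corner locations in a
# calibrated thermal image. Eye-corner detection is assumed to be provided
# elsewhere; the ROI size and fever threshold are illustrative.
import numpy as np

def screen_temperatures(thermal_deg_c, eye_corners, roi=3, fever_limit=37.5):
    """thermal_deg_c: (H, W) temperatures in Celsius; eye_corners: [(row, col), ...]."""
    readings = []
    for r, c in eye_corners:
        window = thermal_deg_c[max(r - roi, 0):r + roi + 1,
                               max(c - roi, 0):c + roi + 1]
        readings.append(float(window.max()))     # hottest pixel near the eye corner
    return [(temp, temp >= fever_limit) for temp in readings]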

  • 18.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Glana Sensors AB, Sweden.
    Renhorn, Ingmar
    Glana Sensors AB, Sweden.
    Chevalier, Tomas
    Scienvisic AB, Sweden.
    Rydell, Joakim
    FOI, Swedish Defence Research Agency, Sweden.
    Bergström, David
    FOI, Swedish Defence Research Agency, Sweden.
    Three-dimensional hyperspectral imaging technique (2017). In: / [ed] Miguel Velez-Reyes; David W. Messinger, 2017, Vol. 10198, 1019805. Conference paper (Refereed)
    Abstract [en]

    Hyperspectral remote sensing based on unmanned airborne vehicles is a field increasing in importance. The combined functionality of simultaneous hyperspectral and geometric modeling is less developed. A configuration has been developed that enables the reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high frame rate, high resolution camera enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single and complete 3D hyperspectral model. In this paper, we describe the camera and illustrate capabilities and difficulties through real-world experiments.

  • 19.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar G.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Wadströmer, Niclas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    An information measure of sensor performance and its relation to the ROC curve (2010). In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen; Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, Art. nr. 7695-72. Conference paper (Refereed)
    Abstract [en]

    The ROC curve is the most frequently used performance measure for detection methods and the underlying sensor configuration. Common problems are that the ROC curve does not present a single number that can be compared to other systems, and that no discrimination between sensor performance and algorithm performance is made. To address the first problem, a number of measures are used in practice, like detection rate at a specific false alarm rate, or area-under-curve. For the second problem, we proposed in a previous paper [1] an information-theoretic method for measuring sensor performance. We now relate the method to the ROC curve, show that it is equivalent to selecting a certain point on the ROC curve, and that this point is easily determined. Our scope is hyperspectral data, studying discrimination between single pixels.
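
    For reference, a small sketch of the conventional ROC summaries mentioned above (detection rate at a fixed false-alarm rate, and area under the curve); this illustrates the baselines discussed, not the information-theoretic measure proposed in the paper.

# Sketch: ROC curve from detector scores, plus the two common scalar summaries
# discussed above. Inputs are assumed to be 1-D numpy arrays, labels in {0, 1}.
import numpy as np

def roc_curve(scores, labels):
    order = np.argsort(-scores)                   # decreasing score
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()        # detection rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false-alarm rate
    return fpr, tpr

def area_under_curve(fpr, tpr):
    return float(np.trapz(tpr, fpr))

def detection_rate_at(fpr, tpr, false_alarm_rate=0.01):
    return float(np.interp(false_alarm_rate, fpr, tpr))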

  • 20.
    Ahlman, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Improved Temporal Resolution Using Parallel Imaging in Radial-Cartesian 3D functional MRI (2011). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    MRI (Magnetic Resonance Imaging) is a medical imaging method that uses magnetic fields in order to retrieve images of the human body. This thesis revolves around a novel acquisition method of 3D fMRI (functional Magnetic Resonance Imaging) called PRESTO-CAN that uses a radial pattern in order to sample the (kx,kz)-plane of k-space (the frequency domain), and a Cartesian sample pattern in the ky-direction. The radial sample pattern allows for a denser sampling of the central parts of k-space, which contain the most basic frequency information about the structure of the recorded object. This allows for higher temporal resolution to be achieved compared with other sampling methods, since fewer total samples are needed in order to retrieve enough information about how the object has changed over time. Since fMRI is mainly used for monitoring blood flow in the brain, increased temporal resolution means that fast changes in brain activity can be tracked more efficiently.

    The temporal resolution can be further improved by reducing the time needed for scanning, which in turn can be achieved by applying parallel imaging. One such parallel imaging method is SENSE (SENSitivity Encoding). The scan time is reduced by decreasing the sampling density, which causes aliasing in the recorded images. The aliasing is removed by the SENSE method by utilizing the extra information provided by the fact that multiple receiver coils with differing sensitivities are used during the acquisition. By measuring the sensitivities of the respective receiver coils and solving an equation system with the aliased images, it is possible to calculate how they would have looked without aliasing.

    In this master thesis, SENSE has been successfully implemented in PRESTO-CAN. By using normalized convolution to refine the sensitivity maps of the receiver coils, images of satisfactory quality could be reconstructed when reducing the k-space sample rate by a factor of 2, and images of relatively good quality also when the sample rate was reduced by a factor of 4. In this way, this thesis has contributed to the improvement of the temporal resolution of the PRESTO-CAN method.
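
    A toy sketch of the SENSE unfolding step described above, for a reduction factor of 2: each pixel in the aliased image is a coil-sensitivity-weighted sum of two pixel locations half a field of view apart, so a small least-squares solve per location recovers both values. Regularization, noise correlation and the PRESTO-CAN sampling specifics are deliberately omitted.

# Toy sketch of SENSE unfolding for reduction factor R = 2. Each aliased pixel
# mixes two true pixels half a field of view apart, weighted by the coil
# sensitivities, so a per-location least-squares solve separates them.
import numpy as np

def sense_unfold_r2(aliased, sens):
    """aliased: (n_coils, H/2, W) aliased coil images;
    sens: (n_coils, H, W) measured coil sensitivity maps."""
    n_coils, h2, w = aliased.shape
    full = np.zeros((2 * h2, w), dtype=aliased.dtype)
    for y in range(h2):
        for x in range(w):
            # Columns: sensitivities at the two overlapping pixel locations.
            A = np.stack([sens[:, y, x], sens[:, y + h2, x]], axis=1)
            b = aliased[:, y, x]
            pix, *_ = np.linalg.lstsq(A, b, rcond=None)
            full[y, x], full[y + h2, x] = pix
    return full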

  • 21.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology, Örebro University, Sweden.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments (2017). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no. 3, pp. 600-621. Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.
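
    A minimal sketch of the classification step described above: training a support-vector machine on per-cell features to separate traversable from non-traversable cells. The feature set and data handling are assumptions for illustration, not the NDT-TM implementation.

# Sketch: SVM classifier for per-cell traversability from map-cell features
# (e.g. roughness, slope, permeability, reflectivity). Feature extraction and
# labelling are assumed to happen elsewhere.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_traversability_classifier(cell_features, cell_labels):
    """cell_features: (n_cells, n_features); cell_labels: 1 = traversable, 0 = not."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(cell_features, cell_labels)
    return clf

# Usage: clf.predict(new_cell_features) marks which cells of a new map are drivable.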

  • 22.
    Akan, Batu
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Planning and Sequencing Through Multimodal Interaction for Robot Programming (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of several sectors. Despite this fact, in many cases robot automation investments are considered to be technically challenging. In addition, for most small and medium-sized enterprises (SMEs) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods of industrial robots are too complex for most technicians or manufacturing engineers, and thus assistance from a robot programming expert is often needed. The hypothesis is that in order to make the use of industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis, a novel system for task-level programming is proposed. The user interacts with an industrial robot by giving instructions in a structured natural language and by selecting objects through an augmented reality interface. The proposed system consists of two parts: (i) a multimodal framework that provides a natural language interface for the user to interact with, in which the framework performs modality fusion and semantic analysis, and (ii) a symbolic planner, POPStar, to create a time-efficient plan based on the user's instructions. The ultimate goal of the work in this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague.

    This thesis mainly addresses two issues. The first issue is a general framework for designing and developing multimodal interfaces. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline. The framework also includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation. Such a framework helps us to make interaction with a robot easier and more natural. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than the programming issues of the robot.

    The second issue addressed is due to inherent characteristics of communication with the use of natural language; instructions given by a user are often vague and may require other actions to be taken before the conditions for applying the user's instructions are met. In order to solve this problem a symbolic planner, POPStar, based on a partial order planner (POP) is proposed. The system takes landmarks extracted from user instructions as input, and creates a sequence of actions to operate the robotic cell with minimal makespan. The proposed planner takes advantage of the partial order capabilities of POP to execute actions in parallel and employs a best-first search algorithm to seek the series of actions that lead to a minimal makespan. The proposed planner can also handle robots with multiple grippers, parallel machines as well as scheduling for multiple product types.

  • 23.
    Akin, H. Levent
    et al.
    Bogazici University, Turkey.
    Ito, Nobuhiro
    Aichi Institute of Technology, Japan.
    Jacoff, Adam
    National Institute of Standards, USA.
    Kleiner, Alexander
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, The Institute of Technology.
    Pellenz, Johannes
    V&R Vision & Robotics GmbH, Germany.
    Visser, Arnoud
    University of Amsterdam, Holland.
    RoboCup Rescue Robot and Simulation Leagues (2013). In: The AI Magazine, ISSN 0738-4602, Vol. 34, no. 1. Article in journal (Refereed)
    Abstract [en]

    The RoboCup Rescue Robot and Simulation competitions have been held since 2000. The experience gained during these competitions has increased the maturity level of the field, which allowed deploying robots after real disasters (e.g. Fukushima Daiichi nuclear disaster). This article provides an overview of these competitions and highlights the state of the art and the lessons learned.

  • 24.
    Alaa, Halawani
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Haibo, Li
    School of Computer Science & Communication, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Template-based Search: A Tool for Scene Analysis (2016). In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceeding, IEEE, 2016, 7515772. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a simple and yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. The template is matched to contours in a given edge image to locate the designated entity. These templates are allowed to deform in order to deal with variations in the structure's shape and size. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in cluttered and noisy scenes in the applications presented.
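
    A rough sketch of matching a rigid binary template against an edge image with a chamfer-style distance-transform score; the segment-wise template deformation and the dynamic-programming search used in the paper are omitted for brevity.

# Sketch: brute-force chamfer matching of a binary shape template against an
# edge image. The score is the mean distance from each template point to the
# nearest edge pixel; lower is better.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_match(edge_image, template):
    """edge_image: (H, W) boolean edge map; template: (h, w) boolean template mask."""
    dist = distance_transform_edt(~edge_image)    # distance to nearest edge pixel
    th, tw = template.shape
    pts = np.argwhere(template)                   # template contour points
    best_score, best_pos = np.inf, None
    for y in range(edge_image.shape[0] - th + 1):
        for x in range(edge_image.shape[1] - tw + 1):
            score = dist[pts[:, 0] + y, pts[:, 1] + x].mean()
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score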

  • 25.
    AliNazari, Mirian
    Umeå University, Faculty of Teacher Education, Department of Creative Studies.
    Kreativ Uppväxtmiljö: en studie av stadieteorier [Creative upbringing environment: a study of stage theories] (2007). Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [sv]

    This degree project studied the development of children's drawing, which was also compared with the author's own upbringing environment. The method was a literature study covering aesthetic forms of expression and creative upbringing. In addition, the author's upbringing environment was examined with respect to opportunities to practise creative ability, in relation to personal development. Comparisons were made with stage theories on the development of children's use of images. Through documentation of the author's own pictures from early years, the development of drawing through the different stages was illustrated. The conclusion is that creative ability is likely influenced by an upbringing rich in opportunities to paint and draw, something that art teachers can develop in their work with children. The need, as a future teacher, to integrate images into the theoretical subjects can develop these opportunities further.

  • 26.
    Allalou, Amin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    van de Rijke, Frans M.
    Jahangir Tafrechi, Roos
    Raap, Anton K.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Image Based Measurements of Single Cell mtDNA Mutation Load (2007). In: Image Analysis, Proceedings / [ed] Ersboll BK, Pedersen KS, 2007, pp. 631-640. Conference paper (Refereed)
    Abstract [en]

    Cell cultures as well as cells in tissue always display a certain degree of variability, and measurements based on cell averages will miss important information contained in a heterogeneous population. This paper presents automated methods for image based measurements of mitochondrial DNA (mtDNA) mutations in individual cells. The mitochondria are present in the cell’s cytoplasm, and each cytoplasm has to be delineated. Three different methods for segmentation of cytoplasms are compared and it is shown that automated cytoplasmic delineation can be performed 30 times faster than manual delineation, with an accuracy as high as 87%. The final image based measurements of mitochondrial mutation load are also compared to, and show high agreement with, measurements made using biochemical techniques.

  • 27.
    Allalou, Amin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Signal Detection in 3D by Stable Wave Signal Verification. In: Proceedings of SSBA 2009. Conference paper (Other academic)
    Abstract [en]

    Detection and localization of point-source signals is an important task in many image analysis applications. These types of signals can commonly be seen in fluorescent microscopy when studying functions of biomolecules. Visual detection and localization of point-source signals in 3D is limited and time consuming, making automated methods essential. The 3D Stable Wave Detector (3DSWD) is a new method that combines signal enhancement with a verifier/separator. The verifier/separator examines the intensity gradient around a signal, making the detection less sensitive to noise and better at separating spatially close signals. Conventional methods such as TopHat, Difference of Gaussian, and Multiscale Product consist only of signal enhancement. In this paper we compare the 3DSWD to these conventional methods with and without the addition of a verifier/separator. We can see that the 3DSWD has the highest robustness to noise among all the methods and that the other methods are improved when a verifier/separator is added.

  • 28.
    Allalou, Amin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    BlobFinder, a tool for fluorescence microscopy image cytometry (2009). In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 94, no. 1, pp. 58-65. Article in journal (Refereed)
    Abstract [en]

    Images can be acquired at high rates with modern fluorescence microscopy hardware, giving rise to a demand for high-speed analysis of image data. Digital image cytometry, i.e., automated measurements and extraction of quantitative data from images of cells, provides valuable information for many types of biomedical analysis. There exists a number of different image analysis software packages that can be programmed to perform a wide array of useful measurements. However, the multi-application capability often compromises the simplicity of the tool. Also, the gain in speed of analysis is often compromised by time spent learning complicated software. We provide a free software called BlobFinder that is intended for a limited type of application, making it easy to use, easy to learn and optimized for its particular task. BlobFinder can perform batch processing of image data and quantify as well as localize cells and point like source signals in fluorescence microscopy images, e.g., from FISH, in situ PLA and padlock probing, in a fast and easy way.

  • 29. Almansa, A.
    et al.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Fingerprint enhancement by shape adaptation of scale-space operators with automatic scale selection (2000). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 9, no. 12, pp. 2027-2042. Article in journal (Refereed)
    Abstract [en]

    This work presents two mechanisms for processing fingerprint images: shape-adapted smoothing based on second moment descriptors, and automatic scale selection based on normalized derivatives. The shape adaptation procedure adapts the smoothing operation to the local ridge structures, which allows interrupted ridges to be joined without destroying essential singularities such as branching points, and enforces continuity of their directional fields. The scale selection procedure estimates local ridge width and adapts the amount of smoothing to the local amount of noise. In addition, a ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model, and is used for spreading the results of shape adaptation into noisy areas. The combined approach makes it possible to resolve fine scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. The result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image. We propose that these general techniques should be of interest to developers of automatic fingerprint identification systems as well as in other applications involving processing of related types of imagery.
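
    A minimal sketch of the second-moment (structure tensor) descriptor that drives the shape-adapted smoothing described above, computed with Gaussian derivatives; the fixed scale parameters are illustrative stand-ins for the automatically selected scales of the paper.

# Sketch: local ridge orientation from the second-moment matrix (structure
# tensor) built from Gaussian derivatives. The returned angle is that of the
# dominant gradient direction; ridges run perpendicular to it.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_orientation(image, grad_sigma=1.0, window_sigma=4.0):
    image = np.asarray(image, dtype=float)
    ix = gaussian_filter(image, grad_sigma, order=(0, 1))   # d/dx at local scale
    iy = gaussian_filter(image, grad_sigma, order=(1, 0))   # d/dy at local scale
    # Second-moment matrix components, averaged at the integration scale.
    jxx = gaussian_filter(ix * ix, window_sigma)
    jxy = gaussian_filter(ix * iy, window_sigma)
    jyy = gaussian_filter(iy * iy, window_sigma)
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)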

  • 30. Almansa, Andrés
    et al.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Enhancement of Fingerprint Images by Shape-Adapted Scale-Space Operators (1996). In: Gaussian Scale-Space Theory. Part I: Proceedings of PhD School on Scale-Space Theory (Copenhagen, Denmark), May 1996 / [ed] J. Sporring, M. Nielsen, L. Florack, and P. Johansen, Springer Science+Business Media B.V., 1996, pp. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This work presents a novel technique for preprocessing fingerprint images. The method is based on the measurements of second moment descriptors and shape adaptation of scale-space operators with automatic scale selection (Lindeberg 1994). This procedure, which has been successfully used in the context of shape-from-texture and shape from disparity gradients, has several advantages when applied to fingerprint image enhancement, as observed by (Weickert 1995). For example, it is capable of joining interrupted ridges, and enforces continuity of their directional fields.

    In this work, these abovementioned general ideas are applied and extended in the following ways: Two methods for estimating local ridge width are explored and tuned to the problem of fingerprint enhancement. A ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model. This information is used for guiding a scale-selection mechanism, and for spreading the results of shape adaptation into noisy areas.

    The combined approach makes it possible to resolve fine scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. To a large extent, the scheme has the desirable property of joining interrupted lines without destroying essential singularities such as branching points. Thus, the result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image.

    A detailed experimental evaluation is presented, including a comparison with other techniques. We propose that the techniques presented provide mechanisms of interest to developers of automatic fingerprint identification systems.

  • 31.
    Almgren, K.M
    et al.
    STFI-Packforsk AB.
    Gamstedt, E.K.
    Department of Polymer and Fibre Technology, Royal Institute of Technology.
    Nygård, P.
    PFI Paper and Fibre Research Institute.
    Malmberg, Filip
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindström, M.
    STFI-Packforsk AB.
    Role of fibre–fibre and fibre–matrix adhesion in stress transfer in composites made from resin-impregnated paper sheets (2009). In: International Journal of Adhesion and Adhesives, ISSN 0143-7496, E-ISSN 1879-0127, Vol. 29, no. 5, pp. 551-557. Article in journal (Refereed)
    Abstract [en]

    Paper-reinforced plastics are gaining increased interest as packaging materials, where mechanical properties are of great importance. Strength and stress transfer in paper sheets are controlled by fibre–fibre bonds. In paper-reinforced plastics, where the sheet is impregnated with a polymer resin, other stress-transfer mechanisms may be more important. The influence of fibre–fibre bonds on the strength of paper-reinforced plastics was therefore investigated. Paper sheets with different degrees of fibre–fibre bonding were manufactured and used as reinforcement in a polymeric matrix. Image analysis tools were used to verify that the difference in the degree of fibre–fibre bonding had been preserved in the composite materials. Strength and stiffness of the composites were experimentally determined and showed no correlation to the degree of fibre–fibre bonding, in contrast to the behaviour of unimpregnated paper sheets. The degree of fibre–fibre bonding is therefore believed to have little importance in this type of material, where stress is mainly transferred through the fibre–matrix interface.

  • 32.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekekrantz, Johan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 5678-5685. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.

  • 33.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 34.
    Ammenberg, P.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of CASI Data - A Case Study From the Archipelago of Stockholm, Sweden (2001). In: 6th International Conference, Remote Sensing for Marine and Coastal Environments 2000, Charleston, South Carolina, 2001, 8 pages. Conference paper (Other scientific)
  • 35.
    Ammenberg, P.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of CASI data - A case study from the archipelago of Stockholm, Sweden (2000). In: 6th International Conference, Remote Sensing for Marine and Coastal Environments, Charleston, South Carolina, USA, 2000. Conference paper (Other scientific)
  • 36.
    Ammenberg, P.
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Ecology and Evolution, Limnology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Flink, P
    Lindell, T.
    Strömbeck, N.
    Bio-optical Modelling Combined with Remote Sensing to Assess Water Quality (2002). In: International Journal of Remote Sensing, ISSN 0143-1161, Vol. 23, no. 8, pp. 1621-1638. Article in journal (Refereed)
  • 37.
    Ammenberg, Petra
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindell, Tommy
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Automated change detection of bleached coral reef areas (2002). In: Proceedings of 7th International Conference, Remote Sensing for Marine and Coastal Environments, 2002. Conference paper (Other academic)
    Abstract [en]

    Recent dramatic bleaching events on coral reefs have enhanced the need for global environmental monitoring. This paper investigates the value of present high spatial resolution satellites to detect coral bleaching using a change detection technique. We compared an IRS LISS-III image taken during the 1998 bleaching event in Belize to images taken before the bleaching event. The sensitivity of the sensors was investigated and a simulation was made to estimate the effect of sub-pixel changes. A manual interpretation of coral bleaching, based on differences between the images, was performed, and the outcome was compared to field observations. The spectral characteristics of the pixels corresponding to the field observations and the manually interpreted bleached areas have been analysed and compared to pixels from unaffected areas.
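
    A minimal sketch of the underlying change-detection idea, assuming co-registered and radiometrically normalized before/after images; the band choice and threshold are illustrative, not the paper's calibrated procedure.

# Sketch: flag candidate bleaching pixels by differencing co-registered images
# from before and after the event and thresholding the increase in brightness
# (bleached coral appears brighter in the imagery).
import numpy as np

def bleaching_candidates(before, after, threshold=0.15):
    """before, after: (H, W) co-registered, normalized single-band images."""
    change = after.astype(float) - before.astype(float)
    return change > threshold       # boolean mask of strongly brightened pixels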

  • 38.
    Andersson, Adam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Range Gated Viewing with Underwater Camera2005Independent thesis Basic level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this master thesis, performed at FOI, was to evaluate a range gated underwater camera for the identification of bottom objects. The master thesis was supported by FMV within the framework of “arbetsorder Systemstöd minjakt (Jan Andersson, KC Vapen)”. The central part has been field trials, which were performed in both turbid and clear water. Conclusions about the performance of the camera system have been drawn, based on resolution and contrast measurements during the field trials. Laboratory testing has also been done to measure system-specific parameters, such as the effective gate profile and camera gate distances.

    The field trials show that images can be acquired at significantly longer distances with the tested gated camera compared to a conventional video camera. The distance at which the target can be detected is increased by a factor of 2. For images suitable for mine identification, the increase is about 1.3. However, studies of the performance of other range gated systems show that the increase in range for mine identification can be about 1.6. Gated viewing has also been compared to other technical solutions for underwater imaging.

  • 39.
    Andersson, Anna
    et al.
    Linköping University, Department of Science and Technology.
    Eklund, Klara
    Linköping University, Department of Science and Technology.
    A Study of Oriented Mottle in Halftone Print2007Independent thesis Advanced level (degree of Magister), 20 points / 30 hpStudent thesis
    Abstract [en]

    Coated solid bleached board belongs to the top segment of paperboards. One important property of paperboard is printability. In this diploma work a specific print defect, oriented mottle, has been studied in association with Iggesund Paperboard. The objectives of the work were to develop a method for analysing the dark and light areas of oriented mottle, to analyse these areas, and to clarify the effect of print-, coating- and paperboard-surface-related factors. This would clarify the origin of oriented mottle and allow prediction of oriented mottle from unprinted paperboard. The objectives were fulfilled by analysing the areas between the dark halftone dots, the amount of coating and the ink penetration, the micro roughness and the topography. The analysis of the areas between the dark halftone dots was performed on several samples and the results were compared with respect to different properties. The other methods were only applied to a limited selection of samples. The results from the study showed that the intensity differences between the dark halftone dots were enhanced in the dark areas, the coating amount was lower in the dark areas and the ink did not penetrate into the paperboard. The other results showed that areas with high transmission corresponded to dark areas, smoother micro roughness, lower coating amount and high topography. A combination of the information from these properties might be used to predict oriented mottle. Oriented mottle is probably an optical phenomenon in halftone prints, and originates from variations in the coating and other paperboard properties.

  • 40.
    Andersson, Carina
    Mälardalen University, School of Innovation, Design and Engineering.
    Informationsdesign i tillståndsövervakning: En studie av ett bildskärmsbaserat användargränssnitt för tillståndsövervakning och tillståndsbaserat underhåll2010Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This research concerns the information design and visual design of graphical user interfaces (GUI) in the condition monitoring and condition-based maintenance (CBM) of production equipment. It also concerns various communicative aspects of a GUI that is used to monitor the condition of assets. It focuses on one Swedish vendor and its intentions in designing information, as well as on the interaction between the GUI and its individual visual elements and the communication between the GUI and the users (in four Swedish paper mills).

    The research is performed as a single case study. Interviews and observations have been the main methods for data collection. Empirical data is analyzed with methods drawn from semiotics, rhetoric and narratology. Theories from information science and remediation are used to interpret the user interface design.

    The key conclusion is that there are no less than five different forms of information, all important when determining the condition of assets. These information forms include the words, images and shapes in the GUI; the machine components and peripheral equipment; the information that takes form when personnel communicate machine conditions; the personnel's subjective associations; and the information forms that relate to the personnel's actions and interactions.

    Preventive technicians interpret the GUI information individually and collectively in relation to these information forms, which influence their interpretation and understanding of the GUI information. Social media in the GUI make it possible to represent essential information that takes form when employees communicate a machine's condition. Photographs may represent information forms as a machine's components, peripherals and local environment change over time. Moreover, preventive technicians may use diagrams and photographs in the GUI to change attitudes among the personnel at the mills and convince them, for example, of a machine's condition or of the effectiveness of CBM as a maintenance policy.

  • 41.
    Andersson, Christian
    Linköping University, Department of Electrical Engineering.
    Simulering av filtrerade skärmfärger2005Independent thesis Basic level (professional degree), 20 points / 30 hpStudent thesis
    Abstract [en]

    This report presents a working model for simulating what happens to colors displayed on screens when they are observed through optical filters. The results of the model can be used to visually simulate, on one screen, another screen with an applied optical filter. The model can also produce CIE color difference values for the simulated screen colors. The model is data driven and requires spectral measurements of at least the screen to be simulated and the physical filters that will be used. The model is divided into three separate modules, or steps, where each module can easily be replaced by an alternative implementation or solution. Tests show that the model can be used for prototyping of optical filters, even though the tests of the specific algorithms chosen show that there is room for improvement in quality. Nothing indicates that future work on this model could not produce better-quality results.
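
    The data-driven spectral pipeline described above can be sketched roughly as follows: multiply the measured screen spectrum by the filter's spectral transmittance, integrate against colour matching functions to obtain CIE XYZ, and convert to L*a*b* to get a CIE76 colour difference. The Gaussian stand-ins for the colour matching functions and the synthetic spectra below are crude illustrative assumptions; a real implementation would use tabulated CIE data and the measured spectra.

        import numpy as np

        wl = np.arange(380.0, 781.0, 5.0)                      # wavelength grid in nm

        def gauss(mu, sigma):
            return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

        # Crude single-lobe stand-ins for the CIE 1931 colour matching functions.
        xbar, ybar, zbar = 1.06 * gauss(599, 38), gauss(556, 47), 1.78 * gauss(446, 22)

        def spd_to_xyz(spd, white_spd):
            """Integrate a spectral power distribution against the CMFs,
            normalised so the reference white has Y = 100."""
            k = 100.0 / np.trapz(white_spd * ybar, wl)
            return k * np.array([np.trapz(spd * xbar, wl),
                                 np.trapz(spd * ybar, wl),
                                 np.trapz(spd * zbar, wl)])

        def xyz_to_lab(xyz, white_xyz):
            """CIE 1976 L*a*b* with the usual cube-root / linear split."""
            d = 6.0 / 29.0
            t = xyz / white_xyz
            f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
            return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

        # Hypothetical measurements: a broadband screen white and a filter that
        # attenuates short wavelengths (e.g. a yellowish protective filter).
        screen_white = 1.0 + 0.2 * gauss(450, 30)              # arbitrary screen spectrum
        filter_t = 1.0 / (1.0 + np.exp(-(wl - 480) / 15))      # smooth long-pass transmittance

        filtered = screen_white * filter_t                     # wavelength-by-wavelength attenuation
        white_xyz = spd_to_xyz(screen_white, screen_white)
        lab_orig = xyz_to_lab(white_xyz, white_xyz)
        lab_filt = xyz_to_lab(spd_to_xyz(filtered, screen_white), white_xyz)
        print("dE*ab:", np.linalg.norm(lab_orig - lab_filt))   # CIE76 colour difference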

  • 42. Andersson, Jan-Olov
    et al.
    Hasselid, Sara
    Widen, Per
    Bax, Gerhard
    Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences. Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences, Environment and Landscape Dynamics. ELD.
    Is the Snow Leopard (Uncia uncia) endangered?: A study of population viability and distribution using vulnerability and GIS analysis methods2004In: Proceedings of the 7th International Symposium on High Mountain Remote Sensing Cartography, 2004, 224- p.Conference paper (Refereed)
  • 43.
    Andersson, Jonathan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Radiology, Oncology and Radiation Science, Radiology.
    Methods for automatic analysis of glucose uptake in adipose tissue using quantitative PET/MRI data2014Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Brown adipose tissue (BAT) is the main tissue involved in non-shivering heat production. A greater understanding of BAT could possibly lead to new ways of preventing and treating obesity and type 2 diabetes. The increasing prevalence of these conditions, and the problems they cause for society and individuals, makes this an important subject of study.

    An ongoing study performed at the Turku University Hospital uses images acquired using PET/MRI with 18F-FDG as the tracer. Scans are performed on sedentary and athletic subjects at normal room temperature and during cold stimulation. Sedentary subjects then undergo scanning during cold stimulation again after a six-week exercise training intervention. This degree project used images from this study.

    The objective of this degree project was to examine methods to automatically and objectively quantify parameters relevant for activation of BAT in combined PET/MRI data. A secondary goal was to create images showing glucose uptake changes in subjects from images taken at different times.

    Parameters were quantified in adipose tissue directly without registration (image matching), and for neck scans also after registration. Results for the first three subjects who have completed the study are presented. Larger registration errors were encountered near moving organs and in regions with less information.

    The creation of images showing changes in glucose uptake seems to work well for the neck scans, and reasonably well for other sub-volumes. These images can be useful for identification of BAT. Examples of these images are shown in the report.
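
    As a rough sketch of the change-image idea, assuming the PET volumes have already been registered and converted to standardised uptake values (SUV), a voxel-wise difference restricted to an adipose-tissue mask might look like the following; all array names and values are hypothetical.

        import numpy as np

        def uptake_change(suv_warm, suv_cold, adipose_mask):
            """Voxel-wise change in tracer uptake between two PET volumes that are
            assumed to be already registered into the same space. Only voxels in
            the adipose-tissue mask (e.g. derived from the MRI fat images) count."""
            change = np.where(adipose_mask, suv_cold - suv_warm, 0.0)
            mean_increase = change[adipose_mask].mean() if adipose_mask.any() else 0.0
            return change, mean_increase

        # Toy volumes: uptake rises in a small "BAT-like" region during cold exposure.
        warm = np.full((40, 40, 40), 0.5)
        cold = warm.copy(); cold[10:15, 10:15, 10:15] += 1.5
        mask = np.zeros_like(warm, bool); mask[5:20, 5:20, 5:20] = True
        change_map, mean_inc = uptake_change(warm, cold, mask)
        print(round(mean_inc, 3))          # positive mean uptake increase inside the mask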

  • 44.
    Andersson, Maria
    et al.
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ntalampiras, Stavros
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Ganchev, Todor
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Rydell, Joakim
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Fakotakis, Nikos
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Fusion of Acoustic and Optical Sensor Data for Automatic Fight Detection in Urban Environments2010In: Information Fusion (FUSION), 2010 13th Conference on, IEEE conference proceedings, 2010, 1-8 p.Conference paper (Refereed)
    Abstract [en]

    We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, when only evidence from one camera is used for detecting the fights, the recognition performance is poor.
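
    A minimal sketch of a second, fusion stage, assuming each first-stage detector already outputs a per-frame score in [0, 1]; the convex combination and fixed threshold below are illustrative stand-ins for the paper's fusion scheme, and all names are assumptions.

        import numpy as np

        def fuse_detections(audio_scores, video_scores, w_audio=0.5, threshold=0.6):
            """Second-stage fusion of per-frame scores from an audio 'aggression'
            detector and a video 'fight' detector, assumed synchronised frame by frame."""
            fused = w_audio * np.asarray(audio_scores) + (1 - w_audio) * np.asarray(video_scores)
            return fused >= threshold               # boolean per-frame fight decision

        audio = np.array([0.2, 0.7, 0.9, 0.3])
        video = np.array([0.1, 0.8, 0.8, 0.2])
        print(fuse_detections(audio, video))        # [False  True  True False]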

  • 45.
    Andersson, Maria
    et al.
    FOI Swedish Defence Research Agency.
    Rydell, Joakim
    FOI Swedish Defence Research Agency.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. FOI Swedish Defence Research Agency.
    Estimation of crowd behaviour using sensor networks and sensor fusion2009Conference paper (Refereed)
    Abstract [en]

    Today, surveillance operators commonly monitor a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is the rule rather than the exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a hidden Markov model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection.
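
    A small sketch of the deviation-detection idea, using the third-party hmmlearn package purely as a stand-in (the abstract does not specify an implementation): a Gaussian HMM is fitted to features extracted from normal behaviour, and observation windows with unusually low likelihood are flagged. The feature layout, state count and threshold rule are all assumptions.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # third-party library, used here for illustration

        def train_normal_model(normal_features, n_states=3):
            """Fit a Gaussian HMM to feature sequences from 'normal' crowd behaviour
            (e.g. per-frame person counts and motion energy from the detectors)."""
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(normal_features)               # shape (n_frames, n_features)
            return model

        def is_abnormal(model, window, threshold):
            """Flag a window whose average per-frame log-likelihood under the
            'normal' model falls below a threshold chosen on validation data."""
            return model.score(window) / len(window) < threshold

        # Toy usage with synthetic 2D features (e.g. crowd density, motion magnitude)
        rng = np.random.default_rng(1)
        normal = rng.normal([5.0, 1.0], 0.5, size=(500, 2))
        model = train_normal_model(normal)
        calm = rng.normal([5.0, 1.0], 0.5, size=(30, 2))
        agitated = rng.normal([9.0, 4.0], 0.5, size=(30, 2))
        thr = model.score(normal) / len(normal) - 3.0    # crude threshold below typical likelihood
        print(is_abnormal(model, calm, thr), is_abnormal(model, agitated, thr))  # typically: False True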

  • 46.
    Andersson, Olov
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Heintz, Fredrik
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization2015In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) / [ed] Blai Bonet and Sven Koenig, AAAI Press, 2015, 2497-2503 p.Conference paper (Refereed)
    Abstract [en]

    Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, time and resource costs for learning with a real robot as well as constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included and objectives can also be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart pole domain and a challenging quadcopter navigation task using real data.
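
    A much-simplified sketch of planning on learned models under constraints: a Gaussian-process dynamics model is fitted to logged transitions, and each action is chosen by solving a small constrained optimisation over the predicted next state. A single-step lookahead with scikit-learn GPs and SciPy's SLSQP stands in for the sparse-GP, receding-horizon formulation of the paper; all names, the constraint and the toy system are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from scipy.optimize import minimize

        def fit_dynamics(states, actions, next_states):
            """Learn a one-step model  s' = s + f(s, a)  with one GP per state dimension."""
            X = np.hstack([states, actions])
            Y = next_states - states
            return [GaussianProcessRegressor().fit(X, Y[:, d]) for d in range(Y.shape[1])]

        def predict_next(models, s, a):
            x = np.hstack([s, a]).reshape(1, -1)
            return s + np.array([m.predict(x)[0] for m in models])

        def choose_action(models, s, goal, a_bounds, limit=1.5):
            """Pick the action whose predicted next state is closest to the goal while
            keeping the first state dimension below a hard limit (single-step lookahead)."""
            def cost(a):
                return np.sum((predict_next(models, s, a) - goal) ** 2)
            cons = [{"type": "ineq", "fun": lambda a: limit - predict_next(models, s, a)[0]}]
            res = minimize(cost, x0=np.zeros(len(a_bounds)), bounds=a_bounds,
                           constraints=cons, method="SLSQP")
            return res.x

        # Toy usage: 1D point mass, action = velocity command, learned from random transitions.
        rng = np.random.default_rng(0)
        s = rng.uniform(0, 1, (200, 1)); a = rng.uniform(-1, 1, (200, 1))
        s_next = s + 0.1 * a + rng.normal(0, 0.01, s.shape)
        models = fit_dynamics(s, a, s_next)
        print(choose_action(models, np.array([0.5]), np.array([2.0]), [(-1.0, 1.0)]))  # ~[1.0]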

  • 47.
    Andersson, Olov
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems.
    Deep Learning Quadcopter Control via Risk-Aware Active Learning2017In: Thirty-First AAAI Conference on Artificial Intelligence (AAAI), San Francisco, February 4–9, 2017Conference paper (Refereed)
  • 48.
    Andersson, Robert
    Linköping University, Department of Electrical Engineering.
    A calibration method for laser-triangulating 3D cameras2008Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera, as well as certain properties of the camera, is known, it is possible to calculate the coordinates of all points along the profile of the object. If either the object or the camera and laser has a known motion, it is possible to combine several measurements to get a three-dimensional view of the object.

    Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method that has the advantages that the objects needed are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is given a thorough description. Several mathematical derivations have also been added as appendices for completeness.

    The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
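
    The triangulation step that such a calibration makes possible can be sketched as follows, assuming the camera intrinsics and the laser plane (expressed in the camera frame) have already been estimated; the numeric values below are made up for illustration.

        import numpy as np

        def triangulate_profile(pixels, K, plane_n, plane_d):
            """Back-project image detections of the laser line and intersect the
            viewing rays with the laser plane (all in the camera coordinate frame).

            pixels  : (N, 2) array of (u, v) laser-line detections
            K       : 3x3 camera intrinsic matrix
            plane_n : unit normal of the laser plane, plane_d : offset, such that
                      the plane satisfies  plane_n . X = plane_d
            Returns the (N, 3) 3D points on the object profile."""
            uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
            rays = (np.linalg.inv(K) @ uv1.T).T          # one viewing ray per pixel
            t = plane_d / (rays @ plane_n)               # ray parameter at the plane
            return rays * t[:, None]

        # Toy usage with a hypothetical calibration (values chosen for illustration)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        n = np.array([0.0, -0.707, 0.707])               # laser plane tilted 45 degrees
        pts = triangulate_profile(np.array([[320.0, 240.0], [400.0, 250.0]]), K, n, 0.35)
        print(pts)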

  • 49.
    Anliot, Manne
    Linköping University, Department of Electrical Engineering.
    Volume Estimation of Airbags: A Visual Hull Approach2005Independent thesis Basic level (professional degree), 20 points / 30 hpStudent thesis
    Abstract [en]

    This thesis presents a complete and fully automatic method for estimating the volume of an airbag, through all stages of its inflation, with multiple synchronized high-speed cameras.

    Using recorded contours of the inflating airbag, its visual hull is reconstructed with a novel method: The intersections of all back-projected contours are first identified with an accelerated epipolar algorithm. These intersections, together with additional points sampled from concave surface regions of the visual hull, are then Delaunay triangulated to a connected set of tetrahedra. Finally, the visual hull is extracted by carving away the tetrahedra that are classified as inconsistent with the contours, according to a voting procedure.

    The volume of an airbag's visual hull is always larger than the airbag's real volume. By projecting a known synthetic model of the airbag into the cameras, this volume offset is computed, and an accurate estimate of the real airbag volume is extracted.

    Even though volume estimates can be computed for all camera setups, the cameras should be specially posed to achieve optimal results. Such poses are uniquely found for different airbag models with a separate, fully automatic, simulated annealing algorithm.

    Satisfactory results are presented for both synthetic and real-world data.
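
    A minimal sketch of one ingredient of the method, computing a volume as the sum of Delaunay tetrahedra over a 3D point set; the contour-consistency voting that carves away inconsistent tetrahedra is omitted, and the keep_mask argument is just a placeholder for it.

        import numpy as np
        from scipy.spatial import ConvexHull, Delaunay

        def tetra_volume_sum(points, keep_mask=None):
            """Delaunay-triangulate a 3D point set and sum the volumes of the kept
            tetrahedra (scalar triple product / 6 per tetrahedron)."""
            tri = Delaunay(points)
            a, b, c, d = (points[tri.simplices[:, i]] for i in range(4))
            vols = np.abs(np.einsum("ij,ij->i", np.cross(b - a, c - a), d - a)) / 6.0
            if keep_mask is not None:
                vols = vols[keep_mask]
            return vols.sum()

        # Sanity check: summing all tetrahedra reproduces the convex hull volume.
        rng = np.random.default_rng(0)
        pts = rng.random((200, 3))
        print(tetra_volume_sum(pts), ConvexHull(pts).volume)   # the two values agree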

  • 50. Arcelli, Carlo
    et al.
    Sanniti di Baja, Gabriella
    Svensson, Stina
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computing and analysing convex deficiencies to characterise 3D complex objects2005In: Image and Vision Computing: Discrete Geometry for Computer Imagery, Vol. 23, no 2, 203-211 p.Article in journal (Refereed)
    Abstract [en]

    Entities such as object components, cavities, tunnels and concavities in 3D digital images can be useful in the framework of object analysis. For each object component, we first identify its convex deficiencies, by subtracting the object component from a covering polyhedron approximating the convex hull. Watershed segmentation is then used to decompose complex convex deficiencies into simpler parts, corresponding to individual cavities, concavities and tunnels of the object component. These entities are finally described by means of a representation system accounting for the shape features characterising them.
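
    A voxel-based sketch of the first step described above, extracting the convex deficiency as the set difference between (an approximation of) the convex hull and the object; the watershed decomposition and the shape description stages are omitted, and function and variable names are illustrative.

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy import ndimage

        def convex_deficiencies(obj):
            """Voxels inside the convex hull of a 3D binary object but not in the
            object itself, labelled into connected components."""
            coords = np.argwhere(obj)
            hull = Delaunay(coords)                            # tetrahedralised hull interior
            grid = np.argwhere(np.ones(obj.shape, bool))       # every voxel coordinate
            hull_mask = (hull.find_simplex(grid) >= 0).reshape(obj.shape)
            deficiency = hull_mask & ~obj
            labels, n = ndimage.label(deficiency)
            return labels, n

        # Toy example: a solid cube with a notch carved out of one face.
        obj = np.zeros((16, 16, 16), bool)
        obj[2:14, 2:14, 2:14] = True
        obj[6:10, 6:10, 8:14] = False          # notch opening to the surface -> a concavity
        labels, n = convex_deficiencies(obj)
        print(n)                               # one deficiency component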
