Search results 51-100 of 1956
  • 51. Almansa, A.
    et al.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Fingerprint enhancement by shape adaptation of scale-space operators with automatic scale selection (2000). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 9, no 12, p. 2027-2042. Article in journal (Refereed)
    Abstract [en]

    This work presents two mechanisms for processing fingerprint images: shape-adapted smoothing based on second moment descriptors, and automatic scale selection based on normalized derivatives. The shape adaptation procedure adapts the smoothing operation to the local ridge structures, which allows interrupted ridges to be joined without destroying essential singularities such as branching points, and enforces continuity of their directional fields. The scale selection procedure estimates local ridge width and adapts the amount of smoothing to the local amount of noise. In addition, a ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model, and is used for spreading the results of shape adaptation into noisy areas. The combined approach makes it possible to resolve fine scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. The result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image. We propose that these general techniques should be of interest to developers of automatic fingerprint identification systems as well as in other applications of processing related types of imagery.
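
    As an illustration of the automatic scale selection mentioned in the abstract above, the sketch below picks, per pixel, the scale that maximizes a gamma-normalized Laplacian response over a discrete set of scales. It is a minimal illustration of the general principle, not the authors' implementation; the scale set and the gamma value are assumptions chosen for the example.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def select_scales(image, sigmas=(1, 2, 4, 8, 16), gamma=0.75):
            """Per-pixel scale selection via the gamma-normalized Laplacian.

            Illustrative sketch; 'sigmas' and 'gamma' are example values,
            not parameters taken from the cited paper.
            """
            image = image.astype(float)
            responses = []
            for s in sigmas:
                # Scale parameter t = s**2; normalize |Laplacian| by t**gamma.
                responses.append((s ** (2 * gamma)) * np.abs(gaussian_laplace(image, sigma=s)))
            best = np.argmax(np.stack(responses), axis=0)   # strongest response per pixel
            return np.asarray(sigmas)[best]                 # selected sigma per pixel

        # Example: scale_map = select_scales(fingerprint)    # 'fingerprint' is an assumed 2-D array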

  • 52. Almansa, Andrés
    et al.
    Lindeberg, Tony
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Enhancement of Fingerprint Images by Shape-Adapted Scale-Space Operators (1996). In: Gaussian Scale-Space Theory. Part I: Proceedings of PhD School on Scale-Space Theory (Copenhagen, Denmark), May 1996 / [ed] J. Sporring, M. Nielsen, L. Florack, and P. Johansen, Springer Science+Business Media B.V., 1996, p. 21-30. Chapter in book (Refereed)
    Abstract [en]

    This work presents a novel technique for preprocessing fingerprint images. The method is based on the measurement of second moment descriptors and shape adaptation of scale-space operators with automatic scale selection (Lindeberg 1994). This procedure, which has been successfully used in the context of shape-from-texture and shape from disparity gradients, has several advantages when applied to fingerprint image enhancement, as observed by Weickert (1995). For example, it is capable of joining interrupted ridges, and enforces continuity of their directional fields.

    In this work, the abovementioned general ideas are applied and extended in the following ways: Two methods for estimating local ridge width are explored and tuned to the problem of fingerprint enhancement. A ridgeness measure is defined, which reflects how well the local image structure agrees with a qualitative ridge model. This information is used for guiding a scale-selection mechanism, and for spreading the results of shape adaptation into noisy areas.

    The combined approach makes it possible to resolve fine scale structures in clear areas while reducing the risk of enhancing noise in blurred or fragmented areas. To a large extent, the scheme has the desirable property of joining interrupted lines without destroying essential singularities such as branching points. Thus, the result is a reliable and adaptively detailed estimate of the ridge orientation field and ridge width, as well as a smoothed grey-level version of the input image.

    A detailed experimental evaluation is presented, including a comparison with other techniques. We propose that the techniques presented provide mechanisms of interest to developers of automatic fingerprint identification systems.

  • 53.
    Almgren, K.M
    et al.
    STFI-Packforsk AB.
    Gamstedt, E.K.
    Department of Polymer and Fibre Technology, Royal Institute of Technology .
    Nygård, P.
    PFI Paper and Fibre Research Institute.
    Malmberg, Filip
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindström, M.
    STFI-Packforsk AB.
    Role of fibre–fibre and fibre–matrix adhesion in stress transfer in composites made from resin-impregnated paper sheets (2009). In: International Journal of Adhesion and Adhesives, ISSN 0143-7496, E-ISSN 1879-0127, Vol. 29, no 5, p. 551-557. Article in journal (Refereed)
    Abstract [en]

    Paper-reinforced plastics are gaining increased interest as packaging materials, where mechanical properties are of great importance. Strength and stress transfer in paper sheets are controlled by fibre–fibre bonds. In paper-reinforced plastics, where the sheet is impregnated with a polymer resin, other stress-transfer mechanisms may be more important. The influence of fibre–fibre bonds on the strength of paper-reinforced plastics was therefore investigated. Paper sheets with different degrees of fibre–fibre bonding were manufactured and used as reinforcement in a polymeric matrix. Image analysis tools were used to verify that the difference in the degree of fibre–fibre bonding had been preserved in the composite materials. Strength and stiffness of the composites were experimentally determined and showed no correlation to the degree of fibre–fibre bonding, in contrast to the behaviour of unimpregnated paper sheets. The degree of fibre–fibre bonding is therefore believed to have little importance in this type of material, where stress is mainly transferred through the fibre–matrix interface.

  • 54.
    Almqvist, Håkan
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Kucner, Tomasz Piotr
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Learning to detect misaligned point clouds (2018). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 35, no 5, p. 662-677. Article in journal (Refereed)
    Abstract [en]

    Matching and merging overlapping point clouds is a common procedure in many applications, including mobile robotics, three-dimensional mapping, and object visualization. However, fully automatic point-cloud matching, without manual verification, is still not possible, because no existing matching algorithm offers a reliable method for detecting misaligned point clouds. In this article, we make a comparative evaluation of geometric consistency methods for classifying aligned and nonaligned point-cloud pairs. We also propose a method that combines the results of the evaluated methods to further improve the classification of the point clouds. We compare a range of methods on two data sets from different environments related to mobile robotics and mapping. The results show that methods based on a Normal Distributions Transform representation of the point clouds perform best under the circumstances presented herein.
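
    To make the notion of geometric consistency concrete, the sketch below scores a candidate alignment by the fraction of points in one cloud that have a close neighbour in the other. This is a simple stand-in for the consistency measures compared in the article, not their NDT-based classifier; the distance and decision thresholds are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def overlap_consistency(cloud_a, cloud_b, inlier_dist=0.2):
            """Crude alignment check between two point clouds (N x 3 arrays).

            Returns the fraction of points in cloud_a with a neighbour in cloud_b
            closer than 'inlier_dist' (an assumed threshold in metres).
            """
            d, _ = cKDTree(cloud_b).query(cloud_a, k=1)
            return float(np.mean(d < inlier_dist))

        # Example decision rule (threshold is an assumption):
        # aligned = overlap_consistency(scan1, scan2) > 0.8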

  • 55.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eye Detection by Complex Filtering for Periocular Recognition (2014). In: 2nd International Workshop on Biometrics and Forensics (IWBF 2014): Valletta, Malta (27-28 March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a webcam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. ©2014 IEEE.
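
    A rough sketch of detecting circular symmetry (such as the pupil and iris boundaries) from orientation information is given below. It correlates the double-angle orientation image with a circular-symmetry template; this illustrates the idea of symmetry filtering but is not the exact separable filter of the paper, and the window size is an assumed parameter.

        import numpy as np
        from scipy.signal import fftconvolve

        def circular_symmetry_response(image, radius=15):
            """Response map that peaks where edges form concentric circles (e.g. an eye)."""
            gy, gx = np.gradient(image.astype(float))
            z = (gx + 1j * gy) ** 2                        # double-angle orientation image

            r = np.arange(-radius, radius + 1)
            X, Y = np.meshgrid(r, r)
            rho = np.hypot(X, Y) + 1e-9
            # Template whose double-angle orientation matches a circular pattern,
            # weighted by a Gaussian window (window width is an assumption).
            template = ((X + 1j * Y) / rho) ** 2 * np.exp(-(rho ** 2) / (2 * (radius / 2) ** 2))

            resp = fftconvolve(z, np.conj(template), mode="same")
            return resp.real                               # large positive values at circle centres

        # Example: center = np.unravel_index(np.argmax(circular_symmetry_response(eye_img)), eye_img.shape)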

  • 56.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions (2016). In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings, 3-4 March 2016, Limassol, Cyprus, Piscataway, NJ: IEEE, 2016, article id 7449688. Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 57.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study (2018). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE Computer Society, 2018, p. 536-541. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, comprising spatial-temporal information that is often not available.

  • 58.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study (2018). In: Proceedings. The 14th International Conference on Signal Image Technology & Internet Based Systems: SITIS 2018 / [ed] Di Baja, G. S., Gallo, L., Yetongnon, K., Dipanda, A., Castrillón-Santana, M., Chbeir, R., Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 536-541. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, comprising spatial-temporal information that is often not available. ©2018 IEEE

  • 59.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion (2016). In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208. Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER to below 5% for down-sampling factors up to an image size of only 13×13.
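
    The eigen-patch idea referred to above can be sketched as follows: reconstruction weights for a low-resolution patch are estimated against a set of collocated low-resolution training patches and then transferred to the corresponding high-resolution patches. This is a simplified sketch of the general eigen-transformation principle under assumed conventions (patches flattened to vectors, a ridge term added for stability), not the exact algorithm evaluated in the paper.

        import numpy as np

        def eigenpatch_sr(lr_patch, lr_train, hr_train, reg=1e-3):
            """Reconstruct a high-res patch from a low-res one via eigen-transformation.

            lr_train, hr_train: (n_patches, d_lr) and (n_patches, d_hr) arrays of
            collocated training patches; 'reg' is an assumed regularization weight.
            """
            mu_l, mu_h = lr_train.mean(axis=0), hr_train.mean(axis=0)
            L = (lr_train - mu_l).T                 # d_lr x n
            H = (hr_train - mu_h).T                 # d_hr x n
            # Ridge-regularized weights expressing the input patch in the LR training set.
            G = L.T @ L + reg * np.eye(L.shape[1])
            w = np.linalg.solve(G, L.T @ (lr_patch - mu_l))
            return H @ w + mu_h                     # transfer the same weights to HR space

        # hr_estimate = eigenpatch_sr(lr_vec, lr_patches, hr_patches)   # inputs are assumed arrays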

  • 60.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Features (2017). Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as the single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided.

  • 61.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Unsupervised construction of 4D semantic maps in a long-term autonomy scenario (2017). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Robots are operating for longer times and collecting much more data than just a few years ago. In this setting we are interested in exploring ways of modeling the environment, segmenting out areas of interest and keeping track of the segmentations over time, with the purpose of building 4D models (i.e. space and time) of the relevant parts of the environment.

    Our approach relies on repeatedly observing the environment and creating local maps at specific locations. The first question we address is how to choose where to build these local maps. Traditionally, an operator defines a set of waypoints on a pre-built map of the environment which the robot visits autonomously. Instead, we propose a method to automatically extract semantically meaningful regions from a point cloud representation of the environment. The resulting segmentation is purely geometric, and in the context of mobile robots operating in human environments, the semantic label associated with each segment (i.e. kitchen, office) can be of interest for a variety of applications. We therefore also look at how to obtain per-pixel semantic labels given the geometric segmentation, by fusing probabilistic distributions over scene and object types in a Conditional Random Field.

    For most robotic systems, the elements of interest in the environment are the ones which exhibit some dynamic properties (such as people, chairs, cups, etc.), and the ability to detect and segment such elements provides a very useful initial segmentation of the scene. We propose a method to iteratively build a static map from observations of the same scene acquired at different points in time. Dynamic elements are obtained by computing the difference between the static map and new observations. We address the problem of clustering together dynamic elements which correspond to the same physical object, observed at different points in time and in significantly different circumstances. To address some of the inherent limitations in the sensors used, we autonomously plan, navigate around and obtain additional views of the segmented dynamic elements. We look at methods of fusing the additional data and we show that both a combined point cloud model and a fused mesh representation can be used to more robustly recognize the dynamic object in future observations. In the case of the mesh representation, we also show how a Convolutional Neural Network can be trained for recognition by using mesh renderings.

    Finally, we present a number of methods to analyse the data acquired by the mobile robot autonomously and over extended time periods. First, we look at how the dynamic segmentations can be used to derive a probabilistic prior which can be used in the mapping process to further improve and reinforce the segmentation accuracy. We also investigate how to leverage spatial-temporal constraints in order to cluster dynamic elements observed at different points in time and under different circumstances. We show that by making a few simple assumptions we can increase the clustering accuracy even when the object appearance varies significantly between observations. The result of the clustering is a spatial-temporal footprint of the dynamic object, defining an area where the object is likely to be observed spatially as well as a set of time stamps corresponding to when the object was previously observed. Using this data, predictive models can be created and used to infer future times when the object is more likely to be observed. In an object search scenario, this model can be used to decrease the search time when looking for specific objects.

  • 62.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Autonomous meshing, texturing and recognition of object models with a mobile robot (2017). Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

  • 63.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Claici, Sebastian
    Wendt, Axel
    Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 2, p. 749-756. Article in journal (Refereed)
    Abstract [en]

    We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.

  • 64.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekekrantz, Johan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, p. 5678-5685. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.
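
    A minimal sketch of the change-detection step described above (segmenting dynamic parts by differencing a new observation against a static map) is given below. The distance threshold is an assumption, and the actual Meta-Room differencing and clustering pipeline is considerably more elaborate.

        import numpy as np
        from scipy.spatial import cKDTree

        def dynamic_points(static_map, observation, change_dist=0.05):
            """Label points of a new observation that are absent from the static map.

            Both inputs are N x 3 arrays in a common frame; 'change_dist' (metres)
            is an assumed threshold.
            """
            d, _ = cKDTree(static_map).query(observation, k=1)
            mask = d > change_dist
            return observation[mask], mask

        # dynamic, mask = dynamic_points(static_cloud, new_scan)   # assumed point-cloud arrays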

  • 65.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 66.
    Amigoni, Francesco
    et al.
    Politecnico di Milano, Milan, Italy.
    Yu, Wonpil
    Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea.
    Andre, Torsten
    University of Klagenfurt, Klagenfurt, Austria.
    Holz, Dirk
    University of Bonn, Bonn, Germany.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Matteucci, Matteo
    Politecnico di Milano, Milan, Italy.
    Moon, Hyungpil
    Sungkyunkwan University, Suwon, South Korea.
    Yokozuka, Masashi
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Biggs, Geoffrey
    Nat. Inst. of Advanced Industrial Science and Technology, Tsukuba, Japan.
    Madhavan, Raj
    Amrita University, Clarksburg MD, United States of America.
    A Standard for Map Data Representation: IEEE 1873-2015 Facilitates Interoperability Between Robots (2018). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 25, no 1, p. 65-76. Article in journal (Refereed)
    Abstract [en]

    The availability of environment maps for autonomous robots enables them to complete several tasks. A new IEEE standard, IEEE 1873-2015, Robot Map Data Representation for Navigation (MDR) [15], sponsored by the IEEE Robotics and Automation Society (RAS) and approved by the IEEE Standards Association Standards Board in September 2015, defines a common representation for two-dimensional (2-D) robot maps and is intended to facilitate interoperability among navigating robots. The standard defines an extensible markup language (XML) data format for exchanging maps between different systems. This article illustrates how metric maps, topological maps, and their combinations can be represented according to the standard.

  • 67.
    Ammenberg, P.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of CASI Data - A Case Study From the Archipelago of Stockholm, Sweden (2001). In: 6th International Conference, Remote Sensing for Marine and Coastal Environments 2000, Charleston, South Carolina, 2001, 8 pages. Conference paper (Other scientific)
  • 68.
    Ammenberg, P.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of CASI data - A case study from the archipelago of Stockholm, Sweden (2000). In: 6th International Conference, Remote Sensing for Marine and Coastal Environments, Charleston, South Carolina, USA, 2000. Conference paper (Other scientific)
  • 69.
    Ammenberg, P.
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Ecology and Evolution, Limnology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Flink, P
    Lindell, T.
    Strömbeck, N.
    Bio-optical Modelling Combined with Remote Sensing to Assess Water Quality (2002). In: International Journal of Remote Sensing, ISSN 0143-1161, Vol. 23, no 8, p. 1621-1638. Article in journal (Refereed)
  • 70.
    Ammenberg, Petra
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindell, Tommy
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Automated change detection of bleached coral reef areas (2002). In: Proceedings of 7th International Conference, Remote Sensing for Marine and Coastal Environments, 2002. Conference paper (Other academic)
    Abstract [en]

    Recent dramatic bleaching events on coral reefs have enhanced the need for global environmental monitoring. This paper investigates the value of present high spatial resolution satellites to detect coral bleaching using a change detection technique. We compared an IRS LISS-III image taken during the 1998 bleaching event in Belize to images taken before the bleaching event. The sensitivity of the sensors was investigated and a simulation was made to estimate the effect of sub-pixel changes. A manual interpretation of coral bleaching, based on differences between the images, was performed and the outcome was compared to field observations. The spectral characteristics of the pixels corresponding to the field observations and the manually interpreted bleachings were analysed and compared to pixels from unaffected areas.

  • 71.
    Amundin, Mats
    et al.
    Kolmården Wildlife Park.
    Hållsten, Henrik
    Filosofiska institutionen, Stockholms universitet.
    Eklund, Robert
    Linköping University, Department of Culture and Communication, Language and Culture. Linköping University, Faculty of Arts and Sciences.
    Karlgren, Jussi
    Kungliga Tekniska Högskolan.
    Molinder, Lars
    Carnegie Investment Bank, Sweden.
    A proposal to use distributional models to analyse dolphin vocalisation (2017). In: Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, VIHAR 2017 / [ed] Angela Dassow, Ricard Marxer & Roger K. Moore, 2017, p. 31-32. Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief introduction to the starting points of an experimental project to study dolphin communicative behaviour using distributional semantics, with methods implemented for the large scale study of human language.

  • 72.
    Andersson, Adam
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Range Gated Viewing with Underwater Camera (2005). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The purpose of this master thesis, performed at FOI, was to evaluate a range gated underwater camera for the application of identifying bottom objects. The master thesis was supported by FMV within the framework of “arbetsorder Systemstöd minjakt (Jan Andersson, KC Vapen)”. The central part has been field trials, which have been performed in both turbid and clear water. Conclusions about the performance of the camera system have been drawn, based on resolution and contrast measurements during the field trials. Laboratory testing has also been done to measure system-specific parameters, such as the effective gate profile and camera gate distances.

    The field trials show that images can be acquired at significantly longer distances with the tested gated camera compared to a conventional video camera. The distance where the target can be detected is increased by a factor of 2. For images suitable for mine identification, the increase is about 1.3. However, studies of the performance of other range gated systems show that the increase in range for mine identification can be about 1.6. Gated viewing has also been compared to other technical solutions for underwater imaging.

  • 73.
    Andersson, Anna
    et al.
    Linköping University, Department of Science and Technology.
    Eklund, Klara
    Linköping University, Department of Science and Technology.
    A Study of Oriented Mottle in Halftone Print (2007). Independent thesis, Advanced level (degree of Magister), 20 points / 30 hp. Student thesis
    Abstract [en]

    Coated solid bleached board belongs to the top segment of paperboards. One important property of paperboard is its printability. In this diploma work a specific print defect, oriented mottle, has been studied in association with Iggesund Paperboard. The objectives of the work were to develop a method for analysis of the dark and light areas of oriented mottle, to analyse these areas, and to clarify the effect of print-, coating- and paperboard-surface-related factors. This would clarify the origin of oriented mottle and make it possible to predict oriented mottle on unprinted paperboard. The objectives were fulfilled by analysing the areas between the dark halftone dots, the amount of coating and the ink penetration, the micro roughness and the topography. The analysis of the areas between the dark halftone dots was performed on several samples and the results were compared regarding different properties. The other methods were only applied on a limited selection of samples. The results from the study showed that the intensity differences between the dark halftone dots were enhanced in the dark areas, the coating amount was lower in the dark areas and the ink did not penetrate into the paperboard. The other results showed that areas with high transmission corresponded to dark areas, smoother micro roughness, lower coating amount and high topography. A combination of the information from these properties might be used to predict oriented mottle. Oriented mottle is probably an optical phenomenon in halftone prints, and originates from variations in the coating and other paperboard properties.

  • 74.
    Andersson, Axel
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Real-Time Feedback for Agility Training: Tracking of reflective markers using a time-of-flight camera (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 75.
    Andersson, Carina
    Mälardalen University, School of Innovation, Design and Engineering.
    Informationsdesign i tillståndsövervakning: En studie av ett bildskärmsbaserat användargränssnitt för tillståndsövervakning och tillståndsbaserat underhåll [Information design in condition monitoring: A study of a screen-based user interface for condition monitoring and condition-based maintenance] (2010). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This research concerns the information design and visual design of graphical user interfaces (GUI) in the condition monitoring and condition-based maintenance (CBM) of production equipment. It also concerns various communicative aspects of a GUI that is used to monitor the condition of assets. The study focuses on one Swedish vendor and its intentions in designing information, as well as on the interaction between the GUI and its individual visual elements and the communication between the GUI and its users (in four Swedish paper mills).

    The research is performed as a single case study. Interviews and observations have been the main methods for data collection. Empirical data is analyzed with methods inferred to semiotics, rhetoric and narratology. Theories in information science and regarding remediation are used to interpret the user interface design.

    The key conclusion is that there are no less than five different forms of information, all important when determining the condition of assets. These information forms include the words, images and shapes in the GUI, the machine components and peripheral equipment, the information that takes form when personnel communicate machine conditions, the personnel’s subjective associations, and the information forms that relate to the personnel's actions and interactions.

    Preventive technicians interpret the GUI-information individually and collectively in relation to these information forms, which influence their interpretation and understanding of the GUI information. Social media in the GUI makes it possible to represent essential information that takes form when employees communicate a machine’s condition. Photographs may represent information forms as a machine’s components, peripherals, and local environment change over time. Moreover, preventative technicians may use diagrams and photographs in the GUI to change attitudes among the personnel at the mills and convince them, for example, of a machine’s condition or the effectiveness of CBM as maintenance policy.

  • 76.
    Andersson, Christian
    Linköping University, Department of Electrical Engineering.
    Simulering av filtrerade skärmfärger [Simulation of filtered screen colours] (2005). Independent thesis, Basic level (professional degree), 20 points / 30 hp. Student thesis
    Abstract [en]

    This report presents a working model for simulating what happens to colors displayed on screens when they are observed through optical filters. The results of the model can be used to visually, on one screen, simulate another screen with an applied optical filter. The model can also produce CIE color difference values for the simulated screen colors. The model is data driven and requires spectral measurements for at least the screen to be simulated and the physical filters that will be used. The model is divided into three separate modules or steps, where each module can easily be replaced by alternative implementations or solutions. Results from the tests performed show that the model can be used for prototyping of optical filters, even though the tests of the specific algorithms chosen show there is room for improvement in quality. Nothing indicates that future work on this model could not further improve the quality of its results.
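
    The core of such a data-driven simulation can be written down compactly: the screen's emission spectrum is multiplied by the filter's transmittance spectrum, integrated against colour-matching functions, and compared in CIELAB. The sketch below illustrates that pipeline under simplifying assumptions (all spectra and colour-matching functions sampled on a common wavelength grid, normalization constants omitted); it is not the thesis's actual three-module model.

        import numpy as np

        def filtered_xyz(screen_spd, filter_trans, cmfs):
            """XYZ tristimulus of a screen colour seen through an optical filter.

            screen_spd, filter_trans: spectra on a common wavelength grid.
            cmfs: (n_wavelengths, 3) CIE colour-matching functions on that grid
            (must be supplied; not included here).
            """
            spd = screen_spd * filter_trans          # light reaching the eye
            return spd @ cmfs                        # numerical integration, up to a constant

        def delta_e76(xyz1, xyz2, white):
            """CIE 1976 colour difference between two XYZ stimuli for a given white point."""
            def to_lab(xyz):
                t = xyz / white
                f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
                return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])
            return float(np.linalg.norm(to_lab(xyz1) - to_lab(xyz2)))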

  • 77. Andersson, Jan-Olov
    et al.
    Hasselid, Sara
    Widen, Per
    Bax, Gerhard
    Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences. Uppsala University, Teknisk-naturvetenskapliga vetenskapsområdet, Earth Sciences, Department of Earth Sciences, Environment and Landscape Dynamics. ELD.
    Is the Snow Leopard (Uncia uncia) endangered?: A study of population viability and distribution using vulnerability and GIS analysis methods (2004). In: Proceedings of the 7th International Symposium on High Mountain Remote Sensing Cartography, 2004, p. 224. Conference paper (Refereed)
  • 78.
    Andersson, Jonathan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Radiology, Oncology and Radiation Science, Radiology.
    Methods for automatic analysis of glucose uptake in adipose tissue using quantitative PET/MRI data (2014). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Brown adipose tissue (BAT) is the main tissue involved in non-shivering heat production. A greater understanding of BAT could possibly lead to new ways of prevention and treatment of obesity and type 2 diabetes. The increasing prevalence of these conditions and the problems they cause society and individuals make the study of the subject important.

    An ongoing study performed at the Turku University Hospital uses images acquired using PET/MRI with 18F-FDG as the tracer. Scans are performed on sedentary and athlete subjects during normal room temperature and during cold stimulation. Sedentary subjects then undergo scanning during cold stimulation again after a six weeks long exercise training intervention. This degree project used images from this study.

    The objective of this degree project was to examine methods to automatically and objectively quantify parameters relevant for activation of BAT in combined PET/MRI data. A secondary goal was to create images showing glucose uptake changes in subjects from images taken at different times.

    Parameters were quantified in adipose tissue directly without registration (image matching), and for neck scans also after registration. Results for the first three subjects who have completed the study are presented. Larger registration errors were encountered near moving organs and in regions with less information.

    The creation of images showing changes in glucose uptake seems to work well for the neck scans, and reasonably well for other sub-volumes. These images can be useful for identification of BAT. Examples of these images are shown in the report.

  • 79.
    Andersson, Maria
    et al.
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ntalampiras, Stavros
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Ganchev, Todor
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Rydell, Joakim
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Ahlberg, Jörgen
    Division of Information Systems, FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Fakotakis, Nikos
    Department of Electrical and Computer Engineering, University of Patras, Patras, Greece.
    Fusion of Acoustic and Optical Sensor Data for Automatic Fight Detection in Urban Environments (2010). In: Information Fusion (FUSION), 2010 13th Conference on, IEEE conference proceedings, 2010, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    We propose a two-stage method for detection of abnormal behaviours, such as aggression and fights in urban environments, which is applicable to operator support in surveillance applications. The proposed method is based on fusion of evidence from audio and optical sensors. In the first stage, a number of modality-specific detectors perform recognition of low-level events. Their outputs act as input to the second stage, which performs fusion and disambiguation of the first-stage detections. Experimental evaluation on scenes from the outdoor part of the PROMETHEUS database demonstrated the practical viability of the proposed approach. We report a fight detection rate of 81% when both audio and optical information are used. Reduced performance is observed when evidence from audio data is excluded from the fusion process. Finally, in the case when only evidence from one camera is used for detecting the fights, the recognition performance is poor.
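
    A toy illustration of late fusion of modality-specific detector outputs is given below. The weighted-sum rule, the weight and the alarm threshold are assumptions chosen for the example; the paper's second-stage fusion is a learned disambiguation step, not this fixed rule.

        import numpy as np

        def fuse_detections(audio_score, video_scores, w_audio=0.5):
            """Late fusion of per-modality detector outputs into a single fight score.

            audio_score: scalar in [0, 1] from an acoustic event detector.
            video_scores: iterable of [0, 1] scores, one per camera.
            """
            video = float(np.mean(video_scores)) if len(video_scores) else 0.0
            return w_audio * float(audio_score) + (1.0 - w_audio) * video

        # alarm = fuse_detections(0.9, [0.4, 0.7]) > 0.6   # threshold is an assumption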

  • 80.
    Andersson, Maria
    et al.
    FOI Swedish Defence Research Agency.
    Rydell, Joakim
    FOI Swedish Defence Research Agency.
    Ahlberg, Jörgen
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. FOI Swedish Defence Research Agency.
    Estimation of crowd behaviour using sensor networks and sensor fusion (2009). Conference paper (Refereed)
    Abstract [en]

    Today, surveillance operators commonly monitor a large number of CCTV screens, trying to solve the complex cognitive tasks of analyzing crowd behavior and detecting threats and other abnormal behavior. Information overload is the rule rather than the exception. Moreover, CCTV footage lacks important indicators revealing certain threats, and can also in other respects be complemented by data from other sensors. This article presents an approach to automatically interpret sensor data and estimate behaviors of groups of people in order to provide the operator with relevant warnings. We use data from distributed heterogeneous sensors (visual cameras and a thermal infrared camera), and process the sensor data using detection algorithms. The extracted features are fed into a hidden Markov model in order to model normal behavior and detect deviations. We also discuss the use of radars for weapon detection.
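
    The final step described above (model normal behaviour with a hidden Markov model and flag deviations) can be sketched as below. The feature choice, model size and threshold are assumptions for illustration, and the use of the hmmlearn package is our choice rather than something stated in the paper.

        from hmmlearn.hmm import GaussianHMM

        def fit_normal_model(X_normal, n_states=4):
            """Fit an HMM to feature sequences of normal crowd behaviour.

            X_normal: (n_frames, n_features) array, e.g. per-frame person counts and
            flow statistics from the detectors (feature choice is an assumption).
            """
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X_normal)
            return model

        def is_abnormal(model, X_window, threshold=-15.0):
            """Flag a window whose average per-frame log-likelihood is below a tuned threshold."""
            return model.score(X_window) / len(X_window) < threshold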

  • 81.
    Andersson, Olov
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Methods for Scalable and Safe Robot Learning (2017). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to enter real-world public spaces and homes. However, robot behavior is still usually engineered for narrowly defined scenarios. Manually encoding robot behavior that works within complex real-world environments, such as busy workplaces or cluttered homes, can be a daunting task. In addition, such robots may require a high degree of autonomy to be practical, which imposes stringent requirements on safety and robustness.

    The aim of this thesis is to examine methods for automatically learning safe robot behavior, lowering the costs of synthesizing behavior for complex real-world situations. To avoid task-specific assumptions, we approach this from a data-driven machine learning perspective. The strength of machine learning is its generality: given sufficient data it can learn to approximate any task. However, being embodied agents in the real world, robots pose a number of difficulties for machine learning. These include real-time requirements with limited computational resources, the cost and effort of operating and collecting data with real robots, as well as safety issues for both the robot and human bystanders.

    While machine learning is general by nature, overcoming the difficulties with real-world robots outlined above remains a challenge. In this thesis we look for a middle ground on robot learning, leveraging the strengths of both data-driven machine learning and engineering techniques from robotics and control. This includes combining data-driven world models with fast techniques for planning motions under safety constraints, using machine learning to generalize such techniques to problems with high uncertainty, as well as using machine learning to find computationally efficient approximations for use on small embedded systems.

    We demonstrate such behavior synthesis techniques with real robots, solving a class of difficult dynamic collision avoidance problems under uncertainty, such as those induced by the presence of humans without prior coordination. This is done initially with online planning offloaded to a desktop CPU, and ultimately with a deep neural network policy embedded on board a nano-quadcopter.

  • 82.
    Andersson, Olov
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Heintz, Fredrik
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Intergrated Computer systems. Linköping University, The Institute of Technology.
    Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization (2015). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) / [ed] Blai Bonet and Sven Koenig, AAAI Press, 2015, p. 2497-2503. Conference paper (Refereed)
    Abstract [en]

    Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, time and resource costs for learning with a real robot as well as constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included and objectives can also be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart pole domain and a challenging quadcopter navigation task using real data.
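
    A toy version of the core loop described above (solve a constrained trajectory optimization on a learned model, then apply the first action) is sketched below. The learned model is abstracted as a mean-prediction function, and the cost, bounds and single obstacle constraint are illustrative assumptions; the paper's formulation with sparse Gaussian process models is richer than this.

        import numpy as np
        from scipy.optimize import minimize

        def plan(dynamics_mean, x0, goal, horizon=8, u_max=1.0, obstacle=(2.0, 0.0), safe_dist=0.5):
            """One receding-horizon step on a learned model, solved as a constrained NLP.

            dynamics_mean(x, u) -> next state is assumed to be the mean prediction of a
            learned model with 2-D states and 2-D actions (an illustrative assumption).
            """
            def rollout(u_flat):
                u = u_flat.reshape(horizon, 2)
                xs, x = [], np.asarray(x0, float)
                for k in range(horizon):
                    x = dynamics_mean(x, u[k])
                    xs.append(x)
                return np.array(xs)

            def cost(u_flat):
                xs = rollout(u_flat)
                return np.sum((xs - goal) ** 2) + 1e-2 * np.sum(u_flat ** 2)

            def clearance(u_flat):                       # must stay >= 0 (inequality constraint)
                return np.linalg.norm(rollout(u_flat) - obstacle, axis=1) - safe_dist

            res = minimize(cost, np.zeros(horizon * 2), method="SLSQP",
                           bounds=[(-u_max, u_max)] * (horizon * 2),
                           constraints=[{"type": "ineq", "fun": clearance}])
            return res.x.reshape(horizon, 2)[0]          # apply only the first action

        # Example with a trivial stand-in for a learned model:
        # first_action = plan(lambda x, u: x + 0.1 * u, x0=[0, 0], goal=np.array([3.0, 0.0]))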

  • 83.
    Andersson, Olov
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Deep Learning Quadcopter Control via Risk-Aware Active Learning (2017). In: Proceedings of The Thirty-First AAAI Conference on Artificial Intelligence (AAAI) / [ed] Satinder Singh and Shaul Markovitch, AAAI Press, 2017, Vol. 5, p. 3812-3818. Conference paper (Refereed)
    Abstract [en]

    Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers to also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work, this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.

  • 84.
    Andersson, Robert
    Linköping University, Department of Electrical Engineering.
    A calibration method for laser-triangulating 3D cameras (2008). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A laser-triangulating range camera uses a laser plane to light an object. If the position of the laser relative to the camera as well as certain properties of the camera are known, it is possible to calculate the coordinates of all points along the profile of the object. If either the object or the camera and laser assembly has a known motion, it is possible to combine several measurements to get a three-dimensional view of the object.

    Camera calibration is the process of finding the properties of the camera and enough information about the setup so that the desired coordinates can be calculated. Several methods for camera calibration exist, but this thesis proposes a new method whose advantages are that the required calibration objects are relatively inexpensive and that only objects in the laser plane need to be observed. Each part of the method is given a thorough description. Several mathematical derivations have also been added as appendices for completeness.

    The proposed method is tested using both synthetic and real data. The results show that the method is suitable even when high accuracy is needed. A few suggestions are also made about how the method can be improved further.
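
    For readers unfamiliar with laser triangulation, the following minimal numpy sketch shows the geometric core that such a calibration makes possible: with assumed camera intrinsics K and an assumed laser plane n·X = d in the camera frame, a pixel on the laser line is lifted to a 3D point by intersecting its viewing ray with the plane. The numbers are illustrative, not from the thesis.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.6, 0.8])     # assumed laser-plane normal (camera frame)
plane_d = 0.5                            # plane equation: n . X = d

def triangulate(u, v):
    """3D point (camera frame) for a pixel (u, v) lying on the laser line."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
    t = plane_d / (plane_n @ ray)                    # ray-plane intersection
    return t * ray

print(triangulate(350.0, 260.0))
```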

  • 85. Andrée, Martin
    et al.
    Paasch, Jesper M.
    Paulsson, Jenny
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    BIM and 3D property visualisation2018In: Proc. FIG Congress 2018, 2018, article id 9367Conference paper (Refereed)
  • 86.
    Anistratov, Pavel
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Computation of Autonomous Safety Maneuvers Using Segmentation and Optimization2019Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis studies motion planning for future autonomous vehicles, with the main focus on passenger cars. With automatic steering and braking together with information about the environment, such as other participants in the traffic or obstacles, it would be possible to perform autonomous maneuvers while taking limitations of the vehicle and road–tire interaction into account. Motion planning is performed to find maneuvers that bring the vehicle from the current state to a desired future state; here, the motion-planning problem is formulated as an optimal control problem. There are a number of challenges for such an approach to motion planning, among them how to formulate the criterion for motion planning (the objective function of the corresponding optimal control problem) and how to make the solution of motion-planning problems efficient enough to be useful in online applications. These challenges are addressed in this thesis.

    As a criterion for motion-planning problems of passenger vehicles on double-lane roads, the use of a lane-deviation penalty function is investigated to capture the observation that it is dangerous to drive in the opposing lane, but safe to drive in the original lane after the obstacle. The penalty function is augmented with certain additional terms to also address the recovery behavior of the vehicle. The resulting formulation is shown to provide efficient and steady maneuvers and yields less time spent in the opposing lane compared to other objective functions. Under varying parameters of the scenario formulation, the resulting maneuvers change in a way that exhibits structured characteristics.

    As an approach to improve the efficiency of computations for the motion-planning problem, segmenting the motion planning of the full maneuver into several smaller maneuvers is investigated. A way to extract segments is considered from a vehicle-dynamics point of view, based on extrema of the vehicle orientation and the yaw rate. The segmentation points determined using this approach are observed to allow efficient splitting of the optimal control problem for the full maneuver into subproblems.

    Given a method to segment maneuvers, this thesis further studies methods to allow parallel computation of these maneuvers. One investigated method is based on Lagrangian relaxation and dual decomposition. Smaller subproblems are formulated and governed by solving a low-complexity coordination problem. Lagrangian relaxation is performed on a subset of the dynamic constraints at the segmentation points, while the remaining variables are predicted. The prediction is possible because of the structured characteristics observed with the lane-deviation penalty function. An alternative approach is based on the alternating augmented Lagrangian method. Augmentation of the Lagrangian allows relaxation to be applied to all dynamic constraints at the segmentation points, and the alternating approach makes it possible to decompose the full problem into subproblems and to coordinate their solutions by analytically solving an overall coordination problem. The presented decomposition methods allow computation of maneuvers that closely match those obtained by solving the full maneuver in one step, at lower computational times.
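
    The lane-deviation idea can be illustrated with a small sketch: a penalty that is zero inside the original lane and grows quadratically into the opposing lane, summed along a discretized lateral trajectory together with a smoothness term. The functional form, lane width, and weight below are assumptions for illustration, not the exact criterion of the thesis.

```python
import numpy as np

LANE_WIDTH = 3.5   # metres, assumed

def lane_deviation_penalty(y):
    """Zero inside the original lane, quadratic growth into the opposing lane."""
    excess = np.maximum(y - LANE_WIDTH / 2.0, 0.0)
    return excess ** 2

def objective(y_traj, weight=10.0):
    """Discretized criterion: lane-deviation penalty plus a smoothness term."""
    smoothness = np.sum(np.diff(y_traj, n=2) ** 2)
    return weight * np.sum(lane_deviation_penalty(y_traj)) + smoothness

# Lateral positions along an avoidance maneuver (illustrative values).
y = np.array([0.0, 0.5, 1.5, 3.0, 4.0, 3.0, 1.0, 0.2, 0.0])
print(objective(y))
```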

  • 87.
    Anliot, Manne
    Linköping University, Department of Electrical Engineering.
    Volume Estimation of Airbags: A Visual Hull Approach2005Independent thesis Basic level (professional degree), 20 points / 30 hpStudent thesis
    Abstract [en]

    This thesis presents a complete and fully automatic method for estimating the volume of an airbag, through all stages of its inflation, with multiple synchronized high-speed cameras.

    Using recorded contours of the inflating airbag, its visual hull is reconstructed with a novel method: The intersections of all back-projected contours are first identified with an accelerated epipolar algorithm. These intersections, together with additional points sampled from concave surface regions of the visual hull, are then Delaunay triangulated to a connected set of tetrahedra. Finally, the visual hull is extracted by carving away the tetrahedra that are classified as inconsistent with the contours, according to a voting procedure.

    The volume of an airbag's visual hull is always larger than the airbag's real volume. By projecting a known synthetic model of the airbag into the cameras, this volume offset is computed, and an accurate estimate of the real airbag volume is extracted.

    Even though volume estimates can be computed for all camera setups, the cameras should be specially posed to achieve optimal results. Such poses are uniquely found for different airbag models with a separate, fully automatic, simulated annealing algorithm.

    Satisfactory results are presented for both synthetic and real-world data.
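
    The contour-consistency test at the heart of visual-hull carving can be sketched compactly: a candidate 3D point (for example a tetrahedron centroid) is kept only if it projects inside the recorded contour of every camera. The camera matrix and contour below are toy values, and project/consistent_with_silhouettes are hypothetical helper names.

```python
import numpy as np
from matplotlib.path import Path

def project(P, X):
    """Pinhole projection of a 3D point X with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def consistent_with_silhouettes(X, cameras, contours):
    """True if X projects inside the silhouette contour of every camera."""
    return all(Path(c).contains_point(project(P, X))
               for P, c in zip(cameras, contours))

P0 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])   # toy camera
square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]])        # toy contour
print(consistent_with_silhouettes(np.zeros(3), [P0], [square]))
```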

  • 88.
    Antonova, Rika
    et al.
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kokic, Mia
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception and Learning, CSC, Royal Institute of Technology, Stockholm, Sweden.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation2018In: Proceedings of Machine Learning Research: Conference on Robot Learning 2018, PMLR , 2018, Vol. 87, p. 641-650Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and the high uncertainty over the physical properties of objects.
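
    A hedged sketch of the alternation idea: each Bayesian-optimization iteration flips a coin and fits the surrogate either on an "informed" simulation-derived feature map or on the raw inputs with a plain RBF kernel, so that a misleading simulation prior cannot dominate the search. The objective, the feature map phi, and the lower-confidence-bound acquisition are illustrative stand-ins for the grasping setup in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def objective(x):                      # stand-in for a real grasp trial
    return np.sin(3.0 * x) + 0.1 * rng.standard_normal()

def phi(x):                            # toy "informed" simulation-derived features
    return np.column_stack([x, np.sin(3.0 * x)])

candidates = np.linspace(0.0, 3.0, 200)
X = list(rng.uniform(0.0, 3.0, size=2))
y = [objective(x) for x in X]

for _ in range(20):
    informed = rng.random() < 0.5                          # Bernoulli alternation
    raw = np.array(X).reshape(-1, 1)
    train = phi(raw[:, 0]) if informed else raw
    cand = phi(candidates) if informed else candidates.reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(train, y)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = candidates[np.argmin(mu - 2.0 * sd)]          # lower-confidence-bound pick
    X.append(x_next)
    y.append(objective(x_next))

print("best observed value:", min(y))
```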

  • 89.
    Anwer, Rao Muhammad
    et al.
    Aalto Univ, Finland.
    Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Laaksonen, Jorma
    Aalto Univ, Finland.
    Two-Stream Part-based Deep Representation for Human Attribute Recognition2018In: 2018 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), IEEE , 2018, p. 90-97Conference paper (Refereed)
    Abstract [en]

    Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take the RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network on mapped coded images with explicit texture information, which complements the standard RGB deep model. To integrate knowledge of human body parts, we employ deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) dataset, consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides a consistent gain in performance over the standard RGB model and (b) the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results.
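
    The two-stream fusion can be sketched schematically in PyTorch: one stream takes the RGB image, the other the texture-coded image, and their pooled features are concatenated before a 27-way attribute head. The tiny backbones below are assumptions; the paper builds on much deeper CNNs and adds part-based models on top.

```python
import torch
import torch.nn as nn

def small_stem():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamAttributeNet(nn.Module):
    def __init__(self, num_attributes=27):
        super().__init__()
        self.rgb_stream = small_stem()
        self.texture_stream = small_stem()
        self.head = nn.Linear(64, num_attributes)    # 32 + 32 fused features

    def forward(self, rgb, texture_coded):
        fused = torch.cat([self.rgb_stream(rgb),
                           self.texture_stream(texture_coded)], dim=1)
        return torch.sigmoid(self.head(fused))       # independent attribute scores

net = TwoStreamAttributeNet()
print(net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)).shape)
```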

  • 90. Arcelli, Carlo
    et al.
    Sanniti di Baja, Gabriella
    Svensson, Stina
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Computing and analysing convex deficiencies to characterise 3D complex objects2005In: Image and Vision Computing: Discrete Geometry for Computer Imagery, Vol. 23, no 2, p. 203-211Article in journal (Refereed)
    Abstract [en]

    Entities such as object components, cavities, tunnels and concavities in 3D digital images can be useful in the framework of object analysis. For each object component, we first identify its convex deficiencies, by subtracting the object component from a covering polyhedron approximating the convex hull. Watershed segmentation is then used to decompose complex convex deficiencies into simpler parts, corresponding to individual cavities, concavities and tunnels of the object component. These entities are finally described by means of a representation system accounting for the shape features characterising them.
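
    A minimal 2D sketch of the pipeline, using scikit-image as a stand-in: the convex deficiency is the set difference between (an approximation of) the convex hull and the object, and a watershed on its distance transform splits a complex deficiency into simpler parts. The paper works on 3D objects with a covering polyhedron; this toy example and its parameters are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import convex_hull_image
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

obj = np.zeros((60, 60), dtype=bool)
obj[10:50, 10:50] = True
obj[20:40, 25:35] = False                      # carve out a cavity/concavity

deficiency = convex_hull_image(obj) & ~obj     # convex hull minus object
distance = ndi.distance_transform_edt(deficiency)
peaks = peak_local_max(distance, labels=deficiency, min_distance=5)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
parts = watershed(-distance, markers, mask=deficiency)
print(parts.max(), "deficiency part(s) found")
```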

  • 91.
    Arnekvist, Isac
    KTH, School of Computer Science and Communication (CSC).
    Reinforcement learning for robotic manipulation2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Reinforcement learning was recently used successfully for real-world robotic manipulation tasks, without the need for human demonstration, using a normalized advantage function algorithm (NAF). Limitations on the shape of the advantage function, however, raise doubts about what kinds of policies can be learned using this method. For similar tasks, convolutional neural networks have been used for pose estimation from images taken with fixed-position cameras; for some applications, however, a fixed camera might not be a valid assumption. It has also been shown that the quality of policies for robotic tasks severely deteriorates with even small camera offsets. This thesis investigates the use of NAF for a pushing task with clear multimodal properties. The results are compared with a deterministic policy with minimal constraints on the Q-function surface. Methods for pose estimation using convolutional neural networks are further investigated, especially with regard to randomly placed cameras with unknown offsets. By defining the coordinate frame of objects with respect to some visible feature, it is hypothesized that relative pose estimation can be accomplished even when the camera is not fixed and the offset is unknown. NAF is successfully implemented to solve a simple reaching task on a real robotic system where data collection is distributed over several robots and learning is done on a separate server. Using NAF to learn a pushing task fails to converge to a good policy, both on the real robots and in simulation. Deep deterministic policy gradient (DDPG) is instead used in simulation and successfully learns to solve the task. The learned policy is then applied on the real robots and solves the task in the real setting as well. Pose estimation from fixed-position camera images is learned, and the policy is still able to solve the task using these estimates. By defining a coordinate frame from an object visible to the camera, in this case the robot arm, a neural network learns to regress the pushable object's pose in this frame without the assumption of a fixed camera. However, the predictions were too inaccurate to be used for solving the pushing task. Further modifications to this approach could, however, prove to be a feasible solution to randomly placed cameras with unknown poses.
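
    The shape limitation referred to in this abstract comes from how NAF constructs the advantage: A(s, a) = -1/2 (a - mu(s))^T P(s) (a - mu(s)) is quadratic and therefore unimodal in the action, which is exactly what a multimodal pushing task violates. A minimal PyTorch sketch of that construction (layer sizes assumed, and without the usual exponentiation of the diagonal of L) is given below.

```python
import torch
import torch.nn as nn

class NAFHead(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, action_dim)                 # argmax action
        self.value = nn.Linear(hidden, 1)                       # V(s)
        self.l_entries = nn.Linear(hidden, action_dim * action_dim)
        self.action_dim = action_dim

    def forward(self, state, action):
        h = self.trunk(state)
        L = torch.tril(self.l_entries(h).view(-1, self.action_dim, self.action_dim))
        P = L @ L.transpose(1, 2)                               # positive semi-definite
        diff = (action - self.mu(h)).unsqueeze(-1)
        advantage = -0.5 * (diff.transpose(1, 2) @ P @ diff).squeeze(-1)
        return self.value(h) + advantage                        # Q(s, a)

q = NAFHead()
print(q(torch.randn(3, 4), torch.randn(3, 2)).shape)
```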

  • 92.
    Arnekvist, Isac
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stork, Johannes A.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Center for Applied Autonomous Sensor Systems, Örebro University, Sweden.
    Vpe: Variational policy embedding for transfer reinforcement learning2019In: 2019 International Conference on Robotics And Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 36-42Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning methods are capable of solving complex problems, but the resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary and data collection is expensive, making retraining undesirable. Simulation training allows for feasible training times but suffers from a reality gap when policies are applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments. We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that can adapt given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space, rather than the space of all policies. The low-dimensional space and the master policy found by our method enable policies to quickly adapt to new environments. We demonstrate the method on both a pendulum swing-up task in simulation and on simulation-to-real transfer for a pushing task.
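
    A condensed, assumption-heavy sketch of the core idea: a single Q-network conditioned on a low-dimensional latent variable, where adapting to a new task reduces to a search over the latent space rather than over all policy parameters. The network sizes, the grid of latent candidates, and the fit-to-returns selection rule below are illustrative simplifications, not the variational procedure of the paper.

```python
import torch
import torch.nn as nn

class LatentConditionedQ(nn.Module):
    def __init__(self, state_dim=3, action_dim=1, latent_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, z):
        return self.net(torch.cat([state, action, z], dim=-1))

def adapt_latent(q_net, transitions, candidates):
    """Pick the latent whose Q-values best fit observed returns on the new task."""
    states, actions, returns = transitions
    errors = []
    for z in candidates:
        z_batch = z.expand(states.shape[0], -1)
        pred = q_net(states, actions, z_batch).squeeze(-1)
        errors.append(((pred - returns) ** 2).mean().item())
    return candidates[int(torch.tensor(errors).argmin())]

q_net = LatentConditionedQ()
transitions = (torch.randn(32, 3), torch.randn(32, 1), torch.randn(32))
candidates = torch.randn(8, 2)                 # coarse grid over the latent space
print(adapt_latent(q_net, transitions, candidates))
```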

  • 93.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Örebro University, School of Science and Technology. Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning2019In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, IEEE , 2019, p. 36-42Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning methods are capable of solving complex problems, but the resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary and data collection is expensive, making retraining undesirable. Simulation training allows for feasible training times but suffers from a reality gap when policies are applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments.

    We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that can adapt given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space, rather than the space of all policies. The low-dimensional space and the master policy found by our method enable policies to quickly adapt to new environments. We demonstrate the method on both a pendulum swing-up task in simulation and on simulation-to-real transfer for a pushing task.

  • 94.
    Arnekvist, Isac
    et al.
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Kragic, Danica
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    Stork, Johannes Andreas
    Robotics, Perception, and Learning lab, Royal Institute of Technology, Stockholm, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning2018Manuscript (preprint) (Other academic)
  • 95.
    Aronsson, M.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Borgefors, G.
    2D Segmentation and Labelling of Clustered Ring-Shaped Objects2001Conference paper (Refereed)
    Abstract [en]

    A robust segmentation and labelling method to identify individual ring-shaped

  • 96.
    Aronsson, M.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Fayyazi, A.
    Comparison of two different approaches for paper volume assembly2000In: Symposium on Image Analysis - SSAB 2000, 2000, p. 57-60Conference paper (Other scientific)
  • 97.
    Aronsson, M.
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Larsson, K.-A.
    Titta inuti papper -- Looking inside Paper2001In: Nordisk Papper och Massa, no 2, p. 44-45Article in journal (Other scientific)
  • 98.
    Aronsson, Mattias
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Estimating Fibre Twist and Aspect Ratios in 3D Voxel Volumes2002In: International Conference on Pattern Recognition (ICPR'02), 2002Conference paper (Refereed)
  • 99.
    Aronsson Mattias, Henningsson Olle, Sävborg Örjan
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Slice-based Digital Volume Assembly of a Small Paper Sample2002In: Nordic Pulp and Paper Research Journal, Vol. 17, no 1Article in journal (Refereed)
    Abstract [en]

    Digital volume images can be created by assembling a stack of 2D images. By using a microtome for slicing, a Scanning Electron Microscope for imaging and digital analysis tools, we were able to create a small digital volume from a paper sample of Duplex-b

  • 100.
    Aronsson Mattias, Sintorn Ida-Maria
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Ring Shaped Object Detector for Non-Isotropic 2D Images Using Optimized Distance Transform Weights2002Conference paper (Refereed)
    Abstract [en]

    A detector for finding ring-shaped objects occurring in clusters in 2D images with non-isotropic pixel dimensions has been developed. The rings are characterized as having a closed border and a void interior. We assume that the thickness of the rings s
