Results 201-250 of 679
  • 201.
    Fagerström, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Spatio-Temporal Scale-Space Theory (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis addresses two important topics in developing a systematic space-time geometric approach to real-time, low-level motion vision. The first concerns the measurement of image flow, while the second focuses on how to find low-level features.

    We argue for studying motion vision in terms of space-time geometry rather than in terms of two (or a few) consecutive image frames. The use of Galilean geometry and Galilean similarity geometry for this purpose is motivated, and the relevant geometrical background is reviewed.

    In order to measure the visual signal in a way that respects the geometry of the situation and the causal nature of time, we argue that a time causal Galilean spatio-temporal scale-space is needed. The scale-space axioms are chosen so that they generalize popular axiomatizations of spatial scale-space to spatio-temporal  geometries.

    To be able to derive the scale-space, an infinitesimal framework for scale-spaces that respects a more general class of Lie groups (compared to previous theory) is developed and applied.

    Perhaps surprisingly, we find that with the chosen axiomatization, a time causal Galilean scale-space is not possible as an evolution process on space and time. However, it is possible on space and memory. We argue that this actually is a more accurate and realistic model of motion vision.

    While the derivation of the time causal Galilean spatio-temporal scale-spaces requires some exotic mathematics, the end result is as simple as one possibly could hope for and a natural extension of  spatial scale-spaces. The unique infinitesimally generated scale-space is an ordinary diffusion equation with drift on memory and a diffusion equation on space. The drift is used for velocity  adaption, the "velocity adaption" part of Galilean geometry (the Galilean boost) and the temporal scale-space acts as memory.

    Lifting the restriction of infinitesimally generated scale spaces, we arrive at a new family of scale-spaces. These are generated by a family of fractional differential evolution equations that generalize the ordinary diffusion equation. The same type of evolution equations have recently become popular in research in e.g. financial and physical modeling.

    The second major topic in this thesis is extraction of features from an image flow. A set of low-level features can be derived by classifying basic Galilean differential invariants. We proceed to derive invariants for two main cases: when the spatio-temporal gradient cuts the image plane and when it is tangent to the image plane. The former case corresponds to isophote curve motion and the latter to creation and disappearance of image structure, a case that is not well captured by the theory of optical flow.

    The Galilean differential invariants that are derived are equivalent to curl, divergence, deformation and acceleration. These invariants are normally calculated in terms of optical flow, but here they are instead calculated directly from the spatio-temporal image.

  • 202.
    Fagerström, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Spatio-temporal Scale-Spaces (2007). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 4485, pp. 326-337. Journal article (Refereed)
    Abstract [en]

    A family of spatio-temporal scale-spaces suitable for a moving observer is developed. The scale-spaces are required to be time causal, so that they are usable for real-time measurements, and to be velocity adapted, i.e. to have Galilean covariance to avoid favoring any particular motion. Furthermore, standard scale-space axioms: linearity, positivity, continuity, translation invariance, scaling covariance in space and time, rotational invariance in space and recursivity are used. An infinitesimal criterion for scale-spaces is developed, which simplifies calculations and makes it possible to define scale spaces on bounded regions. We show that there are no temporally causal Galilean scale-spaces that are semigroups acting on space and time, but that there are such scale-spaces that are semigroups acting on space and memory (where the memory is the scale-space). The temporally causal scale-space is a time-recursive process using current input and the scale-space as state, i.e. there is no need for storing earlier input. The diffusion equation acting on the memory, with the input signal as boundary condition, is a member of this family of scale spaces and is special in the sense that its generator is local.

  • 203.
    Fagerström, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Temporal Scale Spaces (2005). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 64, no. 2, pp. 97-106. Journal article (Refereed)
    Abstract [en]

    In this paper we discuss how to define a scale space suitable for temporal measurements. We argue that such a temporal scale space should possess the properties of: temporal causality, linearity, continuity, positivity, recursivity as well as translational and scaling covariance. It is shown that these requirements imply a one parameter family of convolution kernels. Furthermore, it is shown that these measurements can be realized in a time recursive way, with the current data as input and the temporal scale space as state, i.e. there is no need for storing earlier input. This family of measurement processes contains the diffusion equation on the half line (that represents the temporal scale) with the input signal as boundary condition on the temporal axis. The diffusion equation is unique among the measurement processes in the sense that it preserves positivity (in the scale domain) and is locally generated. A numerical scheme is developed and relations to other approaches are discussed.
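    One consistent reading of the measurement process described above (the notation is ours, not the paper's) is a diffusion over the temporal-scale half line, evolving along the time axis, with the input signal entering as a boundary condition:

    \[
    \frac{\partial u(s,t)}{\partial t} = \frac{\partial^{2} u(s,t)}{\partial s^{2}}, \qquad s > 0, \qquad u(0,t) = f(t),
    \]

    where f(t) is the input signal, s >= 0 is the temporal scale and u(s,t) is the temporal scale space. Only the current state u(., t) needs to be kept, which matches the time-recursive formulation in the abstract (no storage of earlier input).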

  • 204.
    Fallon, Maurice F.
    et al.
    MIT.
    Johannsson, Hordur
    MIT.
    Kaess, Michael
    MIT.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    McClelland, Hunter
    MIT.
    Englot, Brendan J.
    MIT.
    Hover, Franz S.
    MIT.
    Leonard, John J.
    MIT.
    Simultaneous Localization and Mapping in Marine Environments (2013). In: Marine Robot Autonomy, New York: Springer, 2013, pp. 329-372. Book chapter (Refereed)
    Abstract [en]

    Accurate navigation is a fundamental requirement for robotic systems—marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely low drift rates, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest which can be updated and optimized in real time on board the AUV.

  • 205. Feix, Thomas
    et al.
    Romero, Javier
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Schmiedmayer, Heinz-Bodo
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A Metric for Comparing the Anthropomorphic Motion Capability of Artificial Hands (2013). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 1, pp. 82-93. Journal article (Refereed)
    Abstract [en]

    We propose a metric for comparing the anthropomorphic motion capability of robotic and prosthetic hands. The metric is based on the evaluation of how many different postures or configurations a hand can perform by studying the reachable set of fingertip poses. To define a benchmark for comparison, we first generate data with human subjects based on an extensive grasp taxonomy. We then develop a methodology for comparison using generative, nonlinear dimensionality reduction techniques. We assess the performance of different hands with respect to the human hand and with respect to each other. The method can be used to compare other types of kinematic structures.

  • 206. Feix, Thomas
    et al.
    Romero, Javier
    Schmiedmayer, Heinz-Bodo
    Dollar, Aaron M.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    The GRASP Taxonomy of Human Grasp Types (2016). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2291, E-ISSN 2168-2305, Vol. 46, no. 1, pp. 66-77. Journal article (Refereed)
    Abstract [en]

    In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed "The GRASP Taxonomy" after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces where an understanding of the human is the basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.
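    As a minimal illustration of the four classification dimensions listed above, a grasp type could be encoded as a small record; the enum values below are examples drawn from the grasp-taxonomy literature and are not an exhaustive or verbatim copy of the paper's categories:

    from dataclasses import dataclass
    from enum import Enum

    class Opposition(Enum):        # dimension 1: opposition type
        PAD = "pad"
        PALM = "palm"
        SIDE = "side"

    class GraspClass(Enum):        # dimension 3: power / precision / intermediate
        POWER = "power"
        INTERMEDIATE = "intermediate"
        PRECISION = "precision"

    class Thumb(Enum):             # dimension 4: position of the thumb
        ABDUCTED = "abducted"
        ADDUCTED = "adducted"

    @dataclass
    class GraspType:
        name: str
        opposition: Opposition
        virtual_fingers: tuple     # dimension 2: grouping of real fingers into virtual fingers
        grasp_class: GraspClass
        thumb: Thumb

    # Hypothetical entry, for illustration only: thumb against the four fingers.
    example = GraspType("large diameter", Opposition.PALM,
                        ((1,), (2, 3, 4, 5)), GraspClass.POWER, Thumb.ABDUCTED)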

  • 207. Ferri, Stefania
    et al.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Rizzolatti, Giacomo
    Orban, Guy
    Stereoscopically Observing Manipulative Actions (2016). In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199. Journal article (Refereed)
    Abstract [en]

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors “stimulus type” (action, static control, and dynamic control), “stereopsis” (present, absent) and “viewpoint” (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior.

  • 208. Fiorini, Paolo
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Education by competition (2006). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 13, no. 3, p. 6. Journal article (Other academic)
  • 209. Fletcher, L.
    et al.
    Loy, Gareth
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Barnes, N.
    Zelinsky, A.
    Correlating driver gaze with the road scene for driver assistance systems (2005). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no. 1, pp. 71-84. Journal article (Refereed)
    Abstract [en]

    A driver assistance system (DAS) should support the driver by monitoring road and vehicle events and presenting relevant and timely information to the driver. It is impossible to know what a driver is thinking, but we can monitor the driver's gaze direction and compare it with the position of information in the driver's viewfield to make inferences. In this way, not only do we monitor the driver's actions, we monitor the driver's observations as well. In this paper we present the automated detection and recognition of road signs, combined with the monitoring of the driver's response. We present a complete system that reads speed signs in real-time, compares the driver's gaze, and provides immediate feedback if it appears the sign has been missed by the driver.

  • 210.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Projection of a Markov Process with Neural Networks (2001). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    In this work we have examined an application from the insurance industry. We first reformulate it into a problem of projecting a Markov process. We then develop a method of carrying out the projection many steps into the future by using a combination of neural networks trained using a maximum entropy principle. This methodology improves on the current industry-standard solution in four key areas: variance, bias, confidence level estimation, and the use of inhomogeneous data. The neural network aspects of the methodology include the use of a generalization error estimate that does not rely on a validation set. We also develop our own approximation to the Hessian matrix, which seems to be significantly better than assuming it to be diagonal and much faster than calculating it exactly. This Hessian is used in the network pruning algorithm. The parameters of a conditional probability distribution were generated by a neural network, which was trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks we have devised a scheme to decorrelate input dimensions completely, even non-linear correlations, which should be of general interest in its own right. The results we found indicate that the bias inherent in the current industry-standard projection technique is very significant. This work may be the only accurate measurement made of this important source of error.

  • 211.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Projection of a Markov Process with Neural Networks (2001). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    In this work we have examined an application from the insurance industry. We first reformulate it into a problem of projecting a Markov process. We then develop a method of carrying out the projection many steps into the future by using a combination of neural networks trained using a maximum entropy principle. This methodology improves on the current industry-standard solution in four key areas: variance, bias, confidence level estimation, and the use of inhomogeneous data. The neural network aspects of the methodology include the use of a generalization error estimate that does not rely on a validation set. We also develop our own approximation to the Hessian matrix, which seems to be significantly better than assuming it to be diagonal and much faster than calculating it exactly. This Hessian is used in the network pruning algorithm. The parameters of a conditional probability distribution were generated by a neural network, which was trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks we have devised a scheme to decorrelate input dimensions completely, even non-linear correlations, which should be of general interest in its own right. The results we found indicate that the bias inherent in the current industry-standard projection technique is very significant. This work may be the only accurate measurement made of this important source of error.

  • 212.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Robustness of the Quadratic Antiparticle Filter for Robot Localization (2011). In: European Conference on Mobile Robots / [ed] Achim J. Lilienthal and Tom Duckett, 2011, pp. 297-302. Conference paper (Refereed)
    Abstract [en]

    Robot localization using odometry and feature measurements is a nonlinear estimation problem. An efficient solution is found using the extended Kalman filter, EKF. The EKF however suffers from divergence and inconsistency when the nonlinearities are significant. We recently developed a new type of filter based on an auxiliary variable Gaussian distribution which we call the antiparticle filter AF as an alternative nonlinear estimation filter that has improved consistency and stability. The AF reduces to the iterative EKF, IEKF, when the posterior distribution is well represented by a simple Gaussian. It transitions to a more complex representation as required. We have implemented an example of the AF which uses a parameterization of the mean as a quadratic function of the auxiliary variables, which we call the quadratic antiparticle filter, QAF. We present simulation of robot feature based localization in which we examine the robustness to bias and disturbances, with comparison to the EKF.
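    As a rough illustration of "a parameterization of the mean as a quadratic function of the auxiliary variables" (the notation below is ours and the paper's exact parameterization may differ), an auxiliary variable Gaussian posterior can be written as

    \[
    p(x) = \int \mathcal{N}\big(x; \, m(\xi), \, \Sigma\big) \, \mathcal{N}(\xi; \, 0, I) \, d\xi,
    \qquad
    m(\xi) = m_{0} + A \, \xi + \tfrac{1}{2} \sum_{i} e_{i} \, \xi^{\top} B_{i} \, \xi,
    \]

    where \xi are the auxiliary variables. When the quadratic terms B_i vanish, x is again an ordinary Gaussian, which is consistent with the statement above that the filter reduces to the iterative EKF when a simple Gaussian represents the posterior well.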

  • 213.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    The Antiparticle Filter: an Adaptive Nonlinear Estimator (2011). In: International Symposium of Robotics Research, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, unscented Kalman Filter, UKF or the particle filter PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF demonstrating that AF can reduce the error to a consistent accurate value.

  • 214.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Graphical SLAM using vision and the measurement subspace (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-4, IEEE conference proceedings, 2005, pp. 325-330. Conference paper (Refereed)
    Abstract [en]

    In this paper we combine a graphical approach for simultaneous localization and mapping, SLAM, with a feature representation that addresses symmetries and constraints in the feature coordinates, the measurement subspace, M-space. The graphical method has the advantages of delayed linearizations and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, star nodes. This local map net is then easier to work with. The formation of the star nodes is explicitly stable and invariant with all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.

  • 215.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Vision SLAM in the Measurement Subspace (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4 Book Series, 2005, pp. 30-35. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of the dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged as well as choice of sensors and features. In other words the SLAM implementation need not be changed at all when changing sensors and features and vice versa. Experimental results both with vision and range data and combinations thereof are presented.

  • 216.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Leederkerken, Jacques
    MIT.
    Williams, Rob
    MIT.
    Patrikalakis, Andrew
    MIT.
    Leonard, John
    MIT.
    A Feature Based Navigation System for an Autonomous Underwater Robot (2008). In: Field and Service Robotics: Results of the 6th International Conference / [ed] Laugier, C; Siegwart, R, Springer Berlin/Heidelberg, 2008, Vol. 42, pp. 105-114. Conference paper (Refereed)
    Abstract [en]

    We present a system for autonomous underwater navigation as implemented on a Nekton Ranger autonomous underwater vehicle, AUV. This is one of the first implementations of a practical application for simultaneous localization and mapping on an AUV. Besides being an application of real-time SLAM, the implementation demonstrates a novel data fusion solution where data from 7 sources are fused at different time scales in 5 separate estimators. By modularizing the data fusion problem in this way, each estimator can be tuned separately to provide output useful to the end goal of localizing the AUV on an a priori map. The Ranger AUV is equipped with a BlueView blazed array sonar which is used to detect features in the underwater environment. Underwater testing results are presented. The features in these tests are deployed radar reflectors.

  • 217.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Leonard, John
    MIT.
    Autonomy through SLAM for an Underwater Robot (2011). In: Robotics Research: The 14th International Symposium ISRR, Springer Berlin/Heidelberg, 2011, Vol. 70, pp. 55-70. Conference paper (Refereed)
    Abstract [en]

    An autonomous underwater vehicle (AUV) is presented that integrates state-of-the-art simultaneous localization and mapping (SLAM) into its decision processes. This autonomy is used to carry out undersea target reacquisition missions that would otherwise be impossible with a low-cost platform. The AUV requires only simple sensors and operates without navigation equipment such as Doppler Velocity Log, inertial navigation or acoustic beacons. Demonstrations of the capability show that the vehicle can carry out the task in an ocean environment. The system includes a forward looking sonar and a set of simple vehicle sensors. The functionality includes feature tracking using a graphical square root smoothing SLAM algorithm, global localization using multiple EKF estimators, and knowledge adaptive mission execution. The global localization incorporates a unique robust matching criterion which utilizes both positive and negative information. Separate match hypotheses are maintained by each EKF estimator, allowing all matching decisions to be reversible.

  • 218.
    Folkesson, John
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Leonard, John
    MIT.
    Leederkerken, Jacques
    MIT.
    Williams, Rob
    MIT.
    Feature tracking for underwater navigation using sonar (2007). In: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, Oct 29 - Nov 2, 2007: Vols 1-9, IEEE conference proceedings, 2007, pp. 3678-3684. Conference paper (Refereed)
    Abstract [en]

    Tracking sonar features in real time on an underwater robot is a challenging task. One reason is the low observability of the sonar in some directions. For example, using a blazed array sonar one observes range and the angle to the array axis with fair precision. The angle around the axis is poorly constrained. This situation is problematic for tracking features in world frame Cartesian coordinates as the error surfaces will not be ellipsoids. Thus Gaussian tracking of the features will not work properly. The situation is similar to the problem of tracking features in camera images. There the unconstrained direction is depth and its errors are highly non-Gaussian. We propose a solution to the sonar problem that is analogous to the successful inverse depth feature parameterization for vision tracking, introduced by [1]. We parameterize the features by the robot pose where it was first seen and the range/bearing from that pose. Thus the 3D features have 9 parameters that specify their world coordinates. We use a nonlinear transformation on the poorly observed bearing angle to give a more accurate Gaussian approximation to the uncertainty. These features are tracked in a SLAM framework until there is enough information to initialize world frame Cartesian coordinates for them. The more compact representation can then be used for a global SLAM or localization purposes. We present results for a system running real time underwater SLAM/localization. These results show that the parameterization leads to greater consistency in the feature location estimates.
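    A minimal sketch of the anchored parameterization described above: a feature is stored as the 6-DOF robot pose at which it was first observed plus a range and two bearing angles measured from that pose, and converted to world-frame Cartesian coordinates only when needed. All names and the angle conventions are illustrative choices, not the authors' code:

    import numpy as np
    from dataclasses import dataclass

    def rotation_matrix(roll, pitch, yaw):
        """World-from-body rotation, Z-Y-X (yaw-pitch-roll) convention."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    @dataclass
    class AnchoredSonarFeature:
        # 6 parameters: robot pose at the first observation
        x: float
        y: float
        z: float
        roll: float
        pitch: float
        yaw: float
        # 3 parameters: measurement from that pose
        range_: float      # well observed
        azimuth: float     # angle to the array axis, fairly well observed
        elevation: float   # poorly observed angle around the axis

        def to_world(self):
            """World-frame Cartesian coordinates of the feature."""
            d = self.range_ * np.array([
                np.cos(self.elevation) * np.cos(self.azimuth),
                np.cos(self.elevation) * np.sin(self.azimuth),
                np.sin(self.elevation),
            ])
            R = rotation_matrix(self.roll, self.pitch, self.yaw)
            return np.array([self.x, self.y, self.z]) + R @ d

    # Example use with made-up numbers:
    feat = AnchoredSonarFeature(10.0, -2.0, 0.5, 0.0, 0.0, 1.2,
                                range_=8.0, azimuth=0.3, elevation=0.05)
    print(feat.to_world())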

  • 219. Frintrop, S.
    et al.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, H.
    Simultaneous robot localization and mapping based on a visual attention system (2007). In: Attention in Cognitive Systems: Theories and Systems from an Interdisciplinary Viewpoint, 2007, pp. 417-430. Conference paper (Refereed)
    Abstract [en]

    Visual attention regions are useful for many applications in the field of computer vision and robotics. Here, we introduce an application to simultaneous robot localization and mapping. A biologically motivated attention system finds regions of interest which serve as visual landmarks for the robot. The regions are tracked and matched over consecutive frames to build stable landmarks and to estimate the 3D position of the landmarks in the environment. Matching of current landmarks to database entries enables loop closing and global localization. Additionally, the system is equipped with an active camera control, which supports the system with a tracking, a re-detection, and an exploration behaviour. We present experiments which show the applicability of the system in a real-world scenario. A comparison between the system operating in active and in passive mode shows the advantage of active camera control: we achieve a better distribution of landmarks as well as a faster and more reliable loop closing.

  • 220.
    Frintrop, Simone
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    VOCUS: A visual attention system for object detection and goal-directed search (2006). In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, pp. 1-228. Journal article (Refereed)
    Abstract [en]

    Visual attention is a mechanism in human perception which selects relevant regions from a scene and provides these regions for higher-level processing as object recognition. This enables humans to act effectively in their environment despite the complexity of perceivable sensor data. Computational vision systems face the same problem as humans: there is a large amount of information to be processed and to achieve this efficiently, maybe even in real-time for robotic applications, the order in which a scene is investigated must be determined in an intelligent way. A promising approach is to use computational attention systems that simulate human visual attention. This monograph introduces the biologically motivated computational attention system VOCUS (Visual Object detection with a Computational attention System) that detects regions of interest in images. It operates in two modes, in an exploration mode in which no task is provided, and in a search mode with a specified target. In exploration mode, regions of interest are defined by strong contrasts (e.g., color or intensity contrasts) and by the uniqueness of a feature. For example, a black sheep is salient in a flock of white sheep. In search mode, the system uses previously learned information about a target object to bias the saliency computations with respect to the target. In various experiments, it is shown that the target is on average found with less than three fixations, that usually less than five training images suffice to learn the target information, and that the system is mostly robust with regard to viewpoint changes and illumination variances. Furthermore, we demonstrate how VOCUS profits from additional sensor data: we apply the system to depth and reflectance data from a 3D laser scanner and show the advantages that the laser modes provide. By fusing the data of both modes, we demonstrate how the system is able to consider distinct object properties and how the flexibility of the system increases by considering different data. Finally, the regions of interest provided by VOCUS serve as input to a classifier that recognizes the object in the detected region. We show how and in which cases the classification is sped up and how the detection quality is improved by the attentional front-end. This approach is especially useful if many object classes have to be considered, a frequently occurring situation in robotics. VOCUS provides a powerful approach to improve existing vision systems by concentrating computational resources to regions that are more likely to contain relevant information. The more the complexity and power of vision systems increase in the future, the more they will profit from an attentional front-end like VOCUS.

  • 221. Frintrop, Simone
    et al.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active Gaze Control for Attentional Visual SLAM (2008). In: 2008 IEEE International Conference on Robotics and Automation, 2008, pp. 3690-3697. Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce an approach to active camera control for visual SLAM. Features, detected by a biologically motivated attention system, are tracked over several frames to determine stable landmarks. Matching of features to database entries enables global loop closing. The focus of this paper is the active camera control module, which supports the system with three behaviours: (i) A tracking behaviour tracks promising landmarks and prevents them from leaving the field of view, (ii) A redetection behaviour directs the camera actively to regions where landmarks are expected and thus supports loop closing, (iii) Finally, an exploration behaviour investigates regions without landmarks and enables a more uniform distribution of landmarks. Several real-world experiments show that the active camera control outperforms the passive system considerably.

  • 222. Frintrop, Simone
    et al.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Attentional Landmarks and Active Gaze Control for Visual SLAM (2008). In: IEEE Transactions on Robotics, special issue on visual SLAM, ISSN 1552-3098, Vol. 24, no. 5, pp. 1054-1065. Journal article (Refereed)
    Abstract [en]

    This paper is centered around landmark detection, tracking, and matching for visual simultaneous localization and mapping using a monocular vision system with active gaze control. We present a system that specializes in creating and maintaining a sparse set of landmarks based on a biologically motivated feature-selection strategy. A visual attention system detects salient features that are highly discriminative and ideal candidates for visual landmarks that are easy to redetect. Features are tracked over several frames to determine stable landmarks and to estimate their 3-D position in the environment. Matching of current landmarks to database entries enables loop closing. Active gaze control allows us to overcome some of the limitations of using a monocular vision system with a relatively small field of view. It supports 1) the tracking of landmarks that enable a better pose estimation, 2) the exploration of regions without landmarks to obtain a better distribution of landmarks in the environment, and 3) the active redetection of landmarks to enable loop closing in situations in which a fixed camera fails to close the loop. Several real-world experiments show that accurate pose estimation is obtained with the presented system and that active camera control outperforms the passive approach.

  • 223.
    Frintrop, Simone
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Attentional landmark selection for visual SLAM (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 2582-2587. Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce a new method to automatically detect useful landmarks for visual SLAM. A biologically motivated attention system detects regions of interest which "pop-out" automatically due to strong contrasts and the uniqueness of features. This property makes the regions easily redetectable and thus they are useful candidates for visual landmarks. Matching based on scene prediction and feature similarity allows not only short-term tracking of the regions, but also redetection in loop closing situations. The paper demonstrates how regions are determined and how they are matched reliably. Various experimental results on real-world data show that the landmarks are useful with respect to be tracked in consecutive frames and to enable closing loops.

  • 224.
    Frintrop, Simone
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pay attention when selecting features (2006). In: 18th International Conference on Pattern Recognition, Vol 2, Proceedings / [ed] Tang, YY; Wang, SP; Lorette, G; Yeung, DS; Yan, H, 2006, pp. 163-166. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new, hierarchical approach to landmark selection for simultaneous robot localization and mapping based on visual sensors: a biologically motivated attention system finds salient regions of interest (ROIs) in images, and within these regions, Harris corners are detected. This combines the advantages of the ROIs (reducing complexity, enabling good redetectability of regions) with the advantages of the Harris corners (high stability). Reducing complexity is important to meet real-time requirements, and stability of features is essential to compute the depth of landmarks from structure from motion with a small baseline. We show that the number of landmarks is highly reduced compared to all Harris corners while maintaining the stability of features for the mapping task.
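    A rough sketch of the two-stage selection (salient ROIs first, Harris corners only inside them); the attention system itself is stubbed out here, since it is the subject of the paper rather than something reproducible in a few lines, and the corner-detector parameters are our own illustrative choices:

    import cv2

    def salient_rois(gray):
        """Placeholder for the biologically motivated attention system.
        Should return a list of (x, y, w, h) regions of interest."""
        raise NotImplementedError

    def select_landmarks(gray, max_per_roi=10):
        points = []
        for (x, y, w, h) in salient_rois(gray):
            patch = gray[y:y + h, x:x + w]
            # Harris-based corner selection restricted to the salient region
            corners = cv2.goodFeaturesToTrack(patch, maxCorners=max_per_roi,
                                              qualityLevel=0.01, minDistance=5,
                                              useHarrisDetector=True)
            if corners is None:
                continue
            for cx, cy in corners.reshape(-1, 2):
                points.append((x + cx, y + cy))  # back to full-image coordinates
        return points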

  • 225. Fritz, M.
    et al.
    Leibe, B.
    Caputo, Barbara
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Schiele, B.
    Integrating representative and discriminant models for object category detection (2005). In: Tenth IEEE International Conference on Computer Vision (ICCV 2005), Volume 2, IEEE Computer Society, 2005, pp. 1363-1370. Conference paper (Refereed)
    Abstract [en]

    Category detection is a lively area of research. While categorization algorithms tend to agree in using local descriptors, they differ in the choice of the classifier, with some using generative models and others discriminative approaches. This paper presents a method for object category detection which integrates a generative model with a discriminative classifier. For each object category, we generate an appearance codebook, which becomes a common vocabulary for the generative and discriminative methods. Given a query image, the generative part of the algorithm finds a set of hypotheses and estimates their support in location and scale. Then, the discriminative part verifies each hypothesis on the same codebook activations. The new algorithm exploits the strengths of both original methods, minimizing their weaknesses. Experiments on several databases show that our new approach performs better than its building blocks taken separately. Moreover, experiments on two challenging multi-scale databases show that our new algorithm outperforms previously reported results.

  • 226.
    Fukui, Kazuhiro
    et al.
    Tsukuba University, Japan.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Difference subspace and its generalization for subspace-based methods (2015). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 37, no. 11, pp. 2164-2177. Journal article (Refereed)
    Abstract [en]

    Subspace-based methods are known to provide a practical solution for image set-based object recognition. Based on the insight that local shape differences between objects offer a sensitive cue for recognition, this paper addresses the problem of extracting a subspace representing the difference components between class subspaces generated from each set of object images independently of each other. We first introduce the difference subspace (DS), a novel geometric concept between two subspaces as an extension of a difference vector between two vectors, and describe its effectiveness in analyzing shape differences. We then generalize it to the generalized difference subspace (GDS) for multi-class subspaces, and show the benefit of applying this to subspace and mutual subspace methods, in terms of recognition capability. Furthermore, we extend these methods to kernel DS (KDS) and kernel GDS (KGDS) by a nonlinear kernel mapping to deal with cases involving larger changes in viewing direction. In summary, the contributions of this paper are as follows: 1) a DS/KDS between two class subspaces characterizes shape differences between the two respectively corresponding objects, 2) the projection of an input vector onto a DS/KDS realizes selective visualization of shape differences between objects, and 3) the projection of an input vector or subspace onto a GDS/KGDS is extremely effective at extracting differences between multiple subspaces, and therefore improves object recognition performance. We demonstrate validity through shape analysis on synthetic and real images of 3D objects as well as extensive comparison of performance on classification tests with several related methods; we study the performance in face image classification on the Yale face database B+ and the CMU Multi-PIE database, and hand shape classification of multi-view images.

  • 227. Förell, Erik
    et al.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Robotsystem och förfarande för behandling av en yta [Robot system and method for treating a surface] (2003). Patent (Other (popular science, debate, etc.))
    Abstract [en]

    Robot system including at least one mobile robot (10), for treating a surface, which comprises map storage means to store a map of the surface to be treated and means to navigate the, or each, mobile robot (10) to at least one point on a surface. The, or each, mobile robot (10) comprises locating means (13,14) to identify its position with respect to the surface to be treated and means to automatically deviate the mobile robot (10) away from its initial path in the event that an obstacle is detected along its path. The, or each, mobile robot (10) also comprises means to store and/or communicate data concerning the surface treatment performed and any obstacles detected by the locating means (13,14).

  • 228. Geidenstam, Sebastian
    et al.
    Huebner, K
    Banksell, Daniel
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning of 2D grasping strategies from box-based 3D object approximations (2010). In: Robotics: Science and Systems, MIT Press, 2010, pp. 9-16. Conference paper (Refereed)
    Abstract [en]

    In this paper, we bridge and extend the approaches of 3D shape approximation and 2D grasping strategies. We begin by applying a shape decomposition to an object, i.e. its extracted 3D point data, using a flexible hierarchy of minimum volume bounding boxes. From this representation, we use the projections of points onto each of the valid faces as a basis for finding planar grasps. These grasp hypotheses are evaluated using a set of 2D and 3D heuristic quality measures. Finally on this set of quality measures, we use a neural network to learn good grasps and the relevance of each quality measure for a good grasp. We test and evaluate the algorithm in the GraspIt! simulator.

  • 229.
    Geronimo, David
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Unsupervised surveillance video retrieval based on human action and appearance (2014). In: Proceedings - International Conference on Pattern Recognition, Los Alamitos: IEEE Computer Society, 2014, pp. 4630-4635. Conference paper (Refereed)
    Abstract [en]

    Forensic video analysis is the offline analysis of video aimed at understanding what happened in a scene in the past. Two of its key tasks are the recognition of specific actions, e.g., walking or fighting, and the search for specific persons, also referred to as re-identification. Although these tasks have traditionally been performed manually in forensic investigations, the current growing number of cameras and recorded video leads to the need for automated analysis. In this paper we propose an unsupervised retrieval system for surveillance videos based on human action and appearance. Given a query window, the system retrieves people performing the same action as the one in the query, the same person performing any action, or the same person performing the same action. We use an adaptive search algorithm that focuses the analysis on relevant frames based on the inter-frame difference of foreground masks. Then, for each analyzed frame, a pedestrian detector is used to extract windows containing each pedestrian in the scene. For each detection, we use optical flow features to represent its action and color features to represent its appearance. These extracted features are used to compute the probability that the detection matches the query according to the specified criterion. The algorithm is fully unsupervised, i.e., no training or constraints on the appearance, actions or number of actions that will appear in the test video are made. The proposed algorithm is tested on a surveillance video with different people performing different actions, providing satisfactory retrieval performance.
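    A coarse sketch of the retrieval loop described above: adaptive frame selection from foreground-mask differences, pedestrian detection on the selected frames, and a per-detection descriptor scored against the query. The helper choices (MOG2 background subtraction, the HOG pedestrian detector, a color histogram only) are illustrative stand-ins, not the authors' exact pipeline, which also uses optical-flow action features:

    import cv2
    import numpy as np

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def appearance_descriptor(window_bgr):
        # simple color-histogram appearance feature
        hist = cv2.calcHist([window_bgr], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def retrieve(frames, query_descriptor, change_threshold=0.02):
        bg = cv2.createBackgroundSubtractorMOG2()
        prev_mask = None
        ranked = []
        for t, frame in enumerate(frames):
            mask = bg.apply(frame)
            if prev_mask is not None:
                change = np.mean(cv2.absdiff(mask, prev_mask) > 0)
                prev_mask = mask
                if change < change_threshold:
                    continue          # skip frames where little has happened
            else:
                prev_mask = mask
            rects, _ = hog.detectMultiScale(frame)   # pedestrian detections
            for (x, y, w, h) in rects:
                desc = appearance_descriptor(frame[y:y + h, x:x + w])
                score = float(np.dot(desc, query_descriptor))
                ranked.append((score, t, (x, y, w, h)))
        return sorted(ranked, reverse=True)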

  • 230.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bütepage, Judith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Self-learning and adaptation in a sensorimotor framework (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 551-558. Conference paper (Refereed)
    Abstract [en]

    We present a general framework to autonomously achieve the task of finding a sequence of actions that result in a desired state. Autonomy is acquired by learning sensorimotor patterns of a robot while it is interacting with its environment. Gaussian processes (GP) with automatic relevance determination are used to learn the sensorimotor mapping. In this way, relevant sensory and motor components can be systematically found in high-dimensional sensory and motor spaces. We propose an incremental GP learning strategy, which discerns between situations in which an update or an adaptation must be implemented. The Rapidly exploring Random Tree (RRT∗) algorithm is exploited to enable long-term planning and generating a sequence of states that lead to a given goal, while a gradient-based search finds the optimum action to steer to a neighbouring state in a single time step. Our experimental results prove the suitability of the proposed framework to learn a joint space controller with high data dimensions (10×15). It demonstrates a short training phase (less than 12 seconds), real-time performance and rapid adaptation capabilities.
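    A toy sketch of the core loop implied above: fit a GP forward model on sensorimotor data (an anisotropic kernel standing in for automatic relevance determination), then use a gradient-based search for the single-step action whose predicted next state is closest to a desired neighbouring state. The RRT* planning layer and the incremental update/adaptation logic are omitted, and the library and kernel choices are ours, not the paper's:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_forward_model(states, actions, next_states):
        """Forward model (state, action) -> next state.
        states: (N, ds), actions: (N, da), next_states: (N, ds) arrays."""
        X = np.hstack([states, actions])
        # one length scale per input dimension plays the role of ARD
        kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(1e-3)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, next_states)
        return gp

    def best_action(gp, state, goal_state, action_dim):
        """Gradient-based search for the action steering toward goal_state."""
        def cost(a):
            pred = gp.predict(np.hstack([state, a]).reshape(1, -1))[0]
            return float(np.sum((pred - goal_state) ** 2))
        res = minimize(cost, x0=np.zeros(action_dim), method="L-BFGS-B")
        return res.x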

  • 231.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bütepage, Judith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A sensorimotor reinforcement learning framework for physical human-robot interaction (2016). In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 2682-2688. Conference paper (Refereed)
    Abstract [en]

    Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.

  • 232.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kootstra, Gert
    Wageningen University, The Netherlands.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning visual forward models to compensate for self-induced image motion (2014). In: 23rd IEEE International Conference on Robot and Human Interactive Communication: IEEE RO-MAN, IEEE, 2014, pp. 1110-1115. Conference paper (Refereed)
    Abstract [en]

    Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high-dimensionality of sensory data and complex action spaces. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to an earlier work using accumulator-based correspondences and Radial Basis function networks. We also show the feasibility of the proposed method for detection of independent motion using a moving camera system. By comparing the predicted and actual captured images, image motion due to the robot's own actions and motion caused by moving external objects can be distinguished. Results show the proposed method to be preferable to the earlier method in terms of both prediction errors and ability to detect independent motion.

  • 233.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination (2015). In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, September 28 - October 02, 2015, IEEE conference proceedings, 2015, pp. 4969-4975. Conference paper (Refereed)
    Abstract [en]

    This paper presents a sensorimotor contingencies (SMC) based method to fully autonomously learn to perform hand-eye coordination. We divide the task into two visuomotor subtasks, visual fixation and reaching, and implement these on a PR2 robot assuming no prior information on its kinematic model. Our contributions are three-fold: i) grounding a robot in the environment by exploiting SMCs in the action planning system, which eliminates the need for prior knowledge of the kinematic or dynamic models of the robot; ii) using a forward model to search for proper actions to solve the task by minimizing a cost function, instead of training a separate inverse model, to speed up training; iii) encoding 3D spatial positions of a target object based on the robot’s joint positions, thus avoiding calibration with respect to an external coordinate system. The method is capable of learning the task of hand-eye coordination from scratch from fewer than 20 sensory-motor pairs that are iteratively generated at real-time speed. In order to examine the robustness of the method while dealing with nonlinear image distortions, we apply a so-called retinal mapping image deformation to the input images. Experimental results show the success of the method even under considerable image deformations.
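    Contribution ii) above can be summarized in one line (our notation, not the paper's): rather than learning an inverse model from desired state to action, the learned forward model f is queried at planning time,

    \[
    a^{*} = \arg\min_{a} \; C\big(f(s, a), \, s_{\mathrm{goal}}\big),
    \]

    where s is the current sensory state, a a candidate action and C a cost function (for example a squared distance) between the predicted and the desired state.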

  • 234.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Scene Representation and Object Grasping Using Active Vision2010Konferensbidrag (Refereegranskat)
  • 235.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Visual servoing on unknown objects2012Ingår i: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 22, nr 4, 423-435 s.Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.

  • 236.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Virtual Visual Servoing for Real-Time Robot Pose Estimation2011Konferensbidrag (Refereegranskat)
    Abstract [en]

    We propose a system for markerless pose estimation and tracking of a robot manipulator. By tracking the manipulator, we can obtain an accurate estimate of its position and orientation necessary in many object grasping and manipulation tasks. Tracking the manipulator also allows for better collision avoidance. The method is based on the notion of virtual visual servoing. We also propose the use of a distance transform in the control loop, which makes the performance independent of the feature search window.
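
    The distance-transform idea can be sketched as follows (assumptions, not the paper's code): precompute the Euclidean distance transform of the edge image once per frame, so that the error at each projected model point is simply the distance to the nearest image edge and no per-feature search window is required.

        # Rough sketch: per-point error from a distance transform of the edge image.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def edge_distance_error(edge_image, projected_points):
            """edge_image: boolean edge map; projected_points: (N, 2) array of (row, col)."""
            # distance_transform_edt measures distance to the nearest zero, so invert the mask
            dist = distance_transform_edt(~edge_image)
            rows = np.clip(projected_points[:, 0].astype(int), 0, edge_image.shape[0] - 1)
            cols = np.clip(projected_points[:, 1].astype(int), 0, edge_image.shape[1] - 1)
            return dist[rows, cols]          # error signal driving the virtual pose update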

  • 237.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation2015Ingår i: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, nr February, 240-245 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper, we propose an approach for vision-based pose estimation of a robot hand or of the full body. The method is based on virtual visual servoing using a CAD model of the robot and it combines 2-D image features with depth features. The method can be applied to estimate either the pose of a robot hand or the pose of the whole body, given that its joint configuration is known. We present experimental results that show the performance of the approach as demonstrated on both a mobile humanoid robot and a stationary manipulator.

  • 238.
    Guo, Meng
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Tumova, Jana
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dimarogonas, Dino V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Hybrid control of multi-agent systems under local temporal tasks and relative-distance constraints2016Ingår i: Proceedings of the IEEE Conference on Decision and Control, IEEE conference proceedings, 2016, 1701-1706 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper, we propose a distributed hybrid control strategy for multi-agent systems where each agent has a local task specified as a Linear Temporal Logic (LTL) formula and at the same time is subject to relative-distance constraints with its neighboring agents. The local tasks capture the temporal requirements on individual agents' behaviors, while the relative-distance constraints impose requirements on the collective motion of the whole team. The proposed solution relies only on relative-state measurements among the neighboring agents without the need for explicit information exchange. It is guaranteed that the local tasks given as syntactically co-safe or general LTL formulas are fulfilled and the relative-distance constraints are satisfied at all times. The approach is demonstrated with computer simulations.

  • 239.
    Gálvez del Postigo Fernández, Carlos
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving2015Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system.

    The proposed solution consists of the low-level fusion of radar and lidar through the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and by performing advanced conflict management.

    The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, detection rate as well as information content in terms of entropy. The results show a significant improvement of the detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although the high computational cost limits its practical application. Lastly, we demonstrate that the proposed solution is easily scalable to include additional sensors.
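
    The two inference schemes compared above can be illustrated for a single grid cell with a small, hypothetical sketch (not the thesis implementation): the Bayesian grid accumulates log-odds per cell, while the Dempster-Shafer grid keeps mass on occupied, free and unknown and fuses sensors with Dempster's rule of combination, whose conflict term is what enables the conflict management mentioned above.

        # Sketch of the two fusion rules for one occupancy-grid cell.
        import numpy as np

        def bayes_update(log_odds, p_occ_given_measurement):
            """Standard log-odds Bayesian occupancy update for one measurement."""
            p = np.clip(p_occ_given_measurement, 1e-6, 1 - 1e-6)
            return log_odds + np.log(p / (1.0 - p))

        def dempster_combine(m1, m2):
            """Dempster's rule over the frame {occupied, free}; masses are dicts with
            keys 'O', 'F' and 'U' (unknown, i.e. the whole frame). Assumes conflict < 1."""
            conflict = m1['O'] * m2['F'] + m1['F'] * m2['O']
            k = 1.0 - conflict
            return {
                'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / k,
                'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / k,
                'U': (m1['U'] * m2['U']) / k,
            }

        # e.g. fusing a lidar cell reading with a conflicting radar cell reading:
        print(dempster_combine({'O': 0.6, 'F': 0.1, 'U': 0.3},
                               {'O': 0.2, 'F': 0.5, 'U': 0.3}))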

  • 240.
    Gálvez López, Dorian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Paul, Chandana
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hybrid Laser and Vision Based Object Search and Localization2008Ingår i: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, 2636-2643 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    We describe a method for an autonomous robot to efficiently locate one or more distinct objects in a realistic environment using monocular vision. We demonstrate how to efficiently subdivide acquired images into interest regions for the robot to zoom in on, using receptive field cooccurrence histograms. Objects are recognized through SIFT feature matching and the positions of the objects are estimated. Assuming a 2D map of the robot's surroundings and a set of navigation nodes between which it is free to move, we show how to compute an efficient sensing plan that allows the robot's camera to cover the environment, while obeying restrictions on the different objects' maximum and minimum viewing distances. The approach has been implemented on a real robotic system and results are presented showing its practicability and the quality of the position estimates obtained.

  • 241.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    CanApp: The Candela Application Library1989Rapport (Övrigt vetenskapligt)
    Abstract [en]

    This paper describes CanApp, the Candela Application Library. CanApp is a software package for image processing and image analysis. Most of the subroutines in CanApp are available both as stand-alone programs and C subroutines.

    CanApp currently comprises some 50 programs and 75 subroutines, and these numbers are expected to grow continuously as a result of joint efforts of the members of the CVAP group at the Royal Institute of Technology in Stockholm.

    CanApp is currently installed and running under UNIX on Sun workstations.

  • 242.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Direct computation of shape cues using scale-adapted spatial derivative operators1996Ingår i: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 17, nr 2, 163-191 s.Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of either a single monocular image or a binocular image pair. It is shown that starting from Gaussian derivatives of order up to two at a range of scales in scale-space, local estimates of (i) surface orientation from monocular texture foreshortening, (ii) surface orientation from monocular texture gradients, and (iii) surface orientation from the binocular disparity gradient can be computed without iteration or search, and by using essentially the same basic mechanism. The methodology is based on a multi-scale descriptor of image structure called the windowed second moment matrix, which is computed with adaptive selection of both scale levels and spatial positions. Notably, this descriptor comprises two scale parameters: a local scale parameter describing the amount of smoothing used in derivative computations, and an integration scale parameter determining over how large a region in space the statistics of regional descriptors is accumulated. Experimental results for both synthetic and natural images are presented, and the relation with models of biological vision is briefly discussed.
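
    For reference, the windowed second moment matrix with its two scale parameters takes the following form in the scale-space literature (the paper's notation may differ in details):

        % local scale t for the Gaussian derivatives, integration scale s for the window
        \mu_L(q;\, t, s) = \int_{\xi \in \mathbb{R}^2}
          \begin{pmatrix} L_x^2 & L_x L_y \\ L_x L_y & L_y^2 \end{pmatrix}\!(\xi;\, t)\;
          g(q - \xi;\, s)\, d\xi

    where L_x and L_y are Gaussian derivatives of the image computed at the local scale t, and g(.; s) is a Gaussian window at the integration scale s.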

  • 243.
    Gårding, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Direct estimation of local surface shape in a fixating binocular vision system1994Ingår i: Computer Vision — ECCV '94: Third European Conference on Computer Vision Stockholm, Sweden, May 2–6, 1994 Proceedings, Volume I, Springer Berlin/Heidelberg, 1994, 365-376 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of a binocular image pair. The geometric information content of the gradient of binocular disparity is analyzed for the general case of a fixating vision system with symmetric or asymmetric vergence, and with either known or unknown viewing geometry. A computationally inexpensive technique which exploits this analysis is proposed. This technique allows a local estimate of surface orientation to be computed directly from the local statistics of the left and right image brightness gradients, without iterations or search. The viability of the approach is demonstrated with experimental results for both synthetic and natural gray-level images.

  • 244.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Aydemir, Alper
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty2012Ingår i: Proceedings of the 35th German Conference on Artificial Intelligence (KI’12), 2012Konferensbidrag (Refereegranskat)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension, i.e. awareness of gaps in knowledge and the ability to reason and act to fill those gaps. We discuss our robot Dora, a showcase outcome of that project: Dora can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 245.
    Göransson, Rasmus
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Aydemir, A.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kinect@home: A crowdsourced RGB-D dataset2016Ingår i: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2016, Vol. 302, 843-858 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    Algorithms for 3D localization, mapping, and reconstruction are getting increasingly mature. It is time to also make the datasets on which they are tested more realistic to reflect the conditions in the homes of real people. Today, algorithms are tested on data gathered in the lab or at best in a few places, and almost always by the people that designed the algorithm. In this paper, we present the first RGB-D dataset from the crowdsourced data collection project Kinect@Home and perform an initial analysis of it. The dataset contains 54 recordings with a total of approximately 45 min of RGB-D video. We present a comparison of two different pose estimation methods, the Kinfu algorithm and a keypoint-based method, to show how this dataset can be used even though it is lacking ground truth. In addition, the analysis highlights the different characteristics and error modes of the two methods and shows how challenging data from the real world is.

  • 246.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gratal, Xavi
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pauwels, Karl
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    What's in the Container?: Classifying Object Contents from Vision and Touch2014Ingår i: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems  (IROS 2014), IEEE , 2014, 3961-3968 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate the container. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are either empty or filled with liquid or solid content. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.

  • 247.
    Güler, Rezan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för bioteknologi (BIO), Proteinteknologi.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pieropan, Alessandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Estimating the Deformability of Elastic Materials using Optical Flow and Position-based Dynamics2015Ingår i: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, IEEE conference proceedings, 2015, 965-971 s.Konferensbidrag (Refereegranskat)
    Abstract [en]

    Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we use vision-based observation of the behavior of an object the robot is interacting with as the basis for estimating its elastic deformability. This is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation using meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
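
    The simulation component referred to above can be sketched with the generic meshless shape matching step (Mueller-style shape matching, shown here only as an assumed illustration, not the authors' estimation pipeline): the rest shape is rigidly aligned to the observed particle positions and a stiffness parameter pulls the particles toward the aligned goal positions; estimating deformability then amounts to fitting such a parameter against the observed optical-flow motion.

        # Generic shape matching step; alpha is the stiffness a deformability
        # estimate would adjust.
        import numpy as np

        def shape_matching_step(rest, deformed, alpha=1.0):
            """rest, deformed: (N, 3) particle positions; returns updated positions."""
            c_rest, c_def = rest.mean(axis=0), deformed.mean(axis=0)
            P, Q = deformed - c_def, rest - c_rest
            A = P.T @ Q                               # covariance between configurations
            U, _, Vt = np.linalg.svd(A)
            R = U @ Vt
            if np.linalg.det(R) < 0:                  # guard against reflections
                U[:, -1] *= -1
                R = U @ Vt
            goals = (R @ Q.T).T + c_def               # rigidly transformed rest shape
            return deformed + alpha * (goals - deformed)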

  • 248.
    Hamid Muhammed, Hamed
    et al.
    KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik.
    Bergholm, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sensitivity Analysis of Multichannel Images Intended for Instantaneous Imaging Spectrometry Applications2010Ingår i: SIAM Journal on Imaging Sciences, ISSN 1936-4954, E-ISSN 1936-4954, Vol. 3, nr 1, 79-109 s.Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    This paper presents a sensitivity analysis of using instantaneous multichannel two-dimensional (2D) imaging to achieve instantaneous 2D imaging spectroscopy. A simulated multiple-filter mosaic was introduced and used to acquire multichannel data which were transformed into spectra. The feasibility of two different transformation approaches (the concrete pseudoinverse approach and a statistical approach) was investigated through extensive experimental tasks. A promising statistical method was identified to be used for accurate estimation of spectra from multichannel data. Comparison between estimated and measured spectra shows that higher estimation accuracy can be achieved when using a larger number of usable multiple-filter combinations in the mosaic.
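
    The pseudoinverse transformation approach mentioned above can be illustrated with a minimal, hypothetical sketch: if the filter sensitivity matrix of the mosaic is known, a least-squares spectrum estimate per pixel follows directly from the channel values. The matrix S and the dimensions below are placeholders; the statistical approach in the paper instead learns the mapping from training pairs.

        # Sketch of pseudoinverse spectral estimation from multichannel data,
        # assuming channels = S @ spectrum for a known sensitivity matrix S.
        import numpy as np

        n_channels, n_wavelengths = 8, 61                 # hypothetical sampling
        S = np.random.rand(n_channels, n_wavelengths)     # stand-in for measured filter curves

        def estimate_spectrum(channel_values, S):
            """Least-squares spectrum estimate from the multichannel measurement."""
            return np.linalg.pinv(S) @ channel_values

        measured = S @ np.random.rand(n_wavelengths)      # simulated pixel measurement
        print(estimate_spectrum(measured, S).shape)       # (61,)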

  • 249.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. CVAP/CAS/CSC, KTH Royal Institute of Technology.
    Dexterous Grasping: Representation and Optimization2016Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    Many robot-object interactions require that an object is firmly held, and that the grasp remains stable during the whole manipulation process. Based on grasp wrench space, this thesis addresses the problems of measuring the grasp sensitivity against friction changes, planning contacts and hand configurations on mesh and point cloud representations of arbitrary objects, planning adaptable grasps and finger gaiting for keeping a grasp stable under various external disturbances, as well as learning of grasping manifolds for more accurate reachability and inverse kinematics computation for multifingered grasping.

    Firstly, we propose a new concept called friction sensitivity, which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients.

    Secondly, for fast planning of contacts and hand configurations for dexterous grasping, as well as keeping the stability of a grasp during execution, we present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. 

    Lastly, to improve the efficiency and accuracy of dexterous grasping and in-hand manipulation, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution.

  • 250.
    Hang, Kaiyu
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Haustein, Joshua
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    On the Evolution of Fingertip Grasping Manifolds2016Ingår i: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, 2022-2029 s., 7487349Konferensbidrag (Refereegranskat)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a SchunkSDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system’s experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.
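
    The learned reachability heuristic mentioned above can be illustrated with a small, hypothetical sketch: a random forest is trained on candidate fingertip contact sets labeled by whether the hand could actually reach them during execution, and its predicted probability is then used to rank grasp candidates. The feature encoding, dataset and variable names below are placeholders, not the paper's implementation.

        # Hypothetical random-forest reachability heuristic for fingertip grasps.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Placeholder features, e.g. flattened contact positions/normals relative to the palm.
        X = np.random.rand(500, 18)
        y = np.random.randint(0, 2, 500)          # 1 = the hand reached this contact set

        reachability = RandomForestClassifier(n_estimators=100).fit(X, y)

        def reachability_score(contact_features):
            """Estimated probability that the hand can realize a candidate grasp."""
            return reachability.predict_proba(np.atleast_2d(contact_features))[0, 1]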
