Results 151–200 of 679
  • 151. Damangir, Soheil
    et al.
    Manzouri, Amirhossein
    Oppedal, Ketil
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Firbank, Michael J.
    Sonnesyn, Hogne
    Tysnes, Ole-Bjorn
    O'Brien, John T.
    Beyer, Mona K.
    Westman, Eric
    Aarsland, Dag
    Wahlund, Lars-Olof
    Spulber, Gabriela
    Multispectral MRI segmentation of age related white matter changes using a cascade of support vector machines (2012). In: Journal of the Neurological Sciences, ISSN 0022-510X, E-ISSN 1878-5883, Vol. 322, no. 1-2, pp. 211-216. Article in journal (Refereed)
    Abstract [en]

    White matter changes (WMC) are the focus of intensive research and have been linked to cognitive impairment and depression in the elderly. Cumbersome manual outlining procedures make research on WMC labor-intensive and prone to subjective bias. We present a fast, fully automated method for WMC segmentation using a cascade of reduced support vector machines (SVMs) with active learning. Data from 102 subjects were used in this study. Two MRI sequences (T1-weighted and FLAIR) and masks of manually outlined WMC from each subject were used for the image analysis. The segmentation framework comprises pre-processing, classification (training and core segmentation) and post-processing. After pre-processing, the model was trained on two subjects and tested on the remaining 100 subjects. The effectiveness and robustness of the classification were assessed using the receiver operating characteristic (ROC) technique. The cascade-of-SVMs segmentation framework produced accurate results with high sensitivity (90%) and specificity (99.5%), with the manually outlined WMC as reference. An algorithm for the segmentation of WMC is proposed. It is a fully competitive and fast automatic segmentation framework, capable of using different input sequences without changes or restrictions to the image analysis algorithm.
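The cascade structure described in the abstract can be sketched generically: each stage sees only the samples that every earlier stage accepted, so cheap early stages discard most background voxels before the stricter later stages run. A minimal sketch with simple threshold functions standing in for the paper's reduced SVMs (all data and thresholds here are illustrative):

```python
# Sketch of a classifier cascade: each stage only forwards the samples
# its predecessors accepted, so most negatives exit early.
# Threshold "stages" stand in for the reduced SVMs of the paper.

def run_cascade(stages, samples):
    """Return indices of samples accepted by every stage in order."""
    accepted = list(range(len(samples)))
    for stage in stages:
        accepted = [i for i in accepted if stage(samples[i])]
    return accepted

# Toy 1-D "intensity" data: WMC-like voxels are bright on FLAIR
samples = [0.1, 0.4, 0.75, 0.9, 0.65, 0.95]
stages = [
    lambda x: x > 0.5,   # cheap first stage rejects obvious background
    lambda x: x > 0.7,   # stricter second stage refines the decision
]
print(run_cascade(stages, samples))  # → [2, 3, 5]
```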

  • 152. Damianou, A. C.
    et al.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Titsias, M. K.
    Lawrence, N. D.
    Manifold relevance determination (2012). In: Proceedings of the 29th International Conference on Machine Learning, ICML 2012, 2012, pp. 145-152. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a fully Bayesian latent variable model which exploits conditional non-linear (in)-dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a "softly" shared latent space. Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tens of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data.

  • 153. Damianou, Andreas
    et al.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Boorman, Luke
    Lawrence, Neil D.
    Prescott, Tony J.
    A Top-Down Approach for a Synthetic Autobiographical Memory System (2015). In: BIOMIMETIC AND BIOHYBRID SYSTEMS, LIVING MACHINES 2015, Springer, 2015, pp. 280-292. Conference paper (Refereed)
    Abstract [en]

    Autobiographical memory (AM) refers to the organisation of one's experience into a coherent narrative. The exact neural mechanisms responsible for the manifestation of AM in humans are unknown. On the other hand, the field of psychology has provided us with useful understanding of the functionality of a bio-inspired synthetic AM (SAM) system, at a higher level of description. This paper is concerned with a top-down approach to SAM, where known components and organisation guide the architecture but the unknown details of each module are abstracted. By using Bayesian latent variable models we obtain a transparent SAM system with which we can interact in a structured way. This allows us to reveal the properties of specific sub-modules and map them to functionality observed in biological systems. The top-down approach can cope well with the high performance requirements of a bio-inspired cognitive system. This is demonstrated in experiments using face data.

  • 154. Danafar, Somayeh
    et al.
    Sheikh, Leila Taghavi
    Targhi, Alireza Tavakoli
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    A method for eye detection based on SVD transforms (2006). In: International journal of imaging systems and technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 16, no. 5, pp. 222-229. Article in journal (Refereed)
    Abstract [en]

    A set of transforms (SVD transforms) was introduced in (Shahshahani and Tavakoli Targhi) for understanding images. These transforms have been applied to several problems in computer vision, including segmentation, detection of objects in a textured environment, classification of textures, and detection of cracks or other imperfections. This technique is shown to be applicable to determining the location of eyes in a facial image. The method makes no use of color cues, prior geometric knowledge or other assumptions and does not require training. It is also insensitive to local perturbations in lighting, change of orientation and pose, scaling, and complexity of the background, including indoor and outdoor environments. The method can be used for eye tracking and has applications to face recognition. It has also been used in animal eye detection and differentiation.
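The core of an SVD transform is replacing each pixel by the singular values of the patch around it: textured regions such as eyes spread energy over several singular values, while smooth skin is close to rank one. A rough numpy sketch (window size and number of retained values are assumptions, not the paper's settings):

```python
import numpy as np

def svd_transform(img, w=8):
    """Map each w x w patch of a grayscale image to its 3 leading
    singular values. Textured patches (e.g. eyes) have larger trailing
    singular values than smooth patches."""
    h, ww = img.shape
    out = np.zeros((h - w + 1, ww - w + 1, 3))
    for i in range(h - w + 1):
        for j in range(ww - w + 1):
            s = np.linalg.svd(img[i:i+w, j:j+w], compute_uv=False)
            out[i, j] = s[:3]
    return out

rng = np.random.default_rng(0)
smooth = np.ones((8, 8))    # flat patch: rank 1, one nonzero singular value
noisy = rng.random((8, 8))  # textured patch: effectively full rank
s_smooth = np.linalg.svd(smooth, compute_uv=False)
s_noisy = np.linalg.svd(noisy, compute_uv=False)
# the second singular value separates texture from flat regions
print(s_smooth[1] < 1e-9, s_noisy[1] > 0.1)
```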

  • 155.
    Danielsson, Oscar
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Shape-based Representations and Boosting for Visual Object Class Detection: Models and methods for representation and detection in single and multiple views (2011). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Detection of generic visual object classes (e.g. cars, dogs, mugs or people) in images is a task that humans are able to solve with remarkable ease. Unfortunately this has proven a very challenging task for computer vision. The reason is that different instances of the same class may look very different, i.e. there is a high intra-class variation. There are several causes of intra-class variation; for example (1) the imaging conditions (e.g. lighting and exposure) may change, (2) different objects of the same class typically differ in shape and appearance, (3) the position of the object relative to the camera (i.e. the viewpoint) may change and (4) some objects are articulated and may change pose. In addition, the background class, i.e. everything but the target object class, is very large. It is the combination of very high intra-class variation with a large background class that makes generic object class detection difficult.

    This thesis addresses this challenge within the AdaBoost framework. AdaBoost constructs an ensemble of weak classifiers to solve a given classification task and allows great flexibility in the design of these weak classifiers. This thesis proposes several types of weak classifiers that specifically target some of the causes of high intra-class variation. A multi-local classifier is proposed to capture global shape properties for object classes that lack discriminative local features, projectable classifiers are proposed to handle detection from multiple viewpoints and finally gated classifiers are proposed as a generic way to handle high intra-class variation in combination with a large background class.

    All proposed weak classifiers are evaluated on standard datasets to allow performance comparison to other related methods.

  • 156.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Generic Object Class Detection using Boosted Configurations of Oriented Edges (2010). In: Computer Vision – ACCV 2010 / [ed] Kimmel, R; Klette, R; Sugimoto, A, Springer Berlin/Heidelberg, 2010, pp. 1-14. Conference paper (Refereed)
    Abstract [en]

    In this paper we introduce a new representation for shape-based object class detection. This representation is based on very sparse and slightly flexible configurations of oriented edges. An ensemble of such configurations is learnt in a boosting framework. Each edge configuration can capture some local or global shape property of the target class and the representation is thus not limited to representing and detecting visual classes that have distinctive local structures. The representation is also able to handle significant intra-class variation. The representation allows for very efficient detection and can be learnt automatically from weakly labelled training images of the target class. The main drawback of the method is that, since its inductive bias is rather weak, it needs a comparatively large training set. We evaluate on a standard database [1] and when using a slightly extended training set, our method outperforms state of the art [2] on four out of five classes.

  • 157.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Generic Object Class Detection using Feature Maps (2011). In: Proceedings of Scandinavian Conference on Image Analysis, 2011, pp. 348-359. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an object class model and a detection scheme based on feature maps, i.e. binary images indicating occurrences of various local features. Any type of local feature and any number of features can be used to generate feature maps. The choice of which features to use can thus be adapted to the task at hand, without changing the general framework. An object class is represented by a boosted decision tree classifier (which may be cascaded) based on normalized distances to feature occurrences. The resulting object class model is essentially a linear combination of a set of flexible configurations of the features used. Within this framework we present an efficient detection scheme that uses a hierarchical search strategy. We demonstrate experimentally that this detection scheme yields a significant speedup compared to sliding window search. We evaluate the detection performance on a standard dataset [7], showing state of the art results. Features used in this paper include edges, corners, blobs and interest points.

  • 158.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Projectable Classifiers for Multi-View Object Class Recognition (2011). In: 3rd International IEEE Workshop on 3D Representation and Recognition, 2011. Conference paper (Refereed)
    Abstract [en]

    We propose a multi-view object class modeling framework based on a simplified camera model and surfels (defined by a location and normal direction in a normalized 3D coordinate system) that mediate coarse correspondences between different views. Weak classifiers are learnt relative to the reference frames provided by the surfels. We describe a weak classifier that uses contour information when its corresponding surfel projects to a contour element in the image and color information when the face of the surfel is visible in the image. We emphasize that these weak classifiers can possibly take many different forms and use many different image features. Weak classifiers are combined using AdaBoost. We evaluate the method on a public dataset [8], showing promising results on categorization, recognition/detection, pose estimation and image synthesis.

  • 159.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Automatic Learning and Extraction of Multi-Local Features (2009). In: Proceedings of the IEEE International Conference on Computer Vision, 2009, pp. 917-924. Conference paper (Refereed)
    Abstract [en]

    In this paper we introduce a new kind of feature - the multi-local feature, so named as each one is a collection of local features, such as oriented edgels, in a very specific spatial arrangement. A multi-local feature has the ability to capture underlying constant shape properties of exemplars from an object class. Thus it is particularly suited to representing and detecting visual classes that lack distinctive local structures and are mainly defined by their global shape. We present algorithms to automatically learn an ensemble of these features to represent an object class from weakly labelled training images of that class, as well as procedures to detect these features efficiently in novel images. The power of multi-local features is demonstrated by using the ensemble in a simple voting scheme to perform object category detection on a standard database. Despite its simplicity, this scheme yields detection rates matching state-of-the-art object detection systems.

  • 160.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Object Detection using Multi-Local Feature Manifolds (2008). In: Proceedings - Digital Image Computing: Techniques and Applications, DICTA 2008, 2008, pp. 612-618. Conference paper (Refereed)
    Abstract [en]

    Many object categories are better characterized by the shape of their contour than by local appearance properties like texture or color. Multi-local features are designed in order to capture the global discriminative structure of an object while at the same time avoiding the drawbacks with traditional global descriptors such as sensitivity to irrelevant image properties. The specific structure of multi-local features allows us to generate new feature exemplars by linear combinations which effectively increases the set of stored training exemplars. We demonstrate that a multi-local feature is a good "weak detector" of shape-based object categories and that it can accurately estimate the bounding box of objects in an image. Using just a single multi-local feature descriptor we obtain detection results comparable to those of more complex and elaborate systems. It is our opinion that multi-local features have a great potential as generic object descriptors with very interesting possibilities of feature sharing within and between classes.

  • 161.
    Danielsson, Oscar Martin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Category-sensitive hashing and bloom filter based descriptors for online keypoint recognition (2015). In: 19th Scandinavian Conference on Image Analysis, SCIA 2015, Springer, 2015, pp. 329-340. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a method for learning a category-sensitive hash function (i.e. a hash function that tends to map inputs from the same category to the same hash bucket) and a feature descriptor based on the Bloom filter. Category-sensitive hash functions are robust to intra-category variation. In this paper we use them to produce descriptors that are invariant to transformations caused by, for example, viewpoint changes, lighting variation and deformation. Since the descriptors are based on Bloom filters, they support a "union" operation, so descriptors of matched features can be aggregated by taking their union. We thus end up with one descriptor per keypoint instead of one descriptor per feature (by keypoint we refer to a world-space reference point and by feature we refer to an image-space interest point; features are typically observations of keypoints, and matched features are observations of the same keypoint). In short, the proposed descriptor has data-defined invariance properties due to the category-sensitive hashing and is aggregatable due to its Bloom filter inheritance. This is useful whenever we require custom invariance properties (e.g. tracking of deformable objects) and/or when we make multiple observations of each keypoint (e.g. tracking, multi-view stereo or visual SLAM).
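The "union" property the abstract relies on is inherent to Bloom filters: two filters built with the same size and hash functions merge by bitwise OR, and the merged filter answers membership queries for either input set. A minimal sketch (filter size, hash count and the item names are illustrative, not the paper's choices):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over a single integer bit array."""

    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

    def union(self, other):
        """Aggregate two descriptors by OR-ing their bit arrays."""
        out = BloomFilter(self.m, self.k)
        out.bits = self.bits | other.bits
        return out

a, b = BloomFilter(), BloomFilter()
a.add("feature-view-1")   # observation of a keypoint in one view
b.add("feature-view-2")   # observation of the same keypoint in another
merged = a.union(b)       # one descriptor for the keypoint
print("feature-view-1" in merged, "feature-view-2" in merged)  # True True
```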

  • 162.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Nonlinear classification of data (2012). Patent (Other (popular science, debate, etc.))
    Abstract [en]

    The present invention relates to a method for nonlinear classification of high-dimensional data by means of boosting, whereby a target class with significant intra-class variation is classified against a large background class, where the boosting algorithm produces a strong classifier, the strong classifier being a linear combination of weak classifiers. The present invention specifically teaches that weak classifiers h1, h2, that individually more often than not generate a positive on instances within the target class and a negative on instances outside of the target class, but that never generate a positive simultaneously on one and the same target instance, are categorized as a group of anti-correlated classifiers, and that the simultaneous occurrence of positives from anti-correlated classifiers of the same group will generate a negative.

  • 163.
    Danielsson, Oscar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Gated Classifiers: Boosting under high intra-class variation (2011). In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 2011, pp. 2673-2680. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of using boosting (e.g. AdaBoost [7]) to classify a target class with significant intra-class variation against a large background class. This situation occurs for example when we want to recognize a visual object class against all other image patches. The boosting algorithm produces a strong classifier, which is a linear combination of weak classifiers. We observe that we often have sets of weak classifiers that individually fire on many examples of the target class but never fire together on those examples (i.e. their outputs are anti-correlated on the target class). Motivated by this observation we suggest a family of derived weak classifiers, termed gated classifiers, that suppress such combinations of weak classifiers. Gated classifiers can be used on top of any original weak learner. We run experiments on two popular datasets, showing that our method reduces the required number of weak classifiers by almost an order of magnitude, which in turn yields faster detectors. We experiment on synthetic data showing that gated classifiers enable more complex distributions to be represented. We hope that gated classifiers will extend the usefulness of boosted classifier cascades [29].
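The gating mechanism can be sketched directly: given a group of weak classifiers known to be anti-correlated on the target class, the derived classifier outputs a negative whenever two or more of them fire on the same instance, since that pattern never occurs on target examples. The AdaBoost weighting and the learning of the groups are omitted; all names and toy classifiers below are illustrative:

```python
def gated_classifier(weak_group):
    """Derive a classifier from a group of weak classifiers that are
    anti-correlated on the target class: if two or more members fire
    on the same instance, that pattern indicates background, so emit
    -1; otherwise return positive iff exactly one member fired."""
    def classify(x):
        fired = sum(1 for h in weak_group if h(x) > 0)
        if fired >= 2:
            return -1   # anti-correlated pair fired together: background
        return 1 if fired == 1 else -1
    return classify

# Toy weak classifiers on 2-D points: each fires on one half-plane
h1 = lambda x: 1 if x[0] > 0 else -1
h2 = lambda x: 1 if x[1] > 0 else -1
g = gated_classifier([h1, h2])
print(g((1, -1)), g((-1, 1)), g((1, 1)))  # 1 1 -1
```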

  • 164. Davies, A.
    et al.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Dalton, C.
    Campbell, N.
    Generating 3D Morphable Model parameters for facial tracking: Factorising identity and expression (2012). In: GRAPP 2012 IVAPP 2012 - Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications, 2012, pp. 309-318. Conference paper (Refereed)
    Abstract [en]

    The ability to factorise parameters into identity and expression parameters is highly desirable in facial tracking as it requires only the identity parameters to be set in the initial frame leaving the expression parameters to be adjusted in subsequent frames. In this paper we introduce a strategy for creating parameters for a data-driven 3D Morphable Model (3DMM) which are able to separately model the variance due to identity and expression found in the training data. We present three factorisation schemes and evaluate their appropriateness for tracking by comparing the variances between the identity coefficients and expression coefficients when fitted to data of individuals performing different facial expressions.

  • 165. Davies, Alexander
    et al.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Dalton, Colin J.
    Campbell, Neill
    Facial Movement Based Recognition (2011). In: 5th International Conference on Computer Vision/Computer Graphics Collaboration Techniques, MIRAGE 2011, 2011, pp. 51-62. Conference paper (Refereed)
    Abstract [en]

    The modelling and understanding of the facial dynamics of individuals is crucial to achieving higher levels of realistic facial animation. We address the recognition of individuals through modelling the facial motions of several subjects. Modelling facial motion comes with numerous challenges including accurate and robust tracking of facial movement, high dimensional data processing and non-linear spatial-temporal structural motion. We present a novel framework which addresses these problems through the use of video-specific Active Appearance Models (AAM) and Gaussian Process Latent Variable Models (GP-LVM). Our experiments and results qualitatively and quantitatively demonstrate the framework's ability to successfully differentiate individuals by temporally modelling appearance invariant facial motion. Thus supporting the proposition that a facial activity model may assist in the areas of motion retargeting, motion synthesis and experimental psychology.

  • 166.
    Detry, Renaud
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Madry, Marianna
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning a dictionary of prototypical grasp-predicting parts from grasping experience (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 601-608. Conference paper (Refereed)
    Abstract [en]

    We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points onto novel objects. We focus our report on the definition of a similarity measure that reflects whether the shapes of two parts resemble each other, and whether their associated grasps are applied near one another. We present an experiment in which our agent extracts five prototypical parts from thirty-two real-world grasp examples, and we demonstrate the applicability of the prototypical parts for grasping novel objects.

  • 167.
    Detry, Renaud
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Madry, Marianna
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Piater, Justus
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Generalizing grasps across partly similar objects (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 3791-3797. Conference paper (Refereed)
    Abstract [en]

    The paper starts by reviewing the challenges associated to grasp planning, and previous work on robot grasping. Our review emphasizes the importance of agents that generalize grasping strategies across objects, and that are able to transfer these strategies to novel objects. In the rest of the paper, we then devise a novel approach to the grasp transfer problem, where generalization is achieved by learning, from a set of grasp examples, a dictionary of object parts by which objects are often grasped. We detail the application of dimensionality reduction and unsupervised clustering algorithms to the end of identifying the size and shape of parts that often predict the application of a grasp. The learned dictionary allows our agent to grasp novel objects which share a part with previously seen objects, by matching the learned parts to the current view of the new object, and selecting the grasp associated to the best-fitting part. We present and discuss a proof-of-concept experiment in which a dictionary is learned from a set of synthetic grasp examples. While prior work in this area focused primarily on shape analysis (parts identified, e.g., through visual clustering, or salient structure analysis), the key aspect of this work is the emergence of parts from both object shape and grasp examples. As a result, parts intrinsically encode the intention of executing a grasp.

  • 168. Dotsenko, Vladimir
    et al.
    Vejdemo-Johansson, Mikael
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Implementing Gröbner bases for operads (2009). In: Séminaires et Congrès, Vol. 26, pp. 77-98. Article in journal (Refereed)
    Abstract [en]

    We present an implementation of the algorithm for computing Gröbner bases for operads due to the first author and A. Khoroshkin. We discuss the actual algorithms, the choices made for the implementation platform and the data representation, and the strengths and weaknesses of our approach.

  • 169. Dotsenko, Vladimir
    et al.
    Vejdemo-Johansson, Mikael
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Operadic Gröbner bases: an implementation (2010). In: Mathematical Software–ICMS 2010, Springer Berlin/Heidelberg, 2010, pp. 249-252. Book chapter, part of anthology (Refereed)
  • 170.
    Drimus, Alin
    et al.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bilberg, A.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Classification of Rigid and Deformable Objects Using a Novel Tactile Sensor (2011). In: Proceedings of the 15th International Conference on Advanced Robotics (ICAR), IEEE, 2011, pp. 427-434. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel tactile-array sensor for use in robotic grippers based on flexible piezoresistive rubber. We start by describing the physical principles of piezoresistive materials, and continue by outlining how to build a flexible tactile-sensor array using conductive thread electrodes. A real-time acquisition system scans the data from the array which is then further processed. We validate the properties of the sensor in an application that classifies a number of household objects while performing a palpation procedure with a robotic gripper. Based on the haptic feedback, we classify various rigid and deformable objects. We represent the array of tactile information as a time series of features and use this as the input for a k-nearest neighbors classifier. Dynamic time warping is used to calculate the distances between different time series. The results from our novel tactile sensor are compared to results obtained from an experimental setup using a Weiss Robotics tactile sensor with similar characteristics. We conclude by exemplifying how the results of the classification can be used in different robotic applications.

  • 171. Drimus, Alin
    et al.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bilberg, Arne
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Design of a flexible tactile sensor for classification of rigid and deformable objects (2014). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 62, no. 1, pp. 3-15. Article in journal (Refereed)
    Abstract [en]

    For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible and high-resolution, and it is easy to mount and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k-nearest-neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits.

  • 172. Egerstedt, M
    et al.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Shakernia, O
    Lygeros, J
    Toward Optimal Control of Switched Linear Systems (2000). Conference paper (Refereed)
    Abstract [en]

    We investigate the problem of driving the state of a switched linear control system between boundary states. We propose tight lower bounds for the minimum energy control problem. Furthermore, we show that the change of the system dynamics across the switching surface gives rise to phenomena that can be treated as a decidability problem for hybrid systems. Applying earlier results on controller synthesis for hybrid systems with linear continuous dynamics, we provide an algorithm for computing the minimum number of switchings of a trajectory from one state to another, and show that this algorithm is computable for a fairly wide class of linear switched systems.

  • 173. Egerstedt, Magnus
    et al.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A control theoretic formulation of the generalized SLAM problem in robotics (2008). In: 2008 American Control Conference: Vols 1-12, 2008, pp. 2409-2414. Conference paper (Refereed)
    Abstract [en]

    Simultaneous Localization and Mapping (SLAM) has emerged as a key capability for autonomous mobile robots navigating in unknown environments. The basic idea behind SLAM is to concurrently obtain a map of the environment and an estimate of where the robot is placed within this map. In other words, the map and the robot's pose have to be estimated at the same time, given the same data set. This paper revisits this problem from a control theoretic vantage point by reformulating the SLAM problem as a problem of simultaneously estimating the state and the output map of a controlled, dynamical system. What is different about this formulation is that the map is contained in the output map and not, as previously done, in the state of the system.

  • 174.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    The importance of structure (2011). Conference paper (Refereed)
  • 175.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exploring affordances in robot grasping through latent structure representation (2010). In: The 11th European Conference on Computer Vision (ECCV 2010), 2010. Conference paper (Refereed)
  • 176.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Task Modeling in Imitation Learning using Latent Variable Models (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, pp. 458-553. Conference paper (Refereed)
    Abstract [en]

    An important challenge in robotic research is learning and reasoning about different manipulation tasks from scene observations. In this paper we present a probabilistic model capable of modeling several different types of input sources within the same model. Our model can infer the task using only partial observations. Further, our framework allows the robot, given partial knowledge of the scene, to reason about which information streams to acquire in order to disambiguate the state-space the most. We present results for task classification and also reason about the discriminative power of different features for different classes of tasks.

  • 177.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Conditional Structures in Graphical Models from a Large Set of Observation Streams through efficient Discretisation (2011). In: IEEE International Conference on Robotics and Automation, Workshop on Manipulation under Uncertainty, 2011. Conference paper (Refereed)
  • 178.
    Ekekrantz, Johan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Adaptive Iterative Closest Keypoint (2013). In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, New York: IEEE, 2013, pp. 80-87. Conference paper (Refereed)
    Abstract [en]

    Finding accurate correspondences between overlapping 3D views is crucial for many robotic applications, from multi-view 3D object recognition to SLAM. This step, often referred to as view registration, plays a key role in determining the overall system performance. In this paper, we propose a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. In contrast to ICP, our method exploits both point position and visual appearance and is able to smoothly transition the weighting between them with an adaptive metric. This results in robust initial registration based on appearance and accurate final registration using 3D points. Using keypoint clustering we are able to utilize a non-exhaustive search strategy, reducing the runtime of the algorithm significantly. We show through an evaluation on an established benchmark that the method significantly outperforms current methods in both robustness and precision.
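The adaptive metric described above (blending appearance distance and 3-D point distance, with the weight shifting toward geometry as registration converges) could look roughly like the following sketch. The linear blend, the annealing schedule, and all names are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def adaptive_distance(p_geo, q_geo, p_feat, q_feat, alpha):
    """Blend geometric and appearance distances with weight alpha in [0, 1].

    alpha near 1 emphasises visual appearance (robust, coarse alignment);
    alpha near 0 emphasises 3-D point positions (accurate, fine alignment).
    """
    d_geo = np.linalg.norm(p_geo - q_geo)
    d_feat = np.linalg.norm(p_feat - q_feat)
    return alpha * d_feat + (1.0 - alpha) * d_geo

def anneal(alpha, rate=0.5):
    """Shift the weighting toward geometry after each ICP-style iteration."""
    return alpha * rate
```

Early iterations thus match keypoints mostly by descriptor similarity, while later iterations refine the pose using point positions alone.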

  • 179.
    Ekekrantz, Johan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Thippur, Akshaya
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Probabilistic Primitive Refinement algorithm for colored point cloud data (2015). In: 2015 European Conference on Mobile Robots (ECMR), Lincoln: IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work we present the Probabilistic Primitive Refinement (PPR) algorithm, an iterative method for accurately determining the inliers of an estimated primitive parametrization (such as a plane or sphere) in an unorganized, noisy point cloud. The measurement noise of the points belonging to the proposed primitive surface is modelled using a Gaussian distribution, and the measurements of points extraneous to the proposed surface are modelled as a histogram. Given these models, the probability that a measurement originated from the proposed surface model can be computed. Our technique for modelling the noisy surface from the measurement data does not require a priori parameters for the sensor noise model; the absence of sensitive parameter selection is a strength of our method. Using the geometric information obtained from such an estimate, the algorithm then builds a color-based model for the surface, further boosting the accuracy of the segmentation. If used iteratively, the PPR algorithm can be seen as a variation of the popular mean-shift algorithm with an adaptive stochastic kernel function.
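The core probabilistic step in the abstract, deciding whether a point's residual to the fitted primitive came from the Gaussian noise model or from the histogram of extraneous points, amounts to a two-component posterior. The sketch below is an illustrative reconstruction under that reading; the function name, the fixed prior, and passing the histogram value directly are all assumptions.

```python
import math

def inlier_probability(d, sigma, outlier_density, prior=0.5):
    """Posterior probability that a point with residual d to the estimated
    surface is an inlier, mixing a zero-mean Gaussian noise model with an
    empirical (histogram) density for extraneous points.

    `outlier_density` is the histogram value evaluated at residual d.
    """
    gauss = math.exp(-0.5 * (d / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    num = prior * gauss
    den = num + (1.0 - prior) * outlier_density
    return num / den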

  • 180.
    Eklundh, Jan-Olof
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Recognition of Objects in the Real World from a Systems Perspective (2005). In: Künstliche Intelligenz, ISSN 0933-1875, Vol. 19, no. 2, pp. 12-17. Article in journal (Refereed)
    Abstract [en]

    Based on a discussion of the requirements for a vision system operating in the real world, we present a real-time system that includes a set of behaviours that makes it capable of handling a series of typical tasks. The system is able to localise objects of interest based on multiple cues, attend to the objects and finally recognise them while they are in fixation. A particular aspect of the system concerns the use of 3D cues. We end by showing the system running in practice and present results highlighting the merits of 3D-based attention and segmentation and multiple cues for recognition.

  • 181. Eklundh, Jan-Olof
    et al.
    Uhlin, Tomas
    Nordlund, Peter
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Active Vision and Seeing Robots (1996). In: International Symposium on Robotics Research, 1996. Conference paper (Refereed)
  • 182. Eklundh, Jan-Olof
    et al.
    Uhlin, Tomas
    Nordlund, Peter
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Developing an Active Observer (1995). In: Asian Conference on Computer Vision, 1995, Vol. 1035, pp. 181-190. Conference paper (Refereed)
  • 183.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Aarno, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Online task recognition and real-time adaptive assistance for computer-aided machine control (2006). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no. 5, pp. 1029-1033. Article in journal (Refereed)
    Abstract [en]

    Segmentation and recognition of operator-generated motions are commonly used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online, thus improving the performance in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we present a method for online task tracking and propose the use of adaptive virtual fixtures that can cope with the above problems. Here, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance, thus providing an online decision of how to fixture the movement.
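The adjustment step described above, mapping the estimated probability that the operator is following a subtask trajectory to a fixture compliance level, can be sketched as a simple monotone mapping. The linear form, the clamping, and the name `fixture_compliance` are hypothetical; the paper does not specify this exact function.

```python
def fixture_compliance(p_follow, c_min=0.1, c_max=1.0):
    """Map the probability that the operator follows the current subtask
    trajectory to a compliance level: high confidence gives stiff guidance
    (low compliance), low confidence leaves the operator free to deviate
    around unforeseen obstacles (high compliance).
    """
    p = min(max(p_follow, 0.0), 1.0)  # clamp to a valid probability
    return c_max - p * (c_max - c_min)
```

A monotone mapping like this is what lets the fixture "soften" automatically as soon as the operator's motion stops matching the modelled trajectory.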

  • 184. Ekvall, Staffan
    et al.
    Aarno, Daniel
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Task Learning Using Graphical Programming and Human Demonstrations (2006). In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, pp. 398-403. Conference paper (Refereed)
    Abstract [en]

    The next generation of robots will have to learn new tasks or refine existing ones through direct interaction with the environment, or through a teaching/coaching process in programming by demonstration (PbD) and learning by instruction frameworks. In this paper, we propose to extend the classical PbD approach with a graphical language that makes robot coaching easier. The main idea is based on graphical programming, where the user designs complex robot tasks by using a set of low-level action primitives. Different from other systems, our action primitives are made general and flexible so that the user can train them online and therefore easily design high-level tasks.

  • 185.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating active mobile robot object recognition and SLAM in natural environments (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 5792-5797. Conference paper (Refereed)
    Abstract [en]

    Linking semantic and spatial information has become an important research area in robotics since, for robots interacting with humans and performing tasks in natural environments, it is of foremost importance to be able to reason beyond simple geometrical and spatial levels. In this paper, we consider this problem in a service robot scenario where a mobile robot autonomously navigates in a domestic environment, builds a map as it moves along, localizes its position in it, recognizes objects on its way and puts them in the map. The experimental evaluation is performed in a realistic setting where the main focus is on the synergy of object recognition and Simultaneous Localization and Mapping systems.

  • 186.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp recognition for programming by demonstration (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, New York, NY: IEEE, 2005, pp. 748-753. Conference paper (Refereed)
    Abstract [en]

    The demand for flexible and re-programmable robots has increased the need for programming by demonstration systems. In this paper, grasp recognition is considered in a programming by demonstration framework. Three methods for grasp recognition are presented and evaluated. The first method uses Hidden Markov Models to model the hand posture sequence during the grasp sequence, while the second method relies on the hand trajectory and hand rotation. The third method is a hybrid method, in which both the first two methods are active in parallel. The particular contribution is that all methods rely on the grasp sequence and not just the final posture of the hand. This facilitates grasp recognition before the grasp is completed. Also, by analyzing the entire sequence and not just the final grasp, the decision is based on more information and increased robustness of the overall system is achieved. The experimental results show that both arm trajectory and final hand posture provide important information for grasp classification. By combining them, the recognition rate of the overall system is increased.

  • 187. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating Object and Grasp Recognition for Dynamic Scene Interpretation (2005). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535. Article in journal (Refereed)
    Abstract [en]

    Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper, we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, programming by demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks, such as what object is moved, where it is moved, and which grasp type was used to move it.

  • 188.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning and evaluation of the approach vector for automatic grasp generation and planning (2007). In: Proceedings - IEEE International Conference on Robotics and Automation: Vols 1-10, 2007, pp. 4715-4720. Conference paper (Refereed)
    Abstract [en]

    In this paper, we address the problem of automatic grasp generation for robotic hands, where experience and shape primitives are used in synergy so as to provide a basis not only for grasp generation but also for a grasp evaluation process when the exact pose of the object is not available. One of the main challenges in automatic grasping is the choice of the object approach vector, which depends on the object shape and pose as well as on the grasp type. Using the proposed method, the approach vector is chosen not only based on the sensory input but also on experience that some approach vectors will provide useful tactile information that finally results in stable grasps. A methodology for developing and evaluating grasp controllers is presented where the focus lies on obtaining stable grasps under imperfect vision. The method is used in a teleoperation or a Programming by Demonstration setting where a human demonstrates to a robot how to grasp an object. The system first recognizes the object and grasp type, which can then be used by the robot to perform the same action using a mapped version of the human grasping posture.

  • 189.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Receptive field cooccurrence histograms for object detection (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-4, 2005, pp. 3969-3974. Conference paper (Refereed)
    Abstract [en]

    Object recognition is one of the major research topics in the field of computer vision. In robotics, there is often a need for a system that can locate certain objects in the environment - a capability which we denote as 'object detection'. In this paper, we present a new method for object detection. The method is especially suitable for detecting objects in natural scenes, as it is able to cope with problems such as complex background, varying illumination and object occlusion. The proposed method uses the receptive field representation, where each pixel in the image is represented by a combination of its color and response to different filters. Thus, the cooccurrence of certain filter responses within a specific radius in the image serves as the information basis for building the representation of the object. The specific goal in this work is the development of an on-line learning scheme that is effective after just one training example but still has the ability to improve its performance with more time and new examples. We describe the details behind the algorithm and demonstrate its strength with an extensive experimental evaluation.
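The core data structure described above, counting co-occurrences of quantised per-pixel responses within a given radius, can be sketched as follows. This is a simplified illustration of the idea, not the paper's implementation: the quantisation of colour and filter responses into bin indices is assumed to have happened already, and the function name is hypothetical.

```python
import numpy as np
from itertools import product

def cooccurrence_histogram(labels, n_bins, radius):
    """Count ordered co-occurrences of quantised per-pixel descriptors
    within a given pixel radius. `labels` is a 2-D array of bin indices
    (the quantised colour/filter responses of each pixel).
    """
    h, w = labels.shape
    hist = np.zeros((n_bins, n_bins), dtype=np.int64)
    for y, x in product(range(h), range(w)):
        # visit every neighbour inside the square window of the given radius
        for dy, dx in product(range(-radius, radius + 1), repeat=2):
            ny, nx = y + dy, x + dx
            if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                hist[labels[y, x], labels[ny, nx]] += 1
    return hist
```

Detection then reduces to comparing the histogram of an image window against the stored histogram of the object model, which is what makes the representation robust to partial occlusion.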

  • 190.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Object detection and mapping for service robot tasks (2007). In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668, Vol. 25, pp. 175-187. Article in journal (Refereed)
    Abstract [en]

    The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario, where a simultaneous localization and mapping (SLAM) and object detection/recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.

  • 191. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Task Models from Multiple Human Demonstrations (2006). In: The 15th IEEE International Symposium on Robot and Human Interactive Communication (ROMAN 2006), 6-8 Sept. 2006, pp. 358-363. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel method for learning robot tasks from multiple demonstrations. Each demonstrated task is decomposed into subtasks that allow for segmentation and classification of the input data. The demonstrated tasks are then merged into a flexible task model, describing the task goal and its constraints. The two main contributions of the paper are the state-generation and constraint-identification methods. We also present a task-level planner that is used to assemble a task plan at run-time, allowing the robot to choose the best strategy depending on the current world state.

  • 192.
    Elfwing, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Embodied Evolution of Learning Ability (2007). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous, and autonomous properties of biological evolution. The evaluation, selection, and reproduction are carried out by cooperation and competition of the robots, without any need for human intervention. An embodied evolution framework is therefore well suited to study the adaptive learning mechanisms for artificial agents that share the same fundamental constraints as biological agents: self-preservation and self-reproduction.

    The main goal of the research in this thesis has been to develop a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing of subpopulations of virtual agents inside each robot. The framework integrates reproduction as a directed autonomous behavior, and allows for learning of basic behaviors for survival by reinforcement learning. The purpose of the evolution is to evolve the learning ability of the agents, by optimizing meta-properties in reinforcement learning, such as the selection of basic behaviors, meta-parameters that modulate the efficiency of the learning, and additional and richer reward signals that guides the learning in the form of shaping rewards. The realization of the embodied evolution framework has been a cumulative research process in three steps: 1) investigation of the learning of a cooperative mating behavior for directed autonomous reproduction; 2) development of an embodied evolution framework, in which the selection of pre-learned basic behaviors and the optimization of battery recharging are evolved; and 3) development of an embodied evolution framework that includes meta-learning of basic reinforcement learning behaviors for survival, and in which the individuals are evaluated by an implicit and biologically inspired fitness function that promotes reproductive ability. The proposed embodied evolution methods have been validated in a simulation environment of the Cyber Rodent robot, a robotic platform developed for embodied evolution purposes. The evolutionarily obtained solutions have also been transferred to the real robotic platform.

    The evolutionary approach to meta-learning has also been applied for automatic design of task hierarchies in hierarchical reinforcement learning, and for co-evolving meta-parameters and potential-based shaping rewards to accelerate reinforcement learning, both in regards to finding initial solutions and in regards to convergence to robust policies.

  • 193.
    Elfwing, Stefan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uchibe, E.
    Neural Computation Unit, Initial Research Project, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Initial Research Project, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Evolutionary Development of Hierarchical Learning Structures (2007). In: IEEE Transactions on Evolutionary Computation, ISSN 1089-778X, E-ISSN 1941-0026, Vol. 11, no. 2, pp. 249-264. Article in journal (Refereed)
    Abstract [en]

    Hierarchical reinforcement learning (RL) algorithms can learn a policy faster than standard RL algorithms. However, the applicability of hierarchical RL algorithms is limited by the fact that the task decomposition has to be performed in advance by the human designer. We propose a Lamarckian evolutionary approach for automatic development of the learning structure in hierarchical RL. The proposed method combines the MAXQ hierarchical RL method and genetic programming (GP). In the MAXQ framework, a subtask can optimize the policy independently of its parent task's policy, which makes it possible to reuse learned policies of the subtasks. In the proposed method, the MAXQ method learns the policy based on the task hierarchies obtained by GP, while the GP explores the appropriate hierarchies using the result of the MAXQ method. To show the validity of the proposed method, we have performed simulation experiments for a foraging task in three different environmental settings. The results show strong interconnection between the obtained learning structures and the given task environments. The main conclusion of the experiments is that the GP can find a minimal strategy, i.e., a hierarchy that minimizes the number of primitive subtasks that can be executed for each type of situation. The experimental results for the most challenging environment also show that the policies of the subtasks can continue to improve, even after the structure of the hierarchy has been evolutionarily stabilized, as an effect of Lamarckian mechanisms.

  • 194.
    Elfwing, Stefan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Uchibe, Eiji
    Doya, Kenji
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Darwinian Embodied Evolution of the Learning Ability for Survival (2011). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 19, no. 2, pp. 101-102. Journal article (Refereed)
    Abstract [en]

    In this article we propose a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing in subpopulations of virtual agents hosted in each robot. Within this framework, we explore the combination of within-generation learning of basic survival behaviors by reinforcement learning, and evolutionary adaptation over the generations of the basic behavior selection policy, the reward functions, and metaparameters for reinforcement learning. We apply a biologically inspired selection scheme in which there is no explicit communication of the individuals' fitness information. Individuals can only reproduce offspring by mating (a pair-wise exchange of genotypes), and the probability that an individual reproduces offspring in its own subpopulation depends on the individual's "health," that is, its energy level, at the mating occasion. We validate the proposed method by comparing it with evolution using standard centralized selection, in simulation, and by transferring the obtained solutions to hardware using two real robots.
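    The selection scheme sketched in the abstract (no fitness broadcast; reproduction tied to energy at the mating occasion) can be illustrated as follows. All names and numbers here are assumptions for illustration, not taken from the article:

```python
import random

# Illustrative sketch (names and numbers are assumptions, not the
# article's code): reproduction depends only on the mating individual's
# current energy level, never on a communicated fitness value.
class Agent:
    def __init__(self, genome, energy):
        self.genome = genome
        self.energy = energy  # "health" accumulated by foraging

def maybe_reproduce(parent, mate, max_energy=100.0, rng=random):
    """Pair-wise mating: with probability parent.energy / max_energy,
    produce an offspring whose genome mixes the two genotypes."""
    if rng.random() < parent.energy / max_energy:
        child = [rng.choice(pair) for pair in zip(parent.genome, mate.genome)]
        return Agent(child, energy=max_energy / 2)
    return None

rng = random.Random(1)
healthy = Agent([1, 1, 1, 1], energy=90.0)
starved = Agent([0, 0, 0, 0], energy=5.0)
births_healthy = sum(
    maybe_reproduce(healthy, starved, rng=rng) is not None for _ in range(1000))
births_starved = sum(
    maybe_reproduce(starved, healthy, rng=rng) is not None for _ in range(1000))
```

    The point of the scheme is that selection pressure emerges implicitly: well-fed individuals simply get more mating opportunities converted into offspring, without any agent ever exchanging a fitness number.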

  • 195.
    Eriksson, André
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A formal approach to anomaly detection (2016). In: ICPRAM 2016 - Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods, SciTePress, 2016, pp. 317-326. Conference paper (Refereed)
    Abstract [en]

    While many advances towards effective anomaly detection techniques targeting specific applications have been made in recent years, little work has been done to develop application-agnostic approaches to the subject. In this article, we present such an approach, in which anomaly detection methods are treated as formal, structured objects. We consider a general class of methods, with an emphasis on methods that utilize structural properties of the data they operate on. For this class of methods, we develop a decomposition into sub-methods: simple, restricted objects that may be reasoned about independently and combined to form methods. As we show, this formalism enables the construction of software that facilitates formulating, implementing, and evaluating anomaly detection methods, as well as finding and calibrating them algorithmically.
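    One loose reading of the "methods decomposed into sub-methods" idea, as an illustration rather than the paper's formalism: each sub-method is a small function with a restricted job, and a full detector is their composition.

```python
from statistics import mean, stdev

# Loose illustration of the decomposition idea (not the paper's
# formalism): each sub-method is a small, restricted function, and a
# full anomaly detector is built by composing them.
def zscore_transform(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def abs_score(zs):
    return [abs(z) for z in zs]

def threshold_detector(cutoff):
    # Returns a sub-method that reports indices whose score exceeds cutoff.
    return lambda scores: [i for i, s in enumerate(scores) if s > cutoff]

def compose(*submethods):
    def method(data):
        for f in submethods:
            data = f(data)
        return data
    return method

detect = compose(zscore_transform, abs_score, threshold_detector(2.0))
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 25.0, 10.1, 9.8]
```

    Because each stage is independent, it can be swapped, evaluated, or calibrated (e.g. tuning the cutoff) without touching the others, which is the kind of property the paper's formalism is meant to expose.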

  • 196.
    Eriksson, Elina
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Människa-datorinteraktion, MDI.
    Bälter, Olle
    KTH, Skolan för datavetenskap och kommunikation (CSC), Människa-datorinteraktion, MDI.
    Engwall, Olov
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Öster, Anne-Marie
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Design Recommendations for a Computer-Based Speech Training System Based on End User Interviews (2005). In: Proceedings of the Tenth International Conference on Speech and Computers, 2005, pp. 483-486. Conference paper (Refereed)
    Abstract [en]

    This study was performed in order to improve the usability of computer-based speech training (CBST) aids. The aim was to engage the users of speech training systems in the first step of creating a new CBST aid. Speech therapists and children with hearing or speech impairments were interviewed, and the results of the interviews are presented in the form of design recommendations.

  • 197.
    Eriksson, Martin
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Maximizing validity in 2D motion analysis (2004). In: Proceedings of the 17th International Conference on Pattern Recognition, Vol. 2 / [ed] Kittler, J; Petrou, M; Nixon, M, 2004, pp. 179-183. Conference paper (Refereed)
    Abstract [en]

    Classifying and analyzing human motion from video is relatively common in many areas. Since the motion is carried out in 3D space, the 2D projection provided by a video is somewhat limiting. The question we investigate in this article is how much information is actually lost when going from 3D to 2D, and how this information loss depends on factors such as viewpoint and the tracking errors that will inevitably occur if the 2D sequences are analyzed automatically.

  • 198.
    Eriksson, Martin
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Monocular reconstruction of human motion by qualitative selection (2004). In: Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Proceedings, Los Alamitos: IEEE Computer Soc, 2004, pp. 863-868. Conference paper (Refereed)
    Abstract [en]

    One of the main difficulties when reconstructing human motion from monocular video is the depth ambiguity. Achieving a reconstruction, given the projection of the joints, can be regarded as a search problem, where the objective is to find the most likely configuration. One inherent problem in such a formulation is the definition of "most likely". In this work we pick the configuration that best complies with a set of training data in a qualitative sense. The reason for doing this is to allow for large individual variation within the class of motions, and to avoid an extreme bias towards the training data. In order to capture the qualitative constraints, we have used a set of 3D motion capture data of walking people. The method is tested on orthographic projections of motion capture data, in order to compare the achieved reconstruction with the original motion.
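    The depth ambiguity mentioned above can be made concrete with a toy orthographic model (an assumption-laden simplification, not the paper's algorithm): each limb segment of known length whose projection is shorter leaves a two-way depth choice, so reconstruction becomes a search over sign configurations scored against training data.

```python
import math
from itertools import product

# Toy simplification, not the paper's algorithm: under orthographic
# projection, a limb of length L whose image-plane projection has length
# p leaves a depth offset of +/- sqrt(L**2 - p**2) per segment.
def candidate_depths(limb_length, projected_length):
    dz = math.sqrt(limb_length**2 - projected_length**2)
    return (dz, -dz)

def all_reconstructions(limbs):
    """limbs: list of (true_length, projected_length) pairs.
    Returns every depth-sign configuration, 2**n in total."""
    per_limb = [candidate_depths(L, p) for L, p in limbs]
    return list(product(*per_limb))

def qualitative_select(candidates, exemplar):
    # Pick the configuration closest to a training exemplar; a crude
    # stand-in for the paper's qualitative compliance criterion.
    return min(candidates,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(c, exemplar)))

limbs = [(5.0, 3.0), (5.0, 4.0)]      # two segments, 3-4-5 triangles
cands = all_reconstructions(limbs)
best = qualitative_select(cands, exemplar=(4.0, -3.0))
```

    With n segments the candidate set doubles per segment, which is exactly why some prior over configurations is needed to resolve the search.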

  • 199. Faeulhammer, Thomas
    et al.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Burbridge, Christopher
    Zillich, Micheal
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hawes, Nick
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Vincze, Marcus
    Autonomous Learning of Object Models on a Mobile Robot (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 1, pp. 26-33, article id 7393491. Journal article (Refereed)
    Abstract [en]

    In this article we present and evaluate a system that allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

  • 200.
    Fagerström, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Galilean Differential Geometry of Moving Images (2004). In: Computer Vision - ECCV 2004 / [ed] Pajdla, Tomás and Matas, Jirí, Berlin / Heidelberg: Springer, 2004, Vol. 3024, pp. 97-101. Book chapter (Refereed)
    Abstract [en]

    In this paper we develop a systematic theory about the local structure of moving images in terms of Galilean differential invariants. We argue that Galilean invariants are useful for studying moving images, as they disregard constant motion that typically depends on the motion of the observer or the observed object, and only describe relative motion that might capture surface shape and motion boundaries. The set of Galilean invariants for moving images also contains the Euclidean invariants for (still) images. Complete sets of Galilean invariants are derived for two main cases: when the spatio-temporal gradient cuts the image plane, and when it is tangent to the image plane. The former case corresponds to isophote curve motion and the latter to the creation and disappearance of image structure, a case that is not well captured by the theory of optical flow. The derived invariants are shown to be describable in terms of acceleration, divergence, rotation and deformation of image structure. The described theory is based entirely on bottom-up computation from local spatio-temporal image information.
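    As background for the symmetry involved (standard textbook definitions, not reproduced from the chapter), a Galilean boost and the corresponding invariance requirement can be written:

```latex
% A Galilean boost with constant velocity v acts on space-time as
x' = x - v\,t, \qquad t' = t .
% A differential expression I computed from a moving image f is a
% Galilean invariant if it is unchanged under every such boost:
I\!\left[f(x - v t,\, t)\right] = I\!\left[f(x, t)\right]
\quad \text{for all constant } v ,
% which is why such invariants discard constant (observer) motion and
% retain only relative motion.
```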
