Hits 201 - 250 of 1863
  • 201.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Interactive Perception: From Scenes to Objects (2012). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis builds on the observation that robots, like humans, do not have enough experience to handle all situations from the start. Therefore they need tools to cope with new situations, unknown scenes and unknown objects. In particular, this thesis addresses objects. How can a robot realize what objects are if it looks at a scene and has no knowledge about objects? How can it recover from situations where its hypotheses about what it sees are wrong? Even if it has built up experience in the form of learned objects, there will be situations where it will be uncertain or mistaken, and will therefore still need the ability to correct errors. Much of our daily lives involves interactions with objects, and the same will be true for robots existing among us. Apart from being able to identify individual objects, the robot will therefore need to manipulate them.

    Throughout the thesis, different aspects of how to deal with these questions are addressed. The focus is on the problem of a robot automatically partitioning a scene into its constituting objects. It is assumed that the robot does not know about specific objects, and is therefore considered inexperienced. Instead a method is proposed that generates object hypotheses given visual input, and then enables the robot to recover from erroneous hypotheses. This is done by the robot drawing from a human's experience, as well as by enabling it to interact with the scene itself and monitoring if the observed changes are in line with its current beliefs about the scene's structure.

    Furthermore, the task of object manipulation for unknown objects is explored. This also serves as a motivation for why the scene partitioning problem is essential to solve. Finally, aspects of monitoring the outcome of a manipulation are investigated by observing the evolution of flexible objects in both static and dynamic scenes. All methods that were developed for this thesis have been tested and evaluated on real robotic platforms. These evaluations show the importance of having a system capable of recovering from errors, and that the robot can take advantage of human experience using just simple commands.

  • 202.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Roberson-Johnson, Matthew
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active Scene Analysis (2010). Conference paper (Refereed)
  • 203.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Yamakawa, Yuji
    Senoo, Taku
    Ishikawa, Masatoshi
    On-line learning of temporal state models for flexible objects (2012). In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE, 2012, pp. 712-718. Conference paper (Refereed)
    Abstract [en]

    State estimation and control are intimately related processes in robot handling of flexible and articulated objects. While for rigid objects we can generate a CAD model beforehand and state estimation boils down to estimating the pose or velocity of the object, in the case of flexible and articulated objects, such as a cloth, the representation of the object's state is heavily dependent on the task and execution. For example, when folding a cloth, the representation will mainly depend on the way the folding is executed.

  • 204.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Yamakawa, Yuji
    Tokyo University.
    Senoo, Taku
    Tokyo University.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ishikawa, Masatoshi
    Tokyo University.
    State Recognition of Deformable Objects Using Shape Context (2011). In: The 29th Annual Conference of the Robotics Society of Japan, 2011. Conference paper (Other academic)
  • 205. Bernander, Karl B.
    et al.
    Gustavsson, Kenneth
    Selig, Bettina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Sintorn, Ida-Maria
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Luengo Hendriks, Cris L.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Improving the stochastic watershed (2013). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 34, no. 9, pp. 993-1000. Journal article (Refereed)
    Abstract [en]

    The stochastic watershed is an unsupervised segmentation tool recently proposed by Angulo and Jeulin. By repeated application of the seeded watershed with randomly placed markers, a probability density function for object boundaries is created. In a second step, the algorithm then generates a meaningful segmentation of the image using this probability density function. The method performs best when the image contains regions of similar size, since it tends to break up larger regions and merge smaller ones. We propose two simple modifications that greatly improve the properties of the stochastic watershed: (1) add noise to the input image at every iteration, and (2) distribute the markers using a randomly placed grid. The noise strength is a new parameter to be set, but the output of the algorithm is not very sensitive to this value. In return, the output becomes less sensitive to the two parameters of the standard algorithm. The improved algorithm does not break up larger regions, effectively making the algorithm useful for a larger class of segmentation problems.
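
    As a rough illustration of the procedure described above, the sketch below runs repeated seeded watersheds with randomly offset grid markers and per-iteration noise, accumulating a boundary probability map with scikit-image. The grid spacing, noise strength and iteration count are illustrative assumptions, not values from the paper.

# Sketch of the improved stochastic watershed: repeated seeded watersheds with
# randomly shifted grid markers and per-iteration noise, accumulated into a
# boundary probability map. Parameter values are illustrative assumptions.
import numpy as np
from skimage.segmentation import watershed, find_boundaries

def stochastic_watershed(image, n_iter=50, grid_step=20, noise_sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    boundary_pdf = np.zeros(image.shape, dtype=float)
    for _ in range(n_iter):
        noisy = image + rng.normal(0.0, noise_sigma, image.shape)   # (1) add noise
        # (2) markers on a regular grid with a random offset
        markers = np.zeros(image.shape, dtype=int)
        off_r, off_c = rng.integers(0, grid_step, size=2)
        label = 1
        for r in range(off_r, image.shape[0], grid_step):
            for c in range(off_c, image.shape[1], grid_step):
                markers[r, c] = label
                label += 1
        labels = watershed(noisy, markers)
        boundary_pdf += find_boundaries(labels, mode='thick')
    return boundary_pdf / n_iter

# Example: estimate object-boundary probabilities for a random test image.
pdf = stochastic_watershed(np.random.rand(128, 128))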

  • 206.
    Bernard, Florian
    et al.
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Thunberg, Johan
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Goncalves, Jorge
    LCSB, University of Luxembourg, Esch-sur-Alzette, Luxembourg.
    Theobalt, Christian
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Synchronisation of partial multi-matchings via non-negative factorisations (2019). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 92, pp. 146-155. Journal article (Refereed)
    Abstract [en]

    In this work we study permutation synchronisation for the challenging case of partial permutations, which plays an important role for the problem of matching multiple objects (e.g. images or shapes). The term synchronisation refers to the property that the set of pairwise matchings is cycle-consistent, i.e. in the full matching case all compositions of pairwise matchings over cycles must be equal to the identity. Motivated by clustering and matrix factorisation perspectives of cycle-consistency, we derive an algorithm to tackle the permutation synchronisation problem based on non-negative factorisations. In order to deal with the inherent non-convexity of the permutation synchronisation problem, we use an initialisation procedure based on a novel rotation scheme applied to the solution of the spectral relaxation. Moreover, this rotation scheme facilitates a convenient Euclidean projection to obtain a binary solution after solving our relaxed problem. In contrast to state-of-the-art methods, our approach is guaranteed to produce cycle-consistent results. We experimentally demonstrate the efficacy of our method and show that it achieves better results compared to existing methods.
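
    The non-negative factorisation itself is not reproduced here, but the spectral relaxation the abstract mentions as an initialisation can be sketched for the simpler case of full (non-partial) permutations. The synthetic-data setup and the Hungarian projection below are assumptions made for illustration only.

# Spectral initialisation for permutation synchronisation (full permutations only):
# stack pairwise matchings into a block matrix, take its top eigenvectors, and
# project block-wise onto permutations with the Hungarian algorithm.
# This is only the initialisation step mentioned in the abstract, not the NMF method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def random_permutation(m, rng):
    P = np.zeros((m, m))
    P[np.arange(m), rng.permutation(m)] = 1.0
    return P

def synchronise(pairwise, k, m):
    """pairwise[i][j] is the m x m matching from object j to object i."""
    W = np.block([[pairwise[i][j] for j in range(k)] for i in range(k)])
    vals, vecs = np.linalg.eigh(W)
    U = vecs[:, -m:]                      # top-m eigenvectors
    blocks = [U[i * m:(i + 1) * m] for i in range(k)]
    P_abs = []
    for Ui in blocks:
        score = Ui @ blocks[0].T          # relative to the first object
        r, c = linear_sum_assignment(-score)
        P = np.zeros((m, m))
        P[r, c] = 1.0
        P_abs.append(P)
    return P_abs                          # cycle-consistent by construction

# Synthetic demo: consistent pairwise matchings P_i P_j^T are recovered exactly.
rng = np.random.default_rng(0)
k, m = 5, 6
P_true = [np.eye(m)] + [random_permutation(m, rng) for _ in range(k - 1)]
pairwise = [[P_true[i] @ P_true[j].T for j in range(k)] for i in range(k)]
P_est = synchronise(pairwise, k, m)
assert all(np.allclose(P_est[i], P_true[i]) for i in range(k))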

  • 207. Berrada, Dounia
    et al.
    Romero, Mario
    Georgia Institute of Technology, US.
    Abowd, Gregory
    Blount, Marion
    Davis, John
    Automatic Administration of the Get Up and Go Test (2007). In: HealthNet'07: Proceedings of the 1st ACM SIGMOBILE International Workshop on Systems and Networking Support for Healthcare and Assisted Living Environments, ACM Digital Library, 2007, pp. 73-75. Conference paper (Refereed)
    Abstract [en]

    In-home monitoring using sensors has the potential to improve the life of elderly and chronically ill persons, assist their family and friends in supervising their status, and provide early warning signs to the person's clinicians. The Get Up and Go test is a clinical test used to assess the balance and gait of a patient. We propose a way to automatically apply an abbreviated version of this test to patients in their residence using video data without body-worn sensors or markers.

  • 208.
    Bevilacqua, Fernando
    et al.
    Federal University of Fronteira Sul, Chapecó, Brazil.
    Backlund, Per
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Engström, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Proposal for Non-contact Analysis of Multimodal Inputs to Measure Stress Level in Serious Games (2015). In: VS-Games 2015: 7th International Conference on Games and Virtual Worlds for Serious Applications / [ed] Per Backlund, Henrik Engström & Fotis Liarokapis, Red Hook, NY: IEEE Computer Society, 2015, pp. 171-174. Conference paper (Refereed)
    Abstract [en]

    The process of monitoring user emotions in serious games or human-computer interaction is usually obtrusive. The work-flow is typically based on sensors that are physically attached to the user. Sometimes those sensors completely disturb the user experience, such as finger sensors that prevent the use of keyboard/mouse. This short paper presents techniques used to remotely measure different signals produced by a person, e.g. heart rate, through the use of a camera and computer vision techniques. The analysis of a combination of such signals (multimodal input) can be used in a variety of applications such as emotion assessment and measurement of cognitive stress. We present a research proposal for measurement of player’s stress level based on a non-contact analysis of multimodal user inputs. Our main contribution is a survey of commonly used methods to remotely measure user input signals related to stress assessment.
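
    As one concrete instance of the non-contact measurements surveyed above, camera-based heart-rate estimation can be sketched by averaging the green channel of a face region over time and locating the dominant frequency. The frame rate, frequency band and input format below are assumptions, not details taken from the paper.

# Sketch of camera-based (remote PPG) heart-rate estimation: average the green
# channel of a face region per frame, then take the dominant frequency in the
# physiologically plausible band. Frame rate and band limits are assumptions.
import numpy as np

def estimate_heart_rate(frames, fps=30.0, band=(0.7, 4.0)):
    """frames: array of shape (n_frames, H, W, 3), RGB, face region already cropped."""
    signal = frames[:, :, :, 1].mean(axis=(1, 2))      # mean green value per frame
    signal = signal - signal.mean()                    # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])  # 0.7-4 Hz ~ 42-240 bpm
    peak = freqs[in_band][np.argmax(spectrum[in_band])]
    return peak * 60.0                                 # beats per minute

# Synthetic check: a 1.2 Hz intensity oscillation should come out near 72 bpm.
t = np.arange(300) / 30.0
fake = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)
frames = np.tile(fake[:, None, None, None], (1, 8, 8, 3))
print(estimate_heart_rate(frames))   # ~72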

  • 209.
    Bešenić, Krešimir
    et al.
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Ahlberg, Jörgen
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Pandžić, Igor
    Faculty of Electrical Engineering and Computing, University of Zagreb.
    Unsupervised Facial Biometric Data Filtering for Age and Gender Estimation (2019). In: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019), SciTePress, 2019, Vol. 5, pp. 209-217. Conference paper (Refereed)
    Abstract [en]

    Availability of large training datasets was essential for the recent advancement and success of deep learning methods. Due to the difficulties related to biometric data collection, datasets with age and gender annotations are scarce and usually limited in terms of size and sample diversity. Web-scraping approaches for automatic data collection can produce large amounts of weakly labeled noisy data. The unsupervised facial biometric data filtering method presented in this paper greatly reduces label noise levels in web-scraped facial biometric data. Experiments on two large state-of-the-art web-scraped facial datasets demonstrate the effectiveness of the proposed method, with respect to training and validation scores, training convergence, and generalization capabilities of trained age and gender estimators.

  • 210.
    Bhat, Goutam
    et al.
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Danelljan, Martin
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Khan, Fahad
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Incept Inst Artificial Intelligence, U Arab Emirates.
    Felsberg, Michael
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten.
    Combining Local and Global Models for Robust Re-detection (2018). In: 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, 2018, pp. 25-30. Conference paper (Refereed)
    Abstract [en]

    Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual tracking. However, these methods still struggle in occlusion and out-of-view scenarios due to the absence of a re-detection component. While such a component requires global knowledge of the scene to ensure robust re-detection of the target, the standard DCF is only trained on the local target neighborhood. In this paper, we augment the state-of-the-art DCF tracking framework with a re-detection component based on a global appearance model. First, we introduce a tracking confidence measure to detect target loss. Next, we propose a hard negative mining strategy to extract background distractor samples, used for training the global model. Finally, we propose a robust re-detection strategy that combines the global and local appearance model predictions. We perform comprehensive experiments on the challenging UAV123 and LTB35 datasets. Our approach shows consistent improvements over the baseline tracker, setting a new state-of-the-art on both datasets.
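
    The abstract does not define the tracking confidence measure; a commonly used choice for correlation-filter trackers is the peak-to-sidelobe ratio of the response map, sketched below as a stand-in. The exclusion-window size and loss threshold are illustrative assumptions.

# Sketch of a target-loss test for a correlation-filter tracker using the
# peak-to-sidelobe ratio (PSR) of the response map. The exclusion window and
# threshold are illustrative assumptions, not values from the paper.
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0 = max(peak_idx[0] - exclude, 0)
    c0 = max(peak_idx[1] - exclude, 0)
    mask[r0:peak_idx[0] + exclude + 1, c0:peak_idx[1] + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-9)

def target_lost(response, threshold=6.0):
    """Trigger the global re-detection component when confidence drops."""
    return peak_to_sidelobe_ratio(response) < threshold

# A sharp, isolated peak gives high PSR; a flat, noisy map signals target loss.
sharp = np.zeros((50, 50)); sharp[25, 25] = 1.0
noisy = np.random.default_rng(0).normal(size=(50, 50)) * 0.1
print(target_lost(sharp), target_lost(noisy))   # False True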

  • 211.
    Bhatt, Mehul
    et al.
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    Dylla, Frank
    SFB/TR 8 Spatial Cognition, University of Bremen, Bremen, Germany.
    A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems (2009). In: International Journal of Robotics and Automation, ISSN 0826-8185, Vol. 24, no. 3, pp. 235-244. Journal article (Refereed)
    Abstract [en]

    Ambient intelligence environments necessitate representing and reasoning about dynamic spatial scenes and configurations. The ability to perform predictive and explanatory analyses of spatial scenes is crucial towards serving a useful intelligent function within such environments. We present a formal qualitative model that combines existing qualitative theories about space with a formal logic-based calculus suited to modelling dynamic environments, or reasoning about action and change in general. With this approach, it is possible to represent and reason about arbitrary dynamic spatial environments within a unified framework. We clarify and elaborate on our ideas with examples grounded in a smart environment.

  • 212.
    Bhatt, Mehul
    et al.
    Department of Computer Science, La Trobe University, Germany.
    Loke, Seng
    Department of Computer Science, La Trobe University, Germany.
    Modelling Dynamic Spatial Systems in the Situation Calculus (2008). In: Spatial Cognition and Computation, ISSN 1387-5868, E-ISSN 1573-9252, Vol. 8, no. 1-2, pp. 86-130. Journal article (Refereed)
    Abstract [en]

    We propose and systematically formalise a dynamical spatial systems approach for the modelling of changing spatial environments. The formalisation adheres to the semantics of the situation calculus and includes a systematic account of key aspects that are necessary to realize a domain-independent qualitative spatial theory that may be utilised across diverse application domains. The spatial theory is primarily derivable from the all-pervasive generic notion of "qualitative spatial calculi" that are representative of differing aspects of space. In addition, the theory also includes aspects, both ontological and phenomenal in nature, that are considered inherent in dynamic spatial systems. Foundational to the formalisation is a causal theory that adheres to the representational and computational semantics of the situation calculus. This foundational theory provides the necessary (general) mechanism required to represent and reason about changing spatial environments and also includes an account of the key fundamental epistemological issues concerning the frame and the ramification problems that arise whilst modelling change within such domains. The main advantage of the proposed approach is that based on the structure and semantics of the proposed framework, fundamental reasoning tasks such as projection and explanation directly follow. Within the specialised spatial reasoning domain, these translate to spatial planning/re-configuration, causal explanation and spatial simulation. Our approach is based on the hypothesis that alternate formalisations of existing qualitative spatial calculi using high-level tools such as the situation calculus are essential for their utilisation in diverse application domains such as intelligent systems, cognitive robotics and event-based GIS.

  • 213. Bi, Yin
    et al.
    Lv, Mingsong
    Wei, Yangjie
    Guan, Nan
    Yi, Wang
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datorteknik.
    Multi-feature fusion for thermal face recognition (2016). In: Infrared physics & technology, ISSN 1350-4495, E-ISSN 1879-0275, Vol. 77, pp. 366-374. Journal article (Refereed)
  • 214.
    Biedermann, Daniel
    et al.
    Goethe University, Germany.
    Ochs, Matthias
    Goethe University, Germany.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Goethe University, Germany.
    Evaluating visual ADAS components on the COnGRATS dataset (2016). In: 2016 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2016, pp. 986-991. Conference paper (Refereed)
    Abstract [en]

    We present a framework that supports the development and evaluation of vision algorithms in the context of driver assistance applications and traffic surveillance. This framework allows the creation of highly realistic image sequences featuring traffic scenarios. The sequences are created with a realistic state of the art vehicle physics model; different kinds of environments are featured, thus providing a wide range of testing scenarios. Due to the physically-based rendering technique and variable camera models employed for the image rendering process, we can simulate different sensor setups and provide appropriate and fully accurate ground truth data.

  • 215.
    Bigun, Josef
    et al.
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Gustavsson, Tomas
    Chalmers University of Technology, Department of Signals and Systems, Gothenburg, Sweden.
    Image analysis: 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, June 29-July 2, 2003, Proceedings (2003). Proceedings (editorship) (Other academic)
    Abstract [en]

    This book constitutes the refereed proceedings of the 13th Scandinavian Conference on Image Analysis, SCIA 2003, held in Halmstad, Sweden in June/July 2003. The 148 revised full papers presented together with 6 invited contributions were carefully reviewed and selected for presentation. The papers are organized in topical sections on feature extraction, depth and surface, shape analysis, coding and representation, motion analysis, medical image processing, color analysis, texture analysis, indexing and categorization, and segmentation and spatial grouping.

  • 216.
    Bigun, Josef
    et al.
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Malmqvist, Kerstin
    Proceedings: Symposium on image analysis, Halmstad, March 7-8, 2000 (2000). Proceedings (editorship) (Other academic)
  • 217.
    Bigun, Josef
    et al.
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Laboratoriet för intelligenta system.
    Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 2009 (2009). Proceedings (editorship) (Other academic)
  • 218.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.
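
    A minimal sketch of the variable-order prediction idea described above: store contexts of increasing length from the event stream and predict the next sensory-motor event from the longest context seen during training. This is a simplified illustration, not the thesis's actual PSL implementation.

# Sketch of variable-order sequence prediction in the spirit of PSL: learn
# counts of "context -> next event" for contexts of increasing length and
# predict using the longest context observed during training.
# Simplified illustration only, not the thesis's actual algorithm.
from collections import defaultdict, Counter

class VariableOrderPredictor:
    def __init__(self, max_order=4):
        self.max_order = max_order
        self.counts = defaultdict(Counter)   # context tuple -> next-event counts

    def train(self, sequence):
        for t in range(1, len(sequence)):
            for order in range(1, self.max_order + 1):
                if t - order < 0:
                    break
                context = tuple(sequence[t - order:t])
                self.counts[context][sequence[t]] += 1

    def predict(self, history):
        # Prefer the longest matching context (the most specific hypothesis).
        for order in range(min(self.max_order, len(history)), 0, -1):
            context = tuple(history[-order:])
            if context in self.counts:
                return self.counts[context].most_common(1)[0][0]
        return None

# Toy sensory-motor stream: after seeing a wall ahead, the demonstrated action is "turn".
stream = ["fwd", "clear", "fwd", "clear", "fwd", "wall", "turn",
          "fwd", "clear", "fwd", "wall", "turn"]
psl = VariableOrderPredictor()
psl.train(stream)
print(psl.predict(["fwd", "wall"]))   # -> "turn"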

  • 219.
    Billing, Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.

  • 220.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
    Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking (2014). In: Proceedings of The 4th International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2014), IEEE conference proceedings, 2014, pp. 41-47. Conference paper (Refereed)
    Abstract [en]

    A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus selection and action selection can be combined, and illustrates how positive or negative reinforcement has different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.

  • 221.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Robot learning from demonstration using predictive sequence learning (2011). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2011, pp. 235-250. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 222.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
    Robot learning from demonstration using predictive sequence learning (2012). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2012, pp. 235-250. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 223. Billing, Erik
    et al.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Simultaneous recognition and reproduction of demonstrated behavior (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, pp. 43-53. Journal article (Refereed)
    Abstract [en]

    Predictions of sensory-motor interactions with the world are often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: (1) to identify which behavior best matches the current context and (2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 224.
    Björk, Ingrid
    et al.
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Språkvetenskapliga fakulteten, Institutionen för lingvistik och filologi.
    Kavathatzopoulos, Iordanis
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Robots, ethics and language (2015). In: Computers & Society: The Newsletter of the ACM Special Interest Group on Computers and Society Special Issue on 20 Years of ETHICOMP / [ed] Mark Coeckelbergh, Bernd Stahl, and Catherine Flick; Vaibhav Garg and Dee Weikle, ACM Digital Library, 2015, pp. 268-273. Conference paper (Refereed)
    Abstract [en]

    Following the classical philosophical definition of ethics and the psychological research on problem solving and decision making, the issue of ethics becomes concrete and opens up the way for the creation of IT systems that can support handling of moral problems, in a sense that is similar to the way humans handle their moral problems. The processes of communicating information and receiving instructions are linguistic by nature. Moreover, autonomous and heteronomous ethical thinking is expressed by way of language use. Indeed, the way we think ethically is not only linguistically mediated but linguistically construed – whether we think for example in terms of conviction and certainty (meaning heteronomy) or in terms of questioning and inquiry (meaning autonomy). A thorough analysis of the language that is used in these processes is therefore of vital importance for the development of the above-mentioned tools and methods. Given that we have a clear definition based on philosophical theories and on research on human decision-making and linguistics, we can create and apply systems that can handle ethical issues. Such systems will help us to design robots and to prescribe their actions, to communicate and cooperate with them, to control the moral aspects of robots’ actions in real life applications, and to create embedded systems that allow continuous learning and adaptation.

  • 225. Björkman, Eva
    et al.
    Zagal, Juan Cristobal
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Roland, Per E.
    Evaluation of design options for the scale-space primal sketch analysis of brain activation images (2000). In: HBM'00, published in NeuroImage, Vol. 11, no. 5, 2000, pp. 656-656. Conference paper (Refereed)
    Abstract [en]

    A key issue in brain imaging concerns how to detect the functionally activated regions from PET and fMRI images. In earlier work, it has been shown that the scale-space primal sketch provides a useful tool for such analysis [1]. The method includes presmoothing with different filter widths and automatic estimation of the spatial extent of the activated regions (blobs).

    The purpose is to present two modifications of the scale-space primal sketch, as well as a quantitative evaluation which shows that these modifications improve the performance, measured as the separation between blob descriptors extracted from PET images and from noise images. This separation is essential for future work of associating a statistical p-value with the scale-space blob descriptors.
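
    A minimal sketch of the scale-space blob detection underlying the primal sketch: smooth the image at several widths via a scale-normalized Laplacian-of-Gaussian and keep local maxima across space and scale. The scale range and threshold are illustrative assumptions, not values from the abstract.

# Sketch of scale-space blob detection: scale-normalized Laplacian-of-Gaussian
# responses over a range of smoothing widths, with local maxima taken across
# both space and scale. Scale range and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(image, sigmas=(2, 3, 4, 6, 8), threshold=0.05):
    # Scale-normalized response: sigma^2 * |LoG|; bright blobs give negative LoG.
    stack = np.stack([-s ** 2 * gaussian_laplace(image.astype(float), s)
                      for s in sigmas])
    local_max = (stack == maximum_filter(stack, size=(3, 3, 3)))
    peaks = np.argwhere(local_max & (stack > threshold))
    # Each row of peaks is (scale index, row, col); the scale gives the blob size.
    return [(r, c, sigmas[s]) for s, r, c in peaks]

# Synthetic activation image: one Gaussian "blob" plus noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2)) \
      + 0.02 * rng.normal(size=(64, 64))
print(detect_blobs(img))   # expect a detection near (32, 32) at a scale close to the blob width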

  • 226.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning to Disambiguate Object Hypotheses through Self-Exploration (2014). In: 14th IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic learning framework to form object hypotheses through interaction with the environment. A robot learns how to manipulate objects through pushing actions to identify how many objects are present in the scene. We use a segmentation system that initializes object hypotheses based on RGBD data and adopt a reinforcement learning approach to learn the relations between pushing actions and their effects on object segmentations. Trained models are used to generate actions that result in a minimum number of pushes on object groups, until either object separation events are observed or it is ensured that there is only one object acted on. We provide baseline experiments that show that a policy based on reinforcement learning for action selection results in fewer pushes than if pushing actions were selected randomly.

  • 227.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Högman, Virgile
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Enhancing Visual Perception of Shape through Tactile Glances (2013). In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, IEEE conference proceedings, 2013, pp. 3180-3186. Conference paper (Refereed)
    Abstract [en]

    Object shape information is an important parameter in robot grasping tasks. However, it may be difficult to obtain accurate models of novel objects due to incomplete and noisy sensory measurements. In addition, object shape may change due to frequent interaction with the object (cereal boxes, etc). In this paper, we present a probabilistic approach for learning object models based on visual and tactile perception through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape. The robot starts by using only visual features to form an initial hypothesis about the object shape, then gradually adds tactile measurements to refine the object model. Our experiments involve ten objects of varying shapes and sizes in a real setup. The results show that our method is capable of choosing a small number of touches to construct object models similar to real object shapes and to determine similarities among acquired models.
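
    A minimal sketch of the "touch where the shape is most uncertain" strategy, using a Gaussian-process surface model as a stand-in for the paper's probabilistic representation: fit the visually observed points and pick the candidate location with the largest predictive standard deviation as the next tactile glance. The kernel, grid and toy surface below are assumptions.

# Sketch of uncertainty-driven tactile exploration: model the (partially observed)
# object surface height with a Gaussian process and choose the next touch where
# the predictive standard deviation is largest. Kernel and grid are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Visual observations: surface heights seen from one side of the object only.
X_seen = rng.uniform(low=[0.0, 0.0], high=[0.5, 1.0], size=(30, 2))
z_seen = np.sin(3 * X_seen[:, 0]) * np.cos(2 * X_seen[:, 1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
gp.fit(X_seen, z_seen)

# Candidate touch locations over the whole object footprint.
gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
candidates = np.column_stack([gx.ravel(), gy.ravel()])
mean, std = gp.predict(candidates, return_std=True)

next_touch = candidates[np.argmax(std)]
print("touch next at", next_touch)   # lands in the visually unobserved half (x > 0.5)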

  • 228.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detecting, segmenting and tracking unknown objects using multi-label MRF inference (2014). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 118, pp. 111-127. Journal article (Refereed)
    Abstract [en]

    This article presents a unified framework for detecting, segmenting and tracking unknown objects in everyday scenes, allowing for inspection of object hypotheses during interaction over time. A heterogeneous scene representation is proposed, with background regions modeled as combinations of planar surfaces and uniform clutter, and foreground objects as 3D ellipsoids. Recent energy minimization methods based on loopy belief propagation, tree-reweighted message passing and graph cuts are studied for the purpose of multi-object segmentation and benchmarked in terms of segmentation quality, as well as computational speed and how easily methods can be adapted for parallel processing. One conclusion is that the choice of energy minimization method is less important than the way scenes are modeled. Proximities are more valuable for segmentation than similarity in colors, while the benefit of 3D information is limited. It is also shown through practical experiments that, with implementations on GPUs, multi-object segmentation and tracking using state-of-the-art MRF inference methods is feasible, despite the computational costs typically associated with such methods.
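
    The paper benchmarks loopy belief propagation, tree-reweighted message passing and graph cuts; as a much simpler stand-in, the sketch below minimises a multi-label energy with color unaries and a Potts smoothness term using iterated conditional modes (ICM). All parameter values are illustrative assumptions.

# Sketch of multi-label MRF segmentation energy minimisation with ICM
# (a simpler solver than the LBP/TRW/graph-cut methods compared in the paper).
# Unary: squared distance to a label's mean color. Pairwise: Potts smoothness.
import numpy as np

def icm_segmentation(image, label_means, smooth=0.5, sweeps=5):
    h, w, _ = image.shape
    # Unary costs, shape (h, w, n_labels)
    unary = np.stack([((image - m) ** 2).sum(axis=2) for m in label_means], axis=2)
    labels = unary.argmin(axis=2)                     # initialise from unaries alone
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                neigh = [labels[ny, nx] for ny, nx in
                         ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w]
                costs = [unary[y, x, l] + smooth * sum(l != n for n in neigh)
                         for l in range(len(label_means))]
                labels[y, x] = int(np.argmin(costs))
    return labels

# Toy scene: dark background with a bright square, plus noise.
rng = np.random.default_rng(0)
img = np.full((40, 40, 3), 0.2) + 0.1 * rng.normal(size=(40, 40, 3))
img[10:30, 10:30] += 0.6
seg = icm_segmentation(img, label_means=[np.array([0.2] * 3), np.array([0.8] * 3)])
print(seg.sum())   # roughly the 400 pixels of the bright square get label 1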

  • 229.
    Blom, Fredrik
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Unsupervised Feature Extraction of Clothing Using Deep Convolutional Variational Autoencoders (2018). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [sv]

    As e-commerce continues to grow and customers increasingly move online, large amounts of valuable data are generated, for example transaction and search history and, specifically for the clothing trade, well-structured images of garments. By using unsupervised machine learning, it is possible to exploit this almost unlimited amount of data. This work aims to investigate to what extent generative models, in particular deep convolutional variational autoencoders, can be used to automatically extract defining features from images of clothing. By examining different variants of the autoencoder, an optimal relation emerges between the size of the latent vector and the complexity of the image data on which the network was trained. Furthermore, it was noted that the features can be uniquely distributed over the variables, in this case t-shirts and tops, by weighting the latent loss function.
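
    A minimal sketch of the objective behind such a variational autoencoder, including the weighting of the latent (KL) term that the abstract refers to; the PyTorch formulation, Bernoulli reconstruction likelihood and beta value are assumptions made for illustration.

# Sketch of the VAE objective with a weighted latent (KL) term, as used when
# tuning how strongly the latent code is regularised. Written in PyTorch;
# the beta value and Bernoulli reconstruction likelihood are assumptions.
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)     # z ~ N(mu, sigma^2)

def vae_loss(recon, target, mu, logvar, beta=4.0):
    # Reconstruction term: pixel-wise binary cross-entropy.
    rec = F.binary_cross_entropy(recon, target, reduction='sum')
    # KL divergence between N(mu, sigma^2) and the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 pushes the encoder toward a more strongly regularised latent space.
    return rec + beta * kl

# Shape check with dummy tensors (batch of 8 images, 32-dim latent space).
recon = torch.rand(8, 1, 64, 64)
target = torch.rand(8, 1, 64, 64)
mu, logvar = torch.zeros(8, 32), torch.zeros(8, 32)
z = reparameterize(mu, logvar)   # would be fed to the decoder in a full model
print(vae_loss(recon, target, mu, logvar))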

  • 230.
    Blomgren, Bo
    Uppsala universitet, Medicinska vetenskapsområdet, Medicinska fakulteten, Institutionen för kvinnors och barns hälsa.
    Morphometrical Methodology in Quantification of Biological Tissue Components (2004). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Objective:

    To develop and validate computer-assisted morphometrical methods, based on stereological theory, in order to facilitate the analysis and quantitative measurements of biological tissue components.

    Material and methods:

    Biopsy specimens from the vaginal wall or from the vestibulum vaginae of healthy women, or from women suffering from incontinence or vestibulitis were used.

    A number of histochemical methods for light microscopy were used, and modified for the different morphometrical analyses. Electron microscopy was used to reveal collagen fibre diameter.

    Computer-assisted morphometry, based on image analysis and stereology, was employed to analyse the different tissue components in the biopsies. Computer programs for these purposes were developed and validated.

    Results:

    The results show that computer-assisted morphometry is of great value for quantitative measurements of the following tissue components:

    Epithelium: The epithelial structure, instead of just thickness, was measured in an unbiased way.

    Collagen: The collagen fibril diameter was determined in electron microscopic specimens, and the collagen content was analysed in light microscopic specimens.

    Elastic fibres: The amount of elastic fibres in the connective tissue was measured after visualisation by autofluorescence.

    Vasculature: A stereological method using a cycloid grid was implemented in a computer program. Healthy subjects were compared with patients suffering from vestibulitis. The results were identical in the two groups.

    Smooth muscle: A stereological method using a point grid was implemented in a computer program. Using the Delesse principle, the fibres were calculated as area fractions. The area fractions were highly variable among the different specimens.

    Conclusion:

    Morphometry, used correctly, is an important analysis method in histopathological research. It is important that the methods are as simple and user-friendly as possible. The present studies show that this methodology can be applied for most quantitative histological analyses.
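
    A minimal sketch of the point-grid estimate behind the Delesse principle mentioned above for the smooth-muscle measurements: the area fraction of a tissue component is estimated by the fraction of regularly spaced grid points that hit it. The grid spacing is an illustrative assumption.

# Sketch of stereological point counting (Delesse principle): the area fraction
# of a tissue component is estimated by the fraction of grid points hitting it.
# Grid spacing is an illustrative assumption.
import numpy as np

def point_count_area_fraction(mask, step=16):
    """mask: boolean image where True marks the component of interest."""
    grid = mask[step // 2::step, step // 2::step]   # regular point grid
    return grid.mean()                              # hits / total points

# Synthetic section: a component covering 25% of the field.
mask = np.zeros((512, 512), dtype=bool)
mask[:256, :256] = True
print(point_count_area_fraction(mask))   # close to the true fraction 0.25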

  • 231.
    Bock, Alexander
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Pembroke, Asher
    NASA Goddard Space Flight Center, USA.
    Mays, M. Leila
    NASA Goddard Space Flight Center, USA.
    Rastaetter, Lutz
    NASA Goddard Space Flight Center, USA.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Ropinski, Timo
    Ulm University, Germany.
    Visual Verification of Space Weather Ensemble Simulations (2015). In: 2015 IEEE Scientific Visualization Conference (SciVis), IEEE, 2015, pp. 17-24. Conference paper (Refereed)
    Abstract [en]

    We propose a system to analyze and contextualize simulations of coronal mass ejections. As current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline, leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system providing visualizations to: 1. compare ensemble members against ground truth measurements, 2. inspect time-dependent information derived from optical flow analysis of satellite images, and 3. combine satellite images with a volumetric rendering of the simulations. This three-tier workflow provides experts with tools to discover correlations between errors in predictions and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.
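
    A minimal sketch of the optical-flow step used to derive time-dependent motion from successive satellite images, here using OpenCV's Farneback method as a stand-in; the algorithm parameters and the pixel-to-physical-speed conversion are assumptions, not values from the paper.

# Sketch of dense optical flow between two successive coronagraph frames using
# OpenCV's Farneback method; parameter values and the conversion from pixels per
# frame to a physical speed are illustrative assumptions.
import numpy as np
import cv2

def mean_flow_speed(frame_prev, frame_next, km_per_pixel=1e5, seconds_per_frame=600.0):
    # Arguments after the two frames: flow, pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed_px = np.linalg.norm(flow, axis=2).mean()          # pixels per frame
    return speed_px * km_per_pixel / seconds_per_frame      # km/s under the assumed scaling

# Synthetic pair: a bright blob shifted by a few pixels between frames.
a = np.zeros((128, 128), dtype=np.uint8)
b = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(a, (40, 64), 10, 255, -1)
cv2.circle(b, (45, 64), 10, 255, -1)
print(mean_flow_speed(a, b))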

  • 232.
    Bock, Alexander
    et al.
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Pembroke, Asher
    NASA Goddard Space Flight Center, Greenbelt, MD, United States.
    Mays, M. Leila
    Catholic University of America, Washington, DC, United States.
    Ynnerman, Anders
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    OpenSpace: An Open-Source Framework for Data Visualization and Contextualization (2015). Conference paper (Refereed)
    Abstract [en]

    We present an open-source software development effort called OpenSpace that is tailored for the dissemination of space-related data visualization. In the current stages of the project, we have focused on the public dissemination of space missions (Rosetta and New Horizons) as well as the support of space weather forecasting. The presented work will focus on the latter of these foci and elaborate on the efforts that have gone into developing a system that allows the user to assess the accuracy and validity of ENLIL ensemble simulations. It becomes possible to compare the results of ENLIL CME simulations with STEREO and SOHO images using an optical flow algorithm. This allows the user to compare velocities in the volumetric rendering of ENLIL data with the movement of CMEs through the fields of view of various instruments onboard the spacecraft. By allowing the user access to these comparisons, new information about the time evolution of CMEs through the interplanetary medium is possible. Additionally, contextualizing this information in a three-dimensional rendering scene allows the analyst and the public to disseminate this data. This dissemination is further improved by the ability to connect multiple instances of the software and, thus, reach a broader audience. In a second step, we plan to combine the two foci of the project to enable the visualization of the SWAP instrument onboard New Horizons in context with a far-reaching ENLIL simulation, thus providing additional information about the solar wind dynamics of the outer solar system. The initial work regarding this plan will be presented.

  • 233.
    Bohg, Jeannette
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Acting and Interacting in the Real World (2011). Conference paper (Refereed)
  • 234.
    Bolin, Karl
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Helgesson, Johan
    Automatic image analysis of concrete cracks (2003). Report (Other academic)
    Abstract [en]

    The objective of the work is to investigate if image analysis methods

  • 235. Bontsema, Jan
    et al.
    Hemming, Jochen
    Pekkeriet, Erik
    Saeys, Wouter
    Edan, Yael
    Shapiro, Amir
    Hočevar, Marko
    Oberti, Roberto
    Armada, Manuel
    Ulbrich, Heinz
    Baur, Jörg
    Debilde, Benoit
    Best, Stanley
    Evain, Sébastien
    Gauchel, Wolfgang
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    CROPS: Clever Robots for Crops (2015). In: Engineering & Technology Reference, ISSN 2056-4007, Vol. 1, No. 1. Journal article (Refereed)
    Abstract [en]

    In the EU-funded CROPS project, robots are developed for site-specific spraying and selective harvesting of fruit and fruit vegetables. The robots are being designed to harvest crops such as greenhouse vegetables, apples and grapes, and to perform canopy spraying in orchards and precision target spraying in grapevines. Attention is also paid to obstacle detection for safe autonomous navigation in plantations and forests. Platforms were built for the different applications, and sensing systems and vision algorithms have been developed. The software is based on the Robot Operating System. A manipulator with 9 degrees of freedom was designed and tested for sweet-pepper harvesting, apple harvesting and close-range spraying. Different end-effectors were designed and tested for the applications. For sweet pepper, a platform was built that can move between the crop rows on the common greenhouse rail system, which also serves as heating pipes. The apple harvesting platform is based on a current mechanical grape harvester. In discussion with growers, so-called ‘walls of fruit trees’ have been designed, which bring the robots closer to practical use. A canopy-optimised sprayer has been designed as a trailed sprayer with a centrifugal blower. All the applications have been tested under practical conditions.

  • 236.
    Bore, Nils
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Efficient retrieval of arbitrary objects from long-term robot observations (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, pp. 139-150. Journal article (Refereed)
    Abstract [en]

    We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that this representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on their similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results against ground truth data. Comparison with other state-of-the-art methods further supports the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
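
    To make the feature-dictionary querying idea concrete, here is a deliberately simplified sketch (not the authors' implementation): each stored segment contributes a set of local descriptors to a k-d tree, and a query object is ranked by how closely its own descriptors match those of stored segments. Descriptor extraction and the convex segmentation step are assumed to happen elsewhere; all class and method names are made up for illustration.

        import numpy as np
        from scipy.spatial import cKDTree

        class SegmentFeatureIndex:
            """Toy feature dictionary: stored segments are ranked for a query by
            nearest-descriptor similarity. Illustrative sketch only."""

            def __init__(self):
                self._descs = []      # list of (segment_id, descriptor) pairs

            def add_segment(self, segment_id, descriptors):
                for d in descriptors:
                    self._descs.append((segment_id, np.asarray(d, dtype=float)))

            def build(self):
                # One k-d tree over all stored local descriptors.
                self._ids = np.array([sid for sid, _ in self._descs])
                self._tree = cKDTree(np.stack([d for _, d in self._descs]))

            def query(self, query_descriptors, top_k=5):
                # Each query descriptor votes for the segment owning its nearest
                # stored descriptor, weighted by similarity.
                dists, idx = self._tree.query(np.asarray(query_descriptors), k=1)
                scores = {}
                for d, i in zip(dists, idx):
                    sid = self._ids[i]
                    scores[sid] = scores.get(sid, 0.0) + 1.0 / (1.0 + d)
                return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]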

  • 237.
    Borg, Johan
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Detecting and Tracking Players in Football Using Stereo Vision (2007). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The objective of this thesis is to investigate if it is possible to use stereo vision to find and track the players and the ball during a football game.

    The thesis shows that it is possible to detect all players that are not too heavily occluded by another player. Situations in which a player is occluded by another player are resolved by tracking the players from frame to frame.

    The ball is also detected in most frames by looking for ball-like features. As with the players, the ball is tracked from frame to frame, so that when it is occluded its position can be estimated by the tracker.
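
    As a minimal, hypothetical illustration of how a calibrated stereo rig turns a detection into a 3D position (not the thesis's actual pipeline), the function below back-projects a pixel and its disparity for a rectified stereo pair with known focal length and baseline; all numeric values in the example are made up.

        def triangulate(u, v, disparity, fx, fy, cx, cy, baseline):
            """Back-project pixel (u, v) with disparity (in pixels) into camera
            coordinates (metres) for a rectified stereo pair.
            fx, fy: focal lengths in pixels; (cx, cy): principal point;
            baseline: camera separation in metres."""
            if disparity <= 0:
                raise ValueError("disparity must be positive for a finite depth")
            Z = fx * baseline / disparity          # depth from disparity
            X = (u - cx) * Z / fx
            Y = (v - cy) * Z / fy
            return X, Y, Z

        # Example with made-up calibration values:
        # triangulate(640, 512, 8.0, fx=1200, fy=1200, cx=960, cy=540, baseline=10.0)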

  • 238.
    Borga, Magnus
    et al.
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Linköpings universitet, Tekniska högskolan.
    Rydell, Joakim
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan.
    Signal and Anatomical Constraints in Adaptive Filtering of fMRI Data (2007). In: Biomedical Imaging: From Nano to Macro (ISBI 2007), IEEE, 2007, pp. 432-435. Conference paper (Refereed)
    Abstract [en]

    An adaptive filtering method for fMRI data is presented. The method is related to bilateral filtering, but with a range filter that takes into account local similarities in signal as well as in anatomy. Performance is demonstrated on simulated and real data. It is shown that using both these similarity constraints gives better performance than using only one of them, and clearly better performance than standard low-pass filtering.
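
    A minimal sketch of a bilateral-style filter whose range weight combines similarity in the functional signal with similarity in a co-registered anatomical image, in the spirit of the abstract above (an illustrative re-implementation of the general idea, not the paper's filter; all parameter values are assumptions):

        import numpy as np

        def adaptive_bilateral(signal, anatomy, radius=3,
                               sigma_s=2.0, sigma_r_sig=1.0, sigma_r_anat=1.0):
            """Smooth `signal` (e.g. an fMRI slice) with a bilateral-style filter
            whose range weight uses similarity in both the signal and a
            co-registered anatomical image. Illustrative sketch only."""
            H, W = signal.shape
            out = np.zeros_like(signal, dtype=float)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
            sig_p = np.pad(signal, radius, mode='reflect')
            ana_p = np.pad(anatomy, radius, mode='reflect')
            for i in range(H):
                for j in range(W):
                    win_s = sig_p[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    win_a = ana_p[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    # Combined weight: spatial closeness, signal similarity,
                    # and anatomical similarity.
                    w = (spatial
                         * np.exp(-(win_s - signal[i, j])**2 / (2 * sigma_r_sig**2))
                         * np.exp(-(win_a - anatomy[i, j])**2 / (2 * sigma_r_anat**2)))
                    out[i, j] = np.sum(w * win_s) / np.sum(w)
            return out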

  • 239.
    Borgefors, G.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Some weighted distance transforms in four dimensions (2000). Conference paper (Refereed)
    Abstract [en]

    In a digital distance transform, each picture element in the shape (background)

  • 240.
    Borgefors, G.
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Nyström, I.
    Sanniti di Baja, G.
    Svensson, S.
    Simplification of 3D skeletons using distance information (2000). Conference paper (Refereed)
    Abstract [en]

    We present a method to simplify the structure of the surface skeleton of a 3D

  • 241.
    Borgefors, G.
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Ramella, G.
    Sanniti di Baja, G.
    Hierarchical Decomposition of Multi-Scale Skeletons (2001). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 11, pp. 1296-1312. Journal article (Refereed)
    Abstract [en]

    This paper presents a new procedure to hierarchically decompose a multi-scale discrete skeleton. The

  • 242.
    Borgefors, G
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Ramella, G
    Sanniti di Baja, G.
    Shape and topology preserving multi-valued image pyramids for multi-resolution skeletonization (2001). In: Pattern Recognition Letters, ISSN 0167-8655, Vol. 22, No. 6-7, pp. 741-751. Journal article (Refereed)
    Abstract [en]

    Starting from a binary digital image, a multi-valued pyramid is built and suitably treated, so that shape and topology properties of the pattern are preserved satisfactorily at all resolution levels. The multi-valued pyramid can then be used as input data
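
    As a loose, hypothetical illustration of building a multi-valued pyramid from a binary image (the paper's construction, which preserves shape and topology across levels, is more elaborate than this naive accumulation):

        import numpy as np

        def multivalued_pyramid(binary, levels):
            """Build a simple grey-level (multi-valued) pyramid from a binary
            image: each parent pixel stores the sum of its 2x2 children, so a
            pixel at level k counts the foreground pixels in the corresponding
            2^k x 2^k block of the original image. Naive illustration only."""
            pyramid = [binary.astype(np.int32)]
            current = pyramid[0]
            for _ in range(levels):
                h, w = current.shape
                h2, w2 = h // 2 * 2, w // 2 * 2      # crop to even size
                c = current[:h2, :w2]
                parent = (c[0::2, 0::2] + c[0::2, 1::2] +
                          c[1::2, 0::2] + c[1::2, 1::2])
                pyramid.append(parent)
                current = parent
            return pyramid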

  • 243.
    Borgefors, G.
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Svensson, S.
    Optimal Local Distances for Distance Transforms in 3D Using an Extended Neighbourhood (2001). Conference paper (Refereed)
    Abstract [en]

    Digital distance transforms are useful tools for many image analysis tasks. In the

  • 244.
    Borgefors, Gunilla
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Kedjekod - ett sätt att beskriva former i digitala bilder [Chain code - a way to describe shapes in digital images] (2005). In: Problemlösning är # 1, Liber, Stockholm, 2005, pp. 38-42. Book chapter, part of anthology (Other academic)
  • 245.
    Borgefors, Gunilla
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Räta linjer på dataskärmen: En illustration av rekursivitet [Straight lines on the computer screen: an illustration of recursion] (2008). In: Nämnaren, ISSN 0348-2723, Vol. 35, No. 1, pp. 46-50. Journal article (Other: popular science, debate, etc.)
  • 246.
    Borgefors, Gunilla
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Tessellationer i matematik, arkitektur och konst [Tessellations in mathematics, architecture and art] (2004). In: Matematikbiennalen 2004: Malmö, 22-24 Jan. 2004, 2004, p. 4-. Conference paper (Other: popular science, debate, etc.)
  • 247.
    Borgefors, Gunilla
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Tessellationer: konsten att dela upp planet i regelbundna mönster [Tessellations: the art of dividing the plane into regular patterns] (2008). In: Människor och matematik: Läsebok för nyfikna, Göteborg: NCM, 2008, pp. 185-210. Book chapter, part of anthology (Other: popular science, debate, etc.)
  • 248.
    Borgefors, Gunilla
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    The Scarcity of Universal Colour Names (2018). In: Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2018) / [ed] Maria De Marsico, Gabriella Sanniti di Baja, Ana Fred, SciTePress, 2018, pp. 496-502. Conference paper (Refereed)
    Abstract [en]

    There is a trend in computer vision to use more than twenty colour names for image annotation and retrieval, and to train deep learning networks to name unknown colours for human use. This paper shows that there is little consistency in colour naming between languages, and even between individuals speaking the same language. Experiments are cited showing that your mother tongue influences how your brain processes colour. It is also pointed out that the eleven so-called basic colours in English are not universal and cannot be applied to other languages. The conclusion is that the six Hering primary colours, possibly with simple qualifications, are the only ones you should use if you aim for universal usage of your systems. That is: black, white, red, green, blue, and yellow.
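
    To make the recommendation concrete, here is a deliberately naive sketch (not a method from the paper) that names a colour using only the six Hering primaries by picking the nearest reference point in sRGB; a real system would at least use a perceptually uniform colour space and qualifiers such as "light" and "dark".

        import numpy as np

        # Reference points for the six Hering primaries in sRGB (an assumption).
        HERING_PRIMARIES = {
            "black":  (0, 0, 0),
            "white":  (255, 255, 255),
            "red":    (255, 0, 0),
            "green":  (0, 255, 0),
            "blue":   (0, 0, 255),
            "yellow": (255, 255, 0),
        }

        def name_colour(rgb):
            """Return the Hering primary whose sRGB reference point is closest
            to the given (r, g, b) triple. Naive illustration only."""
            rgb = np.asarray(rgb, dtype=float)
            return min(HERING_PRIMARIES,
                       key=lambda name: np.sum((rgb - HERING_PRIMARIES[name]) ** 2))

        print(name_colour((250, 240, 30)))   # -> 'yellow'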

  • 249.
    Borgefors, Gunilla
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Weighted digital distance transforms in four dimensions (2003). In: Discrete Applied Mathematics, Vol. 125, pp. 161-176. Journal article (Refereed)
    Abstract [en]

    A digital distance transformation converts a binary image in Z^n to a distance transform, where each picture element in the foreground (background) has a value measuring the closest distance to the background (foreground). In a weighted distance transform
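
    As a 2D illustration of the weighted (chamfer) distance transform principle (the paper itself derives weights in four dimensions; the classic 3-4 weights below are for the 2D case only):

        import numpy as np

        def chamfer_34_distance(binary):
            """Two-pass weighted (chamfer) distance transform in 2D with 3-4
            weights: each foreground pixel (value 1) receives roughly 3x its
            Euclidean distance to the nearest background pixel. 2D sketch of
            the principle, not the paper's 4D transforms."""
            INF = 10**9
            h, w = binary.shape
            d = np.where(binary > 0, INF, 0).astype(np.int64)
            # Forward pass: propagate distances from the top-left.
            for i in range(h):
                for j in range(w):
                    if d[i, j] == 0:
                        continue
                    best = d[i, j]
                    if i > 0:
                        best = min(best, d[i-1, j] + 3)
                        if j > 0: best = min(best, d[i-1, j-1] + 4)
                        if j < w-1: best = min(best, d[i-1, j+1] + 4)
                    if j > 0:
                        best = min(best, d[i, j-1] + 3)
                    d[i, j] = best
            # Backward pass: propagate distances from the bottom-right.
            for i in range(h - 1, -1, -1):
                for j in range(w - 1, -1, -1):
                    best = d[i, j]
                    if i < h-1:
                        best = min(best, d[i+1, j] + 3)
                        if j > 0: best = min(best, d[i+1, j-1] + 4)
                        if j < w-1: best = min(best, d[i+1, j+1] + 4)
                    if j < w-1:
                        best = min(best, d[i, j+1] + 3)
                    d[i, j] = best
            return d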

  • 250.
    Borgefors, Gunilla
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Weighted distance transforms in four dimensions (2003). In: Discrete Applied Mathematics, Vol. 125, pp. 161-176. Journal article (Refereed)
    Abstract [en]

    A digital distance transformation converts a binary image in Z^n to a distance transform, where each picture element in the foreground (background) has a value measuring the closest distance to the background (foreground). In a weighted distance transform
