301 - 350 of 1863
Note
The maximum number of hits you can export from the search interface is 250. For larger extractions, use bulk export.
  • 301.
    Butepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Anticipating many futures: Online human motion prediction and generation for human-robot interaction (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Soc, 2018, pp. 4563-4570. Conference paper (Refereed)
    Abstract [en]

    Fluent and safe interactions of humans and robots require both partners to anticipate the others' actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold.

  • 302.
    Buttar, Sarpreet Singh
    Linnéuniversitetet, Fakulteten för teknik (FTK), Institutionen för datavetenskap och medieteknik (DM).
    Applying Artificial Neural Networks to Reduce the Adaptation Space in Self-Adaptive Systems: an exploratory work (2019). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Self-adaptive systems have limited time to adjust their configurations whenever their adaptation goals, i.e., quality requirements, are violated due to some runtime uncertainties. Within the available time, they need to analyze their adaptation space, i.e., a set of configurations, to find the best adaptation option, i.e., configuration, that can achieve their adaptation goals. Existing formal analysis approaches find the best adaptation option by analyzing the entire adaptation space. However, exhaustive analysis requires time and resources and is therefore only efficient when the adaptation space is small. The size of the adaptation space is often in the hundreds or thousands, which makes formal analysis approaches inefficient in large-scale self-adaptive systems. In this thesis, we tackle this problem by presenting an online learning approach that enables formal analysis approaches to analyze large adaptation spaces efficiently. The approach integrates with the standard feedback loop and reduces the adaptation space to a subset of adaptation options that are relevant to the current runtime uncertainties. The subset is then analyzed by the formal analysis approaches, which allows them to complete the analysis faster and more efficiently within the available time. We evaluate our approach on two different instances of an Internet of Things application. The evaluation shows that our approach dramatically reduces the adaptation space and analysis time without compromising the adaptation goals.

  • 303.
    Byström, Anna
    et al.
    Swedish University of Agricultural Sciences, Department of Anatomy, Physiology and Biochemistry.
    Roepstorff, Lars
    Swedish University of Agricultural Sciences, Department of Anatomy, Physiology and Biochemistry.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Image Analysis of Saddle Pressure Data (2011). Conference paper (Other academic)
  • 304.
    Bäck, David
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan.
    Neural Network Gaze Tracking using Web Camera (2006). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Gaze tracking means to detect and follow the direction in which a person looks. This can be used in, for instance, human-computer interaction. Most existing systems illuminate the eye with IR-light, possibly damaging it. The motivation of this thesis is to develop a truly non-intrusive gaze tracking system, using only a digital camera, e.g. a web camera.

    The approach is to detect and track different facial features, using varying image analysis techniques. These features will serve as inputs to a neural net, which will be trained with a set of predetermined gaze tracking series. The output is coordinates on the screen.

    The evaluation is done with a measure of accuracy and the result is an average angular deviation of two to four degrees, depending on the quality of the image sequence. To get better and more robust results, a higher image quality from the digital camera is needed.

  • 305.
    Bäckström, Nils
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Designing a Lightweight Convolutional Neural Network for Onion and Weed Classification (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The data set for this project consists of images containing onion and weed samples. It is of interest to investigate if Convolutional Neural Networks can learn to classify the crops correctly as a step in automatizing weed removal in farming. The aim of this project is to solve a classification task involving few classes with relatively few training samples (a few hundred per class). Usually, small data sets are prone to overfitting, meaning that the networks generalize badly to unseen data. It is also of interest to solve the problem using small networks with low computational complexity, since inference speed is important and memory often is limited on deployable systems. This work shows how transfer learning, network pruning and quantization can be used to create lightweight networks whose classification accuracy exceeds the same architecture trained from scratch. Using these techniques, a SqueezeNet v1.1 architecture (which is already a relatively small network) can reach 1/10th of the original model size and less than half the MAC operations during inference, while still maintaining a higher classification accuracy compared to a SqueezeNet v1.1 trained from scratch (96.9±1.35% vs 92.0±3.11% on 5-fold cross validation).
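    Of the techniques the abstract names, magnitude-based network pruning is the easiest to illustrate in isolation. The sketch below is a generic unstructured-pruning routine in numpy, not the thesis code; `prune_by_magnitude` and the toy weight matrix are illustrative.

    ```python
    import numpy as np

    def prune_by_magnitude(w, sparsity):
        """Unstructured magnitude pruning: zero the smallest-magnitude
        fraction `sparsity` of the weights in `w`."""
        k = int(sparsity * w.size)
        if k == 0:
            return w.copy()
        # k-th smallest absolute value becomes the pruning threshold
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        return np.where(np.abs(w) <= thresh, 0.0, w)

    w = np.array([[0.5, -0.01], [0.003, -2.0]])
    w_pruned = prune_by_magnitude(w, 0.5)  # the two smallest weights vanish
    ```

    In practice pruning is followed by fine-tuning to recover accuracy; the routine above only produces the sparsified weights.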

  • 306.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
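    The waypoint-selection half of the approach can be sketched independently of the CBP controller. Below is a minimal 2-D RRT with disc obstacles and a point-wise collision check, a generic sketch rather than the authors' formation planner; all names and parameters are assumptions.

    ```python
    import numpy as np

    def rrt(start, goal, obstacles, step=0.1, iters=5000, seed=0):
        """Minimal 2-D RRT in the unit square: extend the nearest tree node
        toward a random sample (with 10% goal bias) until the goal region
        is reached, then backtrack the waypoint sequence."""
        rng = np.random.default_rng(seed)
        nodes = [np.asarray(start, float)]
        parent = [0]

        def free(p):  # point collision check against disc obstacles only
            return all(np.linalg.norm(p - c) > r for c, r in obstacles)

        for _ in range(iters):
            sample = np.asarray(goal, float) if rng.random() < 0.1 else rng.uniform(0.0, 1.0, 2)
            i = min(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - sample))
            d = sample - nodes[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            new = nodes[i] + step * d / dist
            if not free(new):
                continue
            nodes.append(new)
            parent.append(i)
            if np.linalg.norm(new - np.asarray(goal, float)) < step:
                idx = [len(nodes) - 1]        # goal region reached:
                while idx[-1] != 0:           # walk parents back to start
                    idx.append(parent[idx[-1]])
                return [nodes[j] for j in reversed(idx)]
        return None

    obstacles = [(np.array([0.5, 0.5]), 0.2)]  # one disc obstacle
    path = rrt([0.05, 0.05], [0.95, 0.95], obstacles)
    ```

    In the paper's setting, each consecutive pair of the returned waypoints would be handed to the CBP layer, which moves the whole formation between them.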

  • 307.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Bundle adjustment with and without damping (2013). In: Photogrammetric Record, ISSN 0031-868X, E-ISSN 1477-9730, Vol. 28, no. 144, pp. 396-415. Journal article (Refereed)
    Abstract [en]

    The least squares adjustment (LSA) method is studied as an optimisation problem and shown to be equivalent to the undamped Gauss-Newton (GN) optimisation method. Three problem-independent damping modifications of the GN method are presented: the line-search method of Armijo (GNA); the Levenberg-Marquardt algorithm (LM); and Levenberg-Marquardt-Powell (LMP). Furthermore, an additional problem-specific "veto" damping technique, based on the chirality condition, is suggested. In a perturbation study on a terrestrial bundle adjustment problem the GNA and LMP methods with veto damping can increase the size of the pull-in region compared to the undamped method; the LM method showed less improvement. The results suggest that damped methods can, in many cases, provide a solution where undamped methods fail and should be available in any LSA software package. Matlab code for the algorithms discussed is available from the authors.
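    The damped Gauss-Newton step discussed in the abstract solves (JᵀJ + λI)δ = -Jᵀr, adapting λ after each step. The following is a generic Levenberg-style loop on a toy exponential fit in numpy, not the authors' Matlab toolbox; `lm_fit` and the toy model are illustrative.

    ```python
    import numpy as np

    def lm_fit(residual, jacobian, p0, lam=1e-3, iters=100):
        """Damped Gauss-Newton: solve (J^T J + lam*I) delta = -J^T r,
        shrinking lam on accepted steps and inflating it on rejections."""
        p = np.asarray(p0, dtype=float)
        r = residual(p)
        cost = r @ r
        for _ in range(iters):
            J = jacobian(p)
            delta = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
            r_new = residual(p + delta)
            if r_new @ r_new < cost:   # accepted: trust the GN model more
                p, r, cost = p + delta, r_new, r_new @ r_new
                lam *= 0.5
            else:                      # rejected: damp harder
                lam *= 10.0
        return p

    # Toy zero-residual problem: recover (a, b) in y = a * exp(b * x).
    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * x)
    residual = lambda p: p[0] * np.exp(p[1] * x) - y
    jacobian = lambda p: np.column_stack([np.exp(p[1] * x),
                                          p[0] * x * np.exp(p[1] * x)])
    p_hat = lm_fit(residual, jacobian, [1.0, 0.0])
    ```

    Setting `lam = 0` throughout recovers the undamped Gauss-Newton iteration the paper uses as its baseline.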

  • 308.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Camera Calibration using the Damped Bundle Adjustment Toolbox (2014). In: ISPRS Annals - Volume II-5, 2014: ISPRS Technical Commission V Symposium, 23-25 June 2014, Riva del Garda, Italy / [ed] F. Remondino and F. Menna, Copernicus GmbH, 2014, Vol. II-5, pp. 89-96. Conference paper (Refereed)
    Abstract [en]

    Camera calibration is one of the fundamental photogrammetric tasks. The standard procedure is to apply an iterative adjustment to measurements of known control points. The iterative adjustment needs initial values of internal and external parameters. In this paper we investigate a procedure where only one parameter - the focal length - is given a specific initial value. The procedure is validated using the freely available Damped Bundle Adjustment Toolbox on five calibration data sets using varying narrow- and wide-angle lenses. The results show that the Gauss-Newton-Armijo and Levenberg-Marquardt-Powell bundle adjustment methods implemented in the toolbox converge even if the initial values of the focal length are between 1/2 and 32 times the true focal length, even if the parameters are highly correlated. Standard statistical analysis methods in the toolbox enable manual selection of the lens distortion parameters to estimate, something not available in other camera calibration toolboxes. A standardised camera calibration procedure that does not require any information about the camera sensor or focal length is suggested based on the convergence results. The toolbox source and data sets used in this paper are available from the authors.

  • 309.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Experiments with Metadata-derived Initial Values and Linesearch Bundle Adjustment in Architectural Photogrammetry (2013). Conference paper (Refereed)
    Abstract [en]

    According to the Waldhäusl and Ogleby (1994) "3x3 rules", a well-designed close-range architectural photogrammetric project should include a sketch of the project site with the approximate position and viewing direction of each image. This orientation metadata is important to determine which part of the object each image covers. In principle, the metadata could be used as initial values for the camera external orientation (EO) parameters. However, this has rarely been done, partly due to convergence problems in the bundle adjustment procedure.

    In this paper we present a photogrammetric reconstruction pipeline based on classical methods and investigate if and how the linesearch bundle algorithms of Börlin et al. (2004) and/or metadata can be used to aid the reconstruction process in architectural photogrammetry when the classical methods fail. The primary initial values for the bundle are calculated by the five-point algorithm by Nistér (Stewénius et al., 2006). Should the bundle fail, initial values derived from metadata are calculated and used for a second bundle attempt.

    The pipeline was evaluated on an image set of the INSA building in Strasbourg. The data set includes mixed convex and non-convex subnetworks and a combination of manual and automatic measurements.

    The results show that, in general, the classical bundle algorithm with five-point initial values worked well. However, in cases where it did fail, linesearch bundle and/or metadata initial values did help. The presented approach is interesting for solving EO problems when the automatic orientation processes fail, as well as for keeping a link between the metadata describing how the project was planned and the network as it was actually reconstructed.

  • 310.
    Caccamo, Sergio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Güler, Püren
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 530-537. Conference paper (Refereed)
    Abstract [en]

    Exploring and modeling heterogeneous elastic surfaces requires multiple interactions with the environment and a complex selection of physical material parameters. The most common approaches model deformable properties from sets of offline observations using computationally expensive force-based simulators. In this work we present an online probabilistic framework for autonomous estimation of a deformability distribution map of heterogeneous elastic surfaces from few physical interactions. The method takes advantage of Gaussian Processes for constructing a model of the environment geometry surrounding a robot. A fast Position-based Dynamics simulator uses focused environmental observations in order to model the elastic behavior of portions of the environment. Gaussian Process Regression maps the local deformability on the whole environment in order to generate a deformability distribution map. We show experimental results using a PrimeSense camera, a Kinova Jaco2 robotic arm and an Optoforce sensor on different deformable surfaces.
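    Plain Gaussian process regression, the building block used here to spread sparse deformability probes over the whole surface, looks like this in one dimension. This is a generic sketch with a squared-exponential kernel and made-up probe data, not the paper's implementation; all names and hyperparameters are assumptions.

    ```python
    import numpy as np

    def rbf(a, b, ell=0.3, sf=1.0):
        """Squared-exponential kernel between 1-D input arrays a and b."""
        d = a[:, None] - b[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

    def gp_predict(X, y, Xs, noise=1e-4):
        """Posterior mean and variance of a zero-mean GP at test inputs
        Xs, conditioned on noisy probe observations (X, y)."""
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(Xs, X)
        KinvKs = np.linalg.solve(K, Ks.T)
        mean = Ks @ np.linalg.solve(K, y)
        var = rbf(Xs, Xs).diagonal() - np.sum(Ks * KinvKs.T, axis=1)
        return mean, var

    # Five sparse "deformability probes" along a 1-D slice, interpolated
    # over the whole slice together with the posterior uncertainty.
    X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    y = np.sin(2.0 * np.pi * X)
    Xs = np.linspace(0.0, 1.0, 101)
    mean, var = gp_predict(X, y, Xs)
    ```

    The posterior variance is what makes the method useful for active perception: the next physical probe can be placed where `var` is largest.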

  • 311.
    Caccamo, Sergio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Parasuraman, Ramviyas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Båberg, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extending a UGV Teleoperation FLC Interface with Wireless Network Connectivity Information (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 4305-4312. Conference paper (Refereed)
    Abstract [en]

    Teleoperated Unmanned Ground Vehicles (UGVs) are expected to play an important role in future search and rescue operations. In such tasks, two factors are crucial for a successful mission completion: operator situational awareness and robust network connectivity between operator and UGV. In this paper, we address both these factors by extending a new Free Look Control (FLC) operator interface with a graphical representation of the Radio Signal Strength (RSS) gradient at the UGV location. We also provide a new way of estimating this gradient using multiple receivers with directional antennas. The proposed approach allows the operator to stay focused on the video stream providing the crucial situational awareness, while controlling the UGV to complete the mission without moving into areas with dangerously low wireless connectivity. The approach is implemented on a KUKA youBot using commercial-off-the-shelf components. We provide experimental results showing how the proposed RSS gradient estimation method performs better than a difference approximation using omnidirectional antennas and verify that it is indeed useful for predicting the RSS development along a UGV trajectory. We also evaluate the proposed combined approach in terms of accuracy, precision, sensitivity and specificity.
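    As a simple stand-in for the RSS-gradient idea: given several simultaneous receiver readings at known offsets, a local gradient can be estimated by a least-squares plane fit. This sketch assumes idealised noise-free readings and illustrative names; the paper's directional-antenna estimator is more involved.

    ```python
    import numpy as np

    def rss_gradient(positions, readings):
        """Fit rss ≈ g·p + c over simultaneous receiver readings and
        return the least-squares gradient estimate g."""
        A = np.column_stack([positions, np.ones(len(positions))])
        sol, *_ = np.linalg.lstsq(A, readings, rcond=None)
        return sol[:-1]  # drop the offset c

    # Four hypothetical receivers at fixed offsets from the UGV (metres),
    # sampled from a linear field rss = -2x + y - 40 (dB), no noise.
    offsets = np.array([[0.2, 0.0], [-0.2, 0.0], [0.0, 0.2], [0.0, -0.2]])
    readings = -2.0 * offsets[:, 0] + 1.0 * offsets[:, 1] - 40.0
    grad = rss_gradient(offsets, readings)
    ```

    The estimated gradient is what the interface overlays on the video stream, warning the operator before the UGV drives into a low-connectivity region.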

  • 312.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K..
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K..
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K..
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linkoping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K..
    Belpaeme, Tony
    University of Plymouth, U.K..
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, U.K..
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K..
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study (2019). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, no. 4, pp. 1508-1518. Journal article (Refereed)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system has a capability of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.

  • 313.
    Cammoun, Leila
    et al.
    Signal Processing Institute Ecole Polytechnique Fédérale de, Lausanne, Switzerland.
    Castaño-Moraga, Carlos Alberto
    Department of Signals and Communciations, University of Las Palmas de Gran Canaria, Spain.
    Muñoz-Moreno, Emma
    Univ. de Valladolid, Spain.
    Sosa-Cabrera, Dario
    Canary Islands Institute of Technology, Spain.
    Acar, Burak
    Electrical-Electronics Eng. Dept, Bogazici University, Istanbul, Turkey.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Knutsson, Hans
    Dept. of medical engineering, Linköpings universitet.
    Thiran, Jean-Philippe
    Signal Processing Institute Ecole Polytechnique Fédérale de, Lausanne, Switzerland.
    A Review of Tensors and Tensor Signal Processing (2009). In: Tensors in Image Processing and Computer Vision / [ed] Santiago Aja-Fernandez, Rodrigo de Luis Garcia, Dacheng Tao, Xuelong Li, London: Springer, 2009, 1, pp. 1-32. Book chapter, part of anthology (Other academic)
    Abstract [en]

    Tensors have been broadly used in mathematics and physics, since they are a generalization of scalars or vectors and allow the representation of more complex properties. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, much work has been done on tensor calculus, which obviously is more complex than scalar or vector calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several magnitudes, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke's law, where a fourth order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of the fluids. An electromagnetic tensor is also defined, which simplifies the notation of the Maxwell equations. But tensors are not constrained to physics and mathematics. They have been used, for instance, in medical imaging, where we can highlight two applications: the diffusion tensor image, which represents how molecules diffuse inside the tissues and is broadly used for brain imaging; and tensorial elastography, which computes the strain and vorticity tensors to analyze tissue properties. Tensors have also been used in computer vision to provide information about the local structure or to define anisotropic image filters.
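    As a concrete taste of the diffusion-tensor application mentioned in the abstract, fractional anisotropy (FA) is a standard rotation-invariant scalar computed from the tensor's eigenvalues; a small numpy sketch (the example tensors are illustrative):

    ```python
    import numpy as np

    def fractional_anisotropy(D):
        """FA of a symmetric 3x3 diffusion tensor: the normalised spread
        of its eigenvalues, 0 for isotropic diffusion, approaching 1 for
        a single dominant diffusion direction."""
        lam = np.linalg.eigvalsh(D)
        dev = lam - lam.mean()
        return np.sqrt(1.5) * np.linalg.norm(dev) / np.linalg.norm(lam)

    fa_iso = fractional_anisotropy(np.eye(3))                     # isotropic
    fa_fibre = fractional_anisotropy(np.diag([1.0, 0.05, 0.05]))  # fibre-like
    ```

    In brain imaging, high FA along a voxel's principal eigenvector is the basic signal exploited by tractography.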

  • 314.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schaffernicht, Erik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no. 3, article id 15. Journal article (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as to the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets due to the rejection of high-frequency noise content.
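    The PCA-derived compression the abstract compares against auto-encoders reduces each flattened voxel block to k coefficients in a learned basis. A generic sketch with illustrative names and synthetic low-rank data, not the paper's TSDF pipeline:

    ```python
    import numpy as np

    def pca_codec(blocks, k):
        """Learn a k-dimensional PCA basis from flattened voxel blocks
        and return encode/decode closures."""
        mu = blocks.mean(axis=0)
        _, _, Vt = np.linalg.svd(blocks - mu, full_matrices=False)
        basis = Vt[:k]                      # top-k principal directions
        encode = lambda b: (b - mu) @ basis.T
        decode = lambda z: z @ basis + mu
        return encode, decode

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 4))      # data truly lives in 4 dims
    mix = rng.normal(size=(4, 64))
    blocks = latent @ mix                   # 200 "blocks" of 64 voxels each
    enc, dec = pca_codec(blocks, k=4)
    recon = dec(enc(blocks))
    err = np.abs(recon - blocks).max()      # exact for rank-4 data
    ```

    With real TSDF blocks the reconstruction is lossy, which is precisely the high-frequency noise rejection the abstract reports as beneficial for ego-motion estimation.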

  • 315.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no. 2, pp. 1148-1155. Journal article (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of Integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner-detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the cause for these points of failure that sheds light on possible mitigation strategies.

  • 316.
    Carlsson, Mattias
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Neural Networks for Semantic Segmentation in the Food Packaging Industry (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Industrial applications of computer vision often utilize traditional image processing techniques whereas state-of-the-art methods in most image processing challenges are almost exclusively based on convolutional neural networks (CNNs). Thus there is a large potential for improving the performance of many machine vision applications by incorporating CNNs.

    One such application is the classification of juice boxes with straws, where the baseline solution uses classical image processing techniques on depth images to reject or accept juice boxes. This thesis aims to investigate how CNNs perform on the task of semantic segmentation (pixel-wise classification) of said images and if the result can be used to increase classification performance.

    A drawback of CNNs is that they usually require large amounts of labelled data for training to be able to generalize and learn anything useful. As labelled data is hard to come by, two ways to get cheap data are investigated, one being synthetic data generation and the other being automatic labelling using the baseline solution.

    The implemented network performs well on semantic segmentation, even when trained on synthetic data only, though the performance increases with the ratio of real (automatically labelled) to synthetic images. The classification task is very sensitive to small errors in semantic segmentation and the results are therefore not as good as the baseline solution. It is suspected that the drop in performance between validation and test data is due to a domain shift between the data sets, e.g. variations in data collection and straw and box type, and fine-tuning to the target domain could definitely increase performance.

    When trained on synthetic data the domain shift is even larger and the performance on classification is next to useless. It is likely that the results could be improved by using more advanced data generation, e.g. a generative adversarial network (GAN), or more rigorous modelling of the data.

  • 317.
    Carlsson, Stefan
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Azizpour, Hossein
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Razavian, Ali Sharif
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Sullivan, Josephine
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Smith, Kevin
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    The Preimage of Rectifier Network Activities (2017). In: International Conference on Learning Representations (ICLR), 2017. Conference paper (Refereed)
    Abstract [en]

    The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multi-layer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class it means that these classes are irreversibly mixed. This implies that preimage sets which are piecewise linear manifolds are building blocks for describing the input manifolds of specific classes, i.e. all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.

  • 318. Carlsson, Stefan
    et al.
    Azizpour, Hossein
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sharif Razavian, Ali
    Sullivan, Josephine
    Smith, Kevin
    The preimage of rectifier network activities2017Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We give a procedure for explicitly computing the complete preimage of activities of a layer in a rectifier network with fully connected layers, from knowledge of the weights in the network. The most general characterisation of preimages is as piecewise linear manifolds in the input space with possibly multiple branches. This work therefore complements previous demonstrations of preimages obtained by heuristic optimisation and regularisation algorithms (Mahendran & Vedaldi, 2015; 2016). We are presently empirically evaluating the procedure and its ability to extract complete preimages as well as the general structure of preimage manifolds.

  • 319.
    Carvalho, Joao Frederico
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Vejdemo-Johansson, Mikael
    CUNY College of Staten Island, Mathematics Department, New York, USA.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Pokorny, Florian T.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Path Clustering with Homology Area2018Inngår i: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2018, s. 7346-7353Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Path clustering has found many applications in recent years. Common approaches to this problem use aggregates of the distances between points to provide a measure of dissimilarity between paths which do not satisfy the triangle inequality. Furthermore, they do not take into account the topology of the space where the paths are embedded. To tackle this, we extend previous work in path clustering with relative homology, by employing minimum homology area as a measure of distance between homologous paths in a triangulated mesh. Further, we show that the resulting distance satisfies the triangle inequality, and how we can exploit the properties of homology to reduce the amount of pairwise distance calculations necessary to cluster a set of paths. We further compare the output of our algorithm with that of DTW on a toy dataset of paths, as well as on a dataset of real-world paths.

  • 320.
    Castellano, Ginevra
    et al.
    InfoMus Lab, DIST, University of Genova.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Camurri, Antonio
    InfoMus Lab, DIST, University of Genova.
    Volpe, Gualtiero
    InfoMus Lab, DIST, University of Genova.
    Expressive Control of Music and Visual Media by Full-Body Movement2007Inngår i: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07, New York, NY, USA: ACM Press, 2007, s. 390-391Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper we describe a system which allows users to use their full-body for controlling in real-time the generation of an expressive audio-visual feedback. The system extracts expressive motion features from the user’s full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.

  • 321. Castellano, Ginevra
    et al.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Camurri, Antonio
    Volpe, Gualtiero
    User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements2007Inngår i: Affective Computing and Intelligent Interaction / [ed] Paiva, Ana; Prada, Rui; Picard, Rosalind W., Berlin / Heidelberg: Springer Berlin/Heidelberg, 2007, s. 501-510Kapittel i bok, del av antologi (Fagfellevurdert)
    Abstract [en]

    In this paper we describe a system allowing users to express themselves through their full-body movement and gesture and to control in real-time the generation of an audio-visual feedback. The system analyses in real-time the user’s full-body movement and gesture, extracts expressive motion features and maps the values of the expressive motion features onto real-time control of acoustic parameters for rendering a music performance. At the same time, a visual feedback generated in real-time is projected on a screen in front of the users with their coloured silhouette, depending on the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform and the music performance rendering with pDM. Evaluation tests were done with human participants to test the usability of the interface and the effectiveness of the design.

  • 322.
    Castellano, Ginevra
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Leite, Iolanda
    Univ Tecn Lisboa, INESC ID, Oporto, Portugal.; Univ Tecn Lisboa, Inst Super Tecn, Oporto, Portugal..
    Paiva, Ana
    Univ Tecn Lisboa, INESC ID, Oporto, Portugal.; Univ Tecn Lisboa, Inst Super Tecn, Oporto, Portugal..
    Detecting perceived quality of interaction with a robot using contextual features2017Inngår i: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 41, nr 5, s. 1245-1261Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This work aims to advance the state of the art in exploring the role of task, social context and their interdependencies in the automatic prediction of affective and social dimensions in human-robot interaction. We explored several SVM-based models with different features extracted from a set of context logs collected in a human-robot interaction experiment where children play a chess game with a social robot. The features include information about the game and the social context at the interaction level (overall features) and at the game turn level (turn-based features). While overall features capture game and social context at the interaction level, turn-based features attempt to encode the dependencies of game and social context at each turn of the game. Results showed that game and social context-based features can be successfully used to predict dimensions of quality of interaction with the robot. In particular, overall features proved to perform equally well or better than turn-based features, and game context-based features proved more effective than social context-based features. Our results show that the interplay between game and social context-based features, combined with features encoding their dependencies, leads to higher recognition performance for a subset of dimensions.

  • 323.
    Ceco, Ema
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Image Analysis in the Field of Oil Contamination Monitoring2011Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    Monitoring wear particles in lubricating oils allows specialists to evaluate the health and functionality of a mechanical system. The main analysis techniques available today are manual particle analysis and automatic optical analysis. Manual particle analysis is effective and reliable since the analyst continuously sees what is being counted. The drawback is that the technique is quite time demanding and dependent on the skills of the analyst. Automatic optical particle counting constitutes a closed system that does not allow the objects counted to be observed in real-time. This has resulted in a number of sources of error for the instrument. In this thesis a new method for counting particles based on light microscopy with image analysis is proposed. It has proven to be a fast and effective method that eliminates the sources of error of the previously described methods. The new method correlates very well with manual analysis, which is used as a reference method throughout this study. Size estimation of particles and detection of metallic particles has also been shown to be possible with the current image analysis setup. With more advanced software and analysis instrumentation, the image analysis method could be further developed into a decision-based machine allowing for declarations about which wear mode is occurring in a mechanical system.

  • 324.
    Cedernaes, Erasmus
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Runway detection in LWIR video: Real time image processing and presentation of sensor data2016Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and FPGA acceleration.

    The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, as well as by using a modified Hough line transform and a symmetric search for peaks in the accumulator returned by the Hough line transform.

    A video chain was implemented on a Xilinx ZC702 Development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM, and the detection algorithm ran on the CPU, which however did not meet the real-time requirement. Strategies were proposed that would improve the processing speed by either acceleration in hardware or algorithmic changes.
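    The thesis's modified transform is not reproduced here, but the underlying Hough voting scheme it builds on can be sketched generically (the resolution parameters and point set below are hypothetical): each point (x, y) votes for every line rho = x·cos(theta) + y·sin(theta) through it, and collinear points pile their votes into a single accumulator cell, which is what the peak search then detects.

    ```python
    import math

    # Minimal Hough line transform over a point set: accumulate votes in
    # (theta, rho) cells; the strongest cell corresponds to the dominant line.
    def hough_lines(points, n_theta=180, rho_step=1.0):
        acc = {}
        for x, y in points:
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                cell = (t, round(rho / rho_step))
                acc[cell] = acc.get(cell, 0) + 1
        return acc

    # Ten points on the horizontal line y = 10 (theta = 90 deg, rho = 10).
    pts = [(x, 10.0) for x in range(0, 50, 5)]
    acc = hough_lines(pts)
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    print(t_best, rho_best, votes)  # 90 10 10
    ```

    A real detector would quantise image edge pixels rather than exact points and, as in the thesis, search the accumulator for symmetric peak pairs corresponding to the two runway edges.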

  • 325.
    Chang, Tsung-Yao
    et al.
    Massachusetts Institute of Technology, USA.
    Pardo-Martin, Carlos
    Massachusetts Institute of Technology, USA.
    Allalou, Amin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    Wählby, Carolina
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion. Uppsala universitet, Science for Life Laboratory, SciLifeLab.
    Yanik, Mehmet Fatih
    Massachusetts Institute of Technology, USA.
    Fully automated cellular-resolution vertebrate screening platform with parallel animal processing2012Inngår i: Lab on a Chip, ISSN 1473-0197, E-ISSN 1473-0189, Vol. 12, nr 4, s. 711-716Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The zebrafish larva is an optically-transparent vertebrate model with complex organs that is widely used to study genetics, developmental biology, and to model various human diseases. In this article, we present a set of novel technologies that significantly increase the throughput and capabilities of our previously described vertebrate automated screening technology (VAST). We developed a robust multi-thread system that can simultaneously process multiple animals. System throughput is limited only by the image acquisition speed rather than by the fluidic or mechanical processes. We developed image recognition algorithms that fully automate manipulation of animals, including orienting and positioning regions of interest within the microscope’s field of view. We also identified the optimal capillary materials for high-resolution, distortion-free, low-background imaging of zebrafish larvae.

  • 326.
    Chang, Yongjun
    et al.
    KTH, Skolan för teknik och hälsa (STH).
    Smedby, Örjan
    KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Medicinsk bildbehandling och visualisering.
    Effects of preprocessing in slice-level classification of interstitial lung disease based on deep convolutional networks2018Inngår i: VipIMAGE 2017: Proceedings of the VI ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing Porto, Portugal, October 18-20, 2017, Springer Netherlands, 2018, Vol. 27, s. 624-629Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Several preprocessing methods are applied to the automatic classification of interstitial lung disease (ILD). The proposed methods are applied to the inputs of an established convolutional neural network in order to investigate their effect on slice-level classification accuracy. Experimental results demonstrate that the proposed preprocessing methods combined with a deep learning approach outperform the case where the original images are fed to the network without preprocessing.

  • 327.
    Chanussot, Jocelyn
    et al.
    Signal and Image Laboratory (LIS, Grenoble).
    Nyström, Ingela
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Sladoje, Natasa
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Shape signatures of fuzzy star-shaped sets based on distance from the centroid2005Inngår i: Pattern Recognition Letters, ISSN 0167-8655, Vol. 26, nr 6, s. 735-746Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We extend the shape signature based on the distance of the boundary points from the shape centroid, to the case of fuzzy sets. The analysis of the transition from crisp to fuzzy shape descriptor is first given in the continuous case. This is followed by a study of the specific issues induced by the discrete representation of the objects in a computer.

    We analyze two methods for calculating the signature of a fuzzy shape, derived from two ways of defining a fuzzy set: first, by its membership function, and second, as a stack of its α-cuts. The first approach is based on measuring the length of a fuzzy straight line by integration of the fuzzy membership function, while in the second we use averaging of the shape signatures obtained for the individual α-cuts of the fuzzy set. The two methods, equivalent in the continuous case for the studied class of fuzzy shapes, produce different results when adjusted to the discrete case. A statistical study, aiming at characterizing the performance of each method in the discrete case, is carried out. Both methods are shown to provide more precise descriptions than their corresponding crisp versions. The second method (based on averaged Euclidean distance over the α-cuts) outperforms the others.

  • 328.
    Cheddad, Abbas
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för datalogi och datorsystemteknik.
    Structure Preserving Binary Image Morphing using Delaunay Triangulation2017Inngår i: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 85, s. 8-14Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Mathematical morphology has been of great significance to several scientific fields. Dilation, as one of the fundamental operations, has been very much reliant on the common methods based on set theory and on using specifically shaped structuring elements to morph binary blobs. We hypothesised that by performing morphological dilation while exploiting the geometric relationship between dot patterns, one can gain some advantages. The Delaunay triangulation was our choice to examine the feasibility of such a hypothesis due to its favourable geometric properties. We compared our proposed algorithm to existing methods, and it became apparent that Delaunay-based dilation has the potential to emerge as a powerful tool in preserving object structure and elucidating the influence of noise. Additionally, defining a structuring element is no longer needed in the proposed method, and the dilation is adaptive to the topology of the dot patterns. We assessed the property of object structure preservation by using common measurement metrics. We also demonstrated this property through handwritten digit classification using HOG descriptors extracted from dilated images of different approaches and trained using Support Vector Machines. The confusion matrix shows that our algorithm has the best accuracy estimate in 80% of the cases. In both experiments, our approach shows a consistent improved performance over other methods, which advocates for the suitability of the proposed method.
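    For context, the "common method based on set theory" that the paper contrasts with can be sketched as a Minkowski-style dilation with a fixed structuring element (a toy 3×3 cross on a tiny binary image here; the paper's Delaunay-based dilation instead adapts to the dot-pattern geometry and needs no structuring element at all).

    ```python
    # Classic binary dilation: stamp the structuring element `se` (a list of
    # (dr, dc) offsets) onto every foreground pixel of `image`.
    def dilate(image, se):
        rows, cols = len(image), len(image[0])
        out = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if image[r][c]:
                    for dr, dc in se:
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
        return out

    cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 3x3 cross element
    img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]             # single centre pixel
    print(dilate(img, cross))  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
    ```

    The fixed element grows every blob identically regardless of its shape, which is precisely the structure-preservation problem the Delaunay approach addresses.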

  • 329.
    Chen, Guang
    et al.
    Tongji Univ, Coll Automot Engn, Shanghai, Peoples R China.;Tech Univ Munich, Chair Robot Artificial Intelligence & Real Time S, Munich, Germany..
    Chen, Jieneng
    Tongji Univ, Coll Elect & Informat Engn, Shanghai, Peoples R China..
    Lienen, Marten
    Tech Univ Munich, Chair Robot Artificial Intelligence & Real Time S, Munich, Germany..
    Conradt, Jörg
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Roehrbein, Florian
    Tech Univ Munich, Chair Robot Artificial Intelligence & Real Time S, Munich, Germany..
    Knoll, Alois C.
    Tech Univ Munich, Chair Robot Artificial Intelligence & Real Time S, Munich, Germany..
    FLGR: Fixed Length Gists Representation Learning for RNN-HMM Hybrid-Based Neuromorphic Continuous Gesture Recognition2019Inngår i: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 13, artikkel-id 73Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    A neuromorphic vision sensor is a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion, e.g., hand gestures. Rather than wastefully sending entire images at a fixed frame rate, neuromorphic vision sensors only transmit the local pixel-level changes induced by movement in a scene when they occur. This leads to advantageous characteristics, including low energy consumption, high dynamic range, a sparse event stream and low response latency. In this study, a novel representation learning method is proposed: Fixed Length Gists Representation (FLGR) learning for event-based gesture recognition. Previous methods accumulate events into video frames over a time duration (e.g., 30 ms) to produce an accumulated image-level representation. However, the accumulated-frame-based representation waives the friendly event-driven paradigm of the neuromorphic vision sensor. New representations are urgently needed to fill the gap in non-accumulated-frame-based representation and exploit the further capabilities of neuromorphic vision. The proposed FLGR is a sequence learned from a mixture density autoencoder and better preserves the nature of event-based data. FLGR has a fixed-length data format and is easy to feed to a sequence classifier. Moreover, an RNN-HMM hybrid is proposed to address the continuous gesture recognition problem. A recurrent neural network (RNN) is applied for FLGR sequence classification, while a hidden Markov model (HMM) is employed for localizing the candidate gesture and improving the result in a continuous sequence. A neuromorphic continuous hand gesture dataset (Neuro ConGD Dataset) with 17 hand gesture classes was developed for the neuromorphic research community. Hopefully, FLGR can inspire the study of event-based, highly efficient, high-speed, and high-dynamic-range sequence classification tasks.

  • 330.
    Cheng, Xiaogang
    et al.
    Nanjing Univ Posts & Telecommun, Coll Telecommun & Informat Engn, Nanjing 210003, Jiangsu, Peoples R China.;Swiss Fed Inst Technol, Comp Vis Lab, CH-8092 Zurich, Switzerland..
    Yang, Bin
    Xian Univ Architecture & Technol, Sch Bldg Serv Sci & Engn, Xian 710055, Shaanxi, Peoples R China.;Umea Univ, Dept Appl Phys & Elect, S-90187 Umea, Sweden..
    Tan, Kaige
    KTH.
    Isaksson, Erik
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Li, Liren
    Nanjing Tech Univ, Sch Comp Sci & Technol, Nanjing 211816, Jiangsu, Peoples R China..
    Hedman, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Olofsson, Thomas
    Umea Univ, Dept Appl Phys & Elect, S-90187 Umea, Sweden..
    Li, Haibo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID. Nanjing Univ Posts & Telecommun, Coll Telecommun & Informat Engn, Nanjing 210003, Jiangsu, Peoples R China.
    A Contactless Measuring Method of Skin Temperature based on the Skin Sensitivity Index and Deep Learning2019Inngår i: Applied Sciences, E-ISSN 2076-3417, Vol. 9, nr 7, artikkel-id 1375Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Featured Application: The NISDL method proposed in this paper can be used for real-time contactless measurement of human skin temperature, which reflects human body thermal comfort status and can be used to control HVAC devices.

    Abstract: In human-centered intelligent buildings, real-time measurements of human thermal comfort play critical roles and supply feedback control signals for building heating, ventilation, and air conditioning (HVAC) systems. Due to the challenges of intra- and inter-individual differences and skin subtleness variations, there has not been any satisfactory solution for thermal comfort measurements until now. In this paper, a contactless measuring method based on a skin sensitivity index and deep learning (NISDL) is proposed to measure real-time skin temperature. A new evaluation index, named the skin sensitivity index (SSI), is defined to overcome individual differences and skin subtleness variations. To illustrate the effectiveness of the proposed SSI, two multi-layer deep learning frameworks (NISDL methods I and II) were designed, and DenseNet201 was used for extracting features from skin images. The partly personal saturation temperature (NIPST) algorithm was used for algorithm comparisons. Another deep learning algorithm without SSI (DL) was also generated for comparison. Finally, a total of 1.44 million images were used for algorithm validation. The results show that 55.62% and 52.25% of error values (NISDL methods I and II) fall within (0 °C, 0.25 °C), while the corresponding share for NIPST is 35.39%.

  • 331.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk analys, NA (stängd 2012-06-30).
    Session summary2005Inngår i: Robotics Research: The Eleventh International Symposium, Springer Berlin/Heidelberg, 2005, s. 57-59Kapittel i bok, del av antologi (Fagfellevurdert)
    Abstract [en]

    While the current part carries the title “path planning”, the contributions in this section cover two topics: mapping and planning. In some sense one might argue that intelligent (autonomous) mapping actually requires path planning. While this is correct, the contributions actually have a broader scope, as outlined below. A common theme to all of the presentations in this section is the adoption of hybrid representations to facilitate efficient processing in complex environments. Purely geometric models allow for accurate estimation of position and motion generation, but they scale poorly with environmental complexity, while qualitative geometric models have limited accuracy and are well suited for global estimation of trajectories/locations. Through fusion of qualitative and quantitative models it becomes possible to develop systems that have tractable complexity while maintaining geometric accuracy.

  • 332.
    Christensen, Henrik I.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pacchierotti, Elena
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Embodied social interaction for robots2005Inngår i: AISB'05 Convention: Social Intelligence and Interaction in Animals, Robots and Agents: Proceedings of the Symposium on Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction, 2005, s. 40-45Konferansepaper (Fagfellevurdert)
    Abstract [en]

    A key aspect of service robotics for everyday use is the motion of systems in close proximity to humans. It is essential here that the robot exhibits a behaviour that signals safe motion and awareness of the other actors in its environment. To facilitate this, there is a need to endow the system with facilities for detection and tracking of objects in the vicinity of the platform, and to design a control law that enables motion generation which is considered socially acceptable. We present a system for indoor navigation in which the rules of proxemics are used to define interaction strategies for the platform.
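    A proxemics-informed control law of the kind described can be caricatured as speed scaling over Hall's interpersonal zones (the zone radii and linear tapers below are illustrative guesses, not the paper's controller):

    ```python
    # Toy speed-scaling law: full speed in the public zone, tapering through
    # the social and personal zones, full stop in the intimate zone.
    def speed_scale(distance_m):
        if distance_m > 3.6:    # public zone
            return 1.0
        if distance_m > 1.2:    # social zone: taper linearly from 1.0 to 0.3
            return 0.3 + 0.7 * (distance_m - 1.2) / (3.6 - 1.2)
        if distance_m > 0.45:   # personal zone: creep from 0.3 down to 0
            return 0.3 * (distance_m - 0.45) / (1.2 - 0.45)
        return 0.0              # intimate zone: stop

    print(speed_scale(5.0), speed_scale(0.3))  # 1.0 0.0
    ```

    The actual system pairs such a law with person detection and tracking, so the scaling is applied relative to each tracked human rather than a fixed obstacle.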

  • 333.
    Chrysostomou, Dimitrios
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Nalpantidis, Lazaros
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Lighting compensating multiview stereo2011Inngår i: 2011 IEEE International Conference on Imaging Systems and Techniques, IST 2011 - Proceedings, 2011, s. 176-179Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, a method that performs 3D object reconstruction from multiple views of the same scene is presented. This reconstruction method initially produces a basic model, based on the space carving algorithm, that is further refined in a subsequent step. The algorithm is fast and computationally simple, and produces accurate representations of the input scenes. In addition, compared to previously presented works, the proposed algorithm is able to cope with non-uniformly lit scenes due to the characteristics of the voxel dissimilarity measure used. The proposed algorithm is assessed and the experimental results are presented and discussed.

  • 334.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Autonomous Question Answering with Mobile Robots in Human-Populated Environments2016Inngår i: Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’16), IEEE, 2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Autonomous mobile robots will soon become ubiquitous in human-populated environments. Besides their typical applications in fetching, delivery, or escorting, such robots present the opportunity to assist human users in their daily tasks by gathering and reporting up-to-date knowledge about the environment. In this paper, we explore this use case and present an end-to-end framework that enables a mobile robot to answer natural language questions about the state of a large-scale, dynamic environment asked by the inhabitants of that environment. The system parses the question and estimates an initial viewpoint that is likely to contain information for answering the question based on prior environment knowledge. Then, it autonomously navigates towards the viewpoint while dynamically adapting to changes and new information. The output of the system is an image of the most relevant part of the environment that allows the user to obtain an answer to their question. We additionally demonstrate the benefits of a continuously operating information gathering robot by showing how the system can answer retrospective questions about the past state of the world using incidentally recorded sensory data. We evaluate our approach with a custom mobile robot deployed in a university building, with questions collected from occupants of the building. We demonstrate our system's ability to respond to these questions in different environmental conditions.

  • 335.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Designing Information Gathering Robots for Human-Populated Environments2015Inngår i: Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’15), IEEE, 2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Advances in mobile robotics have enabled robots that can autonomously operate in human-populated environments. Although primary tasks for such robots might be fetching, delivery, or escorting, they present an untapped potential as information gathering agents that can answer questions for the community of co-inhabitants. In this paper, we seek to better understand requirements for such information gathering robots (InfoBots) from the perspective of the user requesting the information. We present findings from two studies: (i) a user survey conducted in two office buildings and (ii) a 4-day long deployment in one of the buildings, during which inhabitants of the building could ask questions to an InfoBot through a web-based interface. These studies allow us to characterize the types of information that InfoBots can provide for their users.

  • 336.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Exploring the Potential of Information Gathering Robots2015Inngår i: Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts (HRI’15), ACM Digital Library, 2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Autonomous mobile robots equipped with a number of sensors will soon be ubiquitous in human populated environments. In this paper we present an initial exploration into the potential of using such robots for information gathering. We present findings from a formative user survey and a 4-day long Wizard-of-Oz deployment of a robot that answers questions such as "Is there free food on the kitchen table?" Our studies allow us to characterize the types of information that InfoBots might be most useful for.

  • 337.
    Chunming, Tang
    et al.
    Harbin Engineering University, China.
    Bengtsson, Ewert
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Automatic Tracking of Neural Stem Cells2005Inngår i: WDIC 2005: Workshop Proceedings, 2005, s. 61-66Konferansepaper (Fagfellevurdert)
    Abstract [en]

In order to understand the development of stem cells into specialized mature cells it is necessary to study the growth of cells in culture. For this purpose it is very useful to have an efficient computerized cell tracking system. In this paper a prototype system for tracking neural stem cells in a sequence of images is described. The system is automatic as far as possible, but to obtain tracking results that are as complete and correct as possible, the user can interactively verify and correct the crucial initial segmentation of the first frame and inspect the final result, correcting errors if necessary. All cells are classified into inactive, active, dividing, and clustered cells, and different algorithms are used to deal with the different cell categories. A special backtracking step automatically corrects some common errors that appear in the initial forward tracking process.
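The frame-to-frame linking at the heart of such a tracker can be illustrated as a greedy nearest-neighbour assignment of cell centroids between consecutive frames. This is a toy sketch, not the paper's actual algorithm; the centroids, distance threshold, and function name are invented for the example:

```python
import numpy as np

def link_centroids(prev, curr, max_dist=10.0):
    """Greedily match each centroid in `prev` to its nearest unclaimed
    centroid in `curr`; cells with no match within `max_dist` are
    treated as lost (or, in `curr`, as newly appeared)."""
    matches = {}
    claimed = set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)
        for j in np.argsort(d):
            if int(j) not in claimed and d[j] <= max_dist:
                matches[i] = int(j)
                claimed.add(int(j))
                break
    return matches

# Two synthetic frames: three cells drift slightly between frames.
frame0 = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 20.0]])
frame1 = np.array([[52.0, 51.0], [11.0, 12.0], [89.0, 21.0]])
print(link_centroids(frame0, frame1))  # {0: 1, 1: 0, 2: 2}
```

A real tracker layers on the refinements the abstract mentions: per-category algorithms for dividing and clustered cells, and a backtracking pass to repair greedy mistakes.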

  • 338.
    Claesson, Kenji
    Umeå universitet, Teknisk-naturvetenskaplig fakultet, Fysik.
    Implementation and Validation of Independent Vector Analysis2010Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

This Master's Thesis was part of the project called Multimodalanalysis at the Department of Biomedical Engineering and Informatics at the Umeå University Hospital in Umeå, Sweden. The aim of the project is to develop multivariate measurement and analysis methods for skeletal muscle physiology. One of the methods used to scan the muscle is functional ultrasound. In a study performed by the project group, data were acquired while test subjects followed a prescribed exercise scheme. Since there is currently no superior method for analyzing the resulting data (in the form of ultrasound video sequences), several methods are being considered. One candidate is Independent Vector Analysis (IVA), a statistical method for finding independent components in a mixture of components. This thesis concerns segmenting and analyzing the ultrasound images with the help of IVA, to assess whether it is a suitable method for this kind of task. First, the algorithm was tested on generated mixed data to determine how well it performed. The results were very accurate, considering that the method only uses approximations, although some expected deviation from the true values occurred. Once the algorithm was judged to perform satisfactorily, it was tested on the data gathered in the study, and the result may well reflect an approximation of the true solution, since the resulting segmented signals appear to move in a plausible way. However, the method has weaknesses (which have been minimized as far as possible), and all error analysis was done by eye, which is definitely a weak point. For the time being it is more important to analyze trends in the signals than exact numbers, so as long as the signals behave realistically the result cannot be said to be completely wrong. The overall results of the method were therefore deemed adequate for the application at hand.
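IVA generalizes Independent Component Analysis (ICA) to several coupled datasets. The core idea of recovering independent sources from a linear mixture can be shown with a plain two-source ICA toy: whiten the mixtures, then search for the rotation that maximizes non-Gaussianity. Everything below is a hand-rolled sketch on synthetic data, not the thesis implementation and not full IVA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources, linearly mixed.
n = 5000
S = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown mixing matrix
X = A @ S

# Step 1: whiten the mixtures so their covariance is the identity.
X = X - X.mean(axis=1, keepdims=True)
w, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(w)) @ E.T @ X

# Step 2: find the rotation maximizing total |excess kurtosis|;
# for whitened data the unmixing is orthogonal, so a 2-D angle
# search suffices.
def kurt(y):
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

best = max(np.linspace(0, np.pi / 2, 181),
           key=lambda t: sum(abs(kurt(y)) for y in rot(t) @ Z))
S_hat = rot(best) @ Z  # recovered sources (up to order/sign/scale)
```

Actual IVA keeps one such unmixing per dataset while tying corresponding components together across datasets, which is what makes it attractive for sequences of ultrasound frames.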

  • 339.
    Clarke, Emily L.
    et al.
    Univ Leeds, England; Leeds Teaching Hosp NHS Trust, England.
    Revie, Craig
    FFEI Ltd, England.
    Brettle, David
    Leeds Teaching Hosp NHS Trust, England.
    Shires, Michael
    Univ Leeds, England.
    Jackson, Peter
    Leeds Teaching Hosp NHS Trust, England.
    Cochrane, Ravinder
    FFEI Ltd, England.
    Wilson, Robert
    FFEI Ltd, England.
    Mello-Thoms, Claudia
    Univ Sydney, Australia.
    Treanor, Darren
    Linköpings universitet, Institutionen för klinisk och experimentell medicin, Avdelningen för neuro- och inflammationsvetenskap. Linköpings universitet, Medicinska fakulteten. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Region Östergötland, Diagnostikcentrum, Klinisk patologi. Univ Leeds, England; Leeds Teaching Hosp NHS Trust, England.
    Development of a novel tissue-mimicking color calibration slide for digital microscopy2018Inngår i: Color Research and Application, ISSN 0361-2317, E-ISSN 1520-6378, Vol. 43, nr 2, s. 184-197Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Digital microscopy produces high resolution digital images of pathology slides. Because no acceptable and effective control of color reproduction exists in this domain, there is significant variability in color reproduction of whole slide images. Guidance from international bodies and regulators highlights the need for color standardization. To address this issue, we systematically measured and analyzed the spectra of histopathological stains. This information was used to design a unique color calibration slide utilizing real stains and a tissue-like substrate, which can be stained to produce the same spectral response as tissue. By closely mimicking the colors in stained tissue, our target can provide more accurate color representation than film-based targets, whilst avoiding the known limitations of using actual tissue. The application of the color calibration slide in the clinical setting was assessed by conducting a pilot user-evaluation experiment with promising results. With the imminent integration of digital pathology into the routine work of the diagnostic pathologist, it is hoped that this color calibration slide will help provide a universal color standard for digital microscopy thereby ensuring better and safer healthcare delivery.

  • 340.
    Clement, Alice M.
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för organismbiologi, Evolution och utvecklingsbiologi.
    Nysjö, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Strand, Robin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Ahlberg, Per E.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för organismbiologi, Evolution och utvecklingsbiologi.
    Brain – Endocast relationship in the Australian lungfish, Neoceratodus forsteri, elucidated from tomographic data (Sarcopterygii: Dipnoi)2015Inngår i: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, nr 10, artikkel-id e0141277Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

Although the brains of the three extant lungfish genera have been previously described, the spatial relationship between the brain and the neurocranium has never before been fully described nor quantified. Through the application of virtual microtomography (µCT) and 3D rendering software, we describe aspects of the gross anatomy of the brain and labyrinth region in the Australian lungfish, Neoceratodus forsteri and compare this to previous accounts. Unexpected characters in this specimen include short olfactory peduncles connecting the olfactory bulbs to the telencephalon, and an oblong telencephalon. Furthermore, we illustrate the endocast (the mould of the internal space of the neurocranial cavity) of Neoceratodus, also describing and quantifying the brain-endocast relationship in a lungfish for the first time. Overall, the brain of the Australian lungfish closely matches the size and shape of the endocast cavity housing it, filling more than four fifths of the total volume. The forebrain and labyrinth regions of the brain correspond very well to the endocast morphology, while the midbrain and hindbrain do not fit so closely. Our results cast light on the gross neural and endocast anatomy in lungfishes, and are likely to have particular significance for palaeoneurologists studying fossil taxa.
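The reported "more than four fifths" figure is, computationally, a voxel-counting ratio once the brain and endocast have been segmented as binary masks in the tomographic volume. A minimal sketch with made-up masks standing in for real segmented µCT data (not the paper's data or code):

```python
import numpy as np

# Toy binary voxel masks: the endocast is a cube-shaped cavity,
# the brain a smaller region inside it.
endocast = np.zeros((20, 20, 20), dtype=bool)
endocast[2:18, 2:18, 2:18] = True          # 16^3 = 4096 voxels
brain = np.zeros_like(endocast)
brain[3:17, 3:17, 3:17] = True             # 14^3 = 2744 voxels

# Volume ratio = voxels occupied by brain / voxels of the cavity.
ratio = brain.sum() / endocast.sum()
print(f"brain fills {ratio:.0%} of the endocast")  # brain fills 67% of the endocast
```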

  • 341.
    Clement, Alice M.
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för organismbiologi, Evolution och utvecklingsbiologi.
    Strand, Robin
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Nysjö, Johan
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Long, John A.
    Ahlberg, Per E.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Biologiska sektionen, Institutionen för organismbiologi, Evolution och utvecklingsbiologi.
    A new method for reconstructing brain morphology: Applying the brain-neurocranial spatial relationship in an extant lungfish to a fossil endocast2016Inngår i: Royal Society Open Science, E-ISSN 2054-5703, Vol. 3, nr 7, artikkel-id 160307Artikkel i tidsskrift (Fagfellevurdert)
  • 342.
    Coeurjolly, David
    et al.
    Uppsala universitet, Fakultetsövergripande enheter, Centrum för bildanalys. Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Svensson, Stina
    Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Estimation of Curvature along Curves with Application to Fibres in 3D Images of Paper2003Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Space curves can be used to represent elongated objects in 3D images and furthermore to facilitate the computation of shape measures for the represented objects. In our specific application (fibres in 3D images of paper), we want to analyze the fibre net

  • 343.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Murray, R. M.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Synthesis of correct-by-construction behavior trees2017Inngår i: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, s. 6039-6046, artikkel-id 8206502Konferansepaper (Fagfellevurdert)
    Abstract [en]

In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in the form of a Finite Transition System, and the desired task in the form of an LTL formula, we synthesize, in polynomial time, a BT that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.
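The Behavior Trees being synthesized compose actions and conditions with two classic control-flow nodes: Sequence (succeeds only if all children succeed) and Fallback (succeeds if any child succeeds). A minimal tick-based sketch of those semantics, purely illustrative and unrelated to the paper's synthesis procedure:

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != SUCCESS:       # stop at the first non-success child
                return s
        return SUCCESS

class Fallback:
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != FAILURE:       # stop at the first non-failure child
                return s
        return FAILURE

class Leaf:
    """Stub condition/action that always reports a fixed status."""
    def __init__(self, status): self.status = status
    def tick(self): return self.status

# "Do the task if the battery is OK, otherwise go recharge":
tree = Fallback(Sequence(Leaf(FAILURE),   # condition: battery OK? -> no
                         Leaf(SUCCESS)),  # action: do task (skipped)
                Leaf(RUNNING))            # action: go recharge
print(tree.tick())  # running
```

Because every subtree exposes the same tick interface, subtrees can be swapped or extended without touching the rest of the tree, which is the modularity the synthesis approach exploits.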

  • 344.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    How Behavior Trees Modularize Robustness and Safety in Hybrid Systems2014Inngår i: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2014), IEEE , 2014, s. 1482-1488Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Behavior Trees (BTs) have become a popular framework for designing controllers of in-game opponents in the computer gaming industry. In this paper, we formalize and analyze the reasons behind the success of the BTs using standard tools of robot control theory, focusing on how properties such as robustness and safety are addressed in a modular way. In particular, we show how these key properties can be traced back to the ideas of subsumption and sequential compositions of robot behaviors. Thus BTs can be seen as a recent addition to a long research effort towards increasing modularity, robustness and safety of robot control software. To illustrate the use of BTs, we provide a set of solutions to example problems.

  • 345.
    Conrad, Christian
    et al.
    Goethe University, Germany.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Goethe University, Germany.
    LEARNING RANK REDUCED MAPPINGS USING CANONICAL CORRELATION ANALYSIS2016Inngår i: 2016 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), IEEE , 2016Konferansepaper (Fagfellevurdert)
    Abstract [en]

Correspondence relations between different views of the same scene can be learnt in an unsupervised manner. We address autonomous learning of arbitrary fixed spatial (point-to-point) mappings. Since any such transformation can be represented by a permutation matrix, the signal model is a linear one, whereas the proposed analysis method, mainly based on Canonical Correlation Analysis (CCA), rests on a generalized eigensystem problem, i.e., a nonlinear operation. The learnt transformation is represented implicitly in terms of pairs of learned basis vectors and neither uses nor requires an analytic/parametric expression for the latent mapping. We show how the rank of the signal that is shared among views may be determined from canonical correlations and how the overlapping (shared) dimensions among the views may be inferred.
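The canonical correlations from which the shared rank is read off can be computed as the singular values of the whitened cross-covariance between the two views. A compact numpy sketch on synthetic data, where view two is a fixed permutation of view one plus slight noise (a toy under invented data, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def canonical_correlations(X, Y, eps=1e-6):
    """Canonical correlations of sample matrices X, Y (n samples x d dims)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])   # eps regularizes inversion
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, E = np.linalg.eigh(C)
        return (E / np.sqrt(w)) @ E.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(K, compute_uv=False)

# Every dimension is shared, so all correlations approach 1.
X = rng.normal(size=(2000, 8))
Y = X[:, rng.permutation(8)] + 0.01 * rng.normal(size=X.shape)
rho = canonical_correlations(X, Y)
print((rho > 0.99).sum())  # shared rank estimate: 8
```

With only partially overlapping views, the correlations split into a group near one (shared dimensions) and a group near zero, which is exactly the rank-determination idea the abstract describes.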

  • 346.
    Conrad, Christian
    et al.
    Goethe University, Germany.
    Mester, Rudolf
    Linköpings universitet, Institutionen för systemteknik, Datorseende. Linköpings universitet, Tekniska fakulteten. Goethe University, Germany.
    Learning Relative Photometric Differences of Pairs of Cameras2015Inngår i: 2015 12TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), IEEE , 2015Konferansepaper (Fagfellevurdert)
    Abstract [en]

We present an approach to learn relative photometric differences between pairs of cameras which have partially overlapping fields of view. This is an important problem, especially in appearance-based methods for correspondence estimation or object identification in multi-camera systems, where grey values observed by different cameras are processed. We model intensity differences among pairs of cameras by means of a low-order polynomial (Gray Value Transfer Function, GVTF) which represents the characteristic curve of the mapping of grey values s_i produced by camera C_i to the corresponding grey values s_j acquired with camera C_j. While the estimation of the GVTF parameters is straightforward once a set of truly corresponding pairs of grey values is available, the non-trivial task in the GVTF estimation process solved in this paper is the extraction of corresponding grey value pairs in the presence of geometric and photometric errors. We also present a temporal GVTF update scheme to adapt to gradual global illumination changes, e.g., due to the change of daylight.
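Once corresponding grey-value pairs are in hand, fitting a low-order polynomial transfer function of this kind is an ordinary least-squares problem. A toy sketch with synthetic pairs, where the second camera applies a gamma-like response (this is not the authors' estimation pipeline, which must additionally extract the pairs and reject outliers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "corresponding" grey values: camera j applies a gamma-like
# response to camera i's values, plus mild measurement noise.
s_i = rng.uniform(0, 255, size=1000)
s_j = 255.0 * (s_i / 255.0) ** 0.8 + rng.normal(0, 1.0, size=s_i.shape)

# Fit a 3rd-order polynomial GVTF: s_j ~ p(s_i).
coeffs = np.polyfit(s_i, s_j, deg=3)
gvtf = np.poly1d(coeffs)

# The fitted curve should track the true response to within a few
# grey levels over most of the range.
grid = np.linspace(5, 250, 50)
err = np.max(np.abs(gvtf(grid) - 255.0 * (grid / 255.0) ** 0.8))
print(round(err, 1))
```

Applying `gvtf` to camera C_i's grey values then makes them directly comparable to C_j's, which is what appearance-based matching across the pair needs.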

  • 347.
    Cooney, Martin
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Berck, Peter
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Designing a Robot Which Paints With a Human: Visual Metaphors to Convey Contingency and Artistry2019Konferansepaper (Fagfellevurdert)
    Abstract [en]

Socially assistive robots could contribute to fulfilling an important need for interaction in contexts where human caregivers are scarce, such as art therapy, where peers, or patients and therapists, can make art together. However, current art-making robots typically generate art either by themselves, or as tools under the control of a human artist; how to make art together with a human in a good way has not yet received much attention, possibly because some concepts related to art, such as emotion and creativity, are not yet well understood. The current work reports on our use of a collaborative prototyping approach to explore this concept of a robot which can paint together with people. The result is a proposed design, based on an idea of using visual metaphors to convey contingency and artistry. Our aim is that the identified considerations will help support next steps, toward supporting positive experiences for people through art-making with a robot.

  • 348.
    Cooney, Martin
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips2017Inngår i: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 4, artikkel-id 61Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.
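The thermal cue underlying the approach, that a recently touched object stays warmer than its surroundings for some seconds, reduces at its simplest to thresholding a thermal frame against the background temperature. A heavily simplified sketch on a synthetic frame (illustrative only; the paper's method additionally handles warm backgrounds, occlusions, lip cooling, and person identification, none of which this toy attempts):

```python
import numpy as np

# Synthetic thermal frame (degrees C): a cool scene with one warm
# patch where a hand recently touched a medicine package.
frame = np.full((60, 80), 22.0)
frame += np.random.default_rng(3).normal(0, 0.1, frame.shape)
frame[20:30, 40:55] += 4.0     # residual heat from the touch

# Flag pixels clearly above the scene's background temperature.
background = np.median(frame)
touched = frame > background + 2.0
ys, xs = np.nonzero(touched)
print(touched.sum(), (ys.min(), ys.max(), xs.min(), xs.max()))
```

The bounding box of the flagged region would then be intersected with detected object locations to decide which object was handled.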

  • 349.
    Cooney, Martin
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Karlsson, Stefan M.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Impressions of Size-Changing in a Companion Robot2015Inngår i: PhyCS 2015 – 2nd International Conference on Physiological Computing Systems, Proceedings / [ed] Hugo Plácido da Silva, Pierre Chauvet, Andreas Holzinger, Stephen Fairclough & Dennis Majoe, SciTePress, 2015, s. 118-123Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Physiological data such as head movements can be used to intuitively control a companion robot to perform useful tasks. We believe that some tasks such as reaching for high objects or getting out of a person’s way could be accomplished via size changes, but such motions should not seem threatening or bothersome. To gain insight into how size changes are perceived, the Think Aloud Method was used to gather typical impressions of a new robotic prototype which can expand in height or width based on a user’s head movements. The results indicate promise for such systems, also highlighting some potential pitfalls.

  • 350.
    Cooney, Martin
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Leister, Wolfgang
    Norsk Regnesentral, Oslo, Norway.
    Using the Engagement Profile to Design an Engaging Robotic Teaching Assistant for Students2019Inngår i: Robotics, E-ISSN 2218-6581, Vol. 8, nr 1, artikkel-id 21Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

We report on an exploratory study conducted at a graduate school in Sweden with a humanoid robot, Baxter. First, we describe a list of potentially useful capabilities for a robot teaching assistant derived from brainstorming and interviews with faculty members, teachers, and students. These capabilities consist of reading educational materials out loud, greeting, alerting, allowing remote operation, providing clarifications, and moving to carry out physical tasks. Secondly, we present feedback on how the robot's capabilities, demonstrated in part with the Wizard of Oz approach, were perceived, and iteratively adapted over the course of several lectures, using the Engagement Profile tool. Thirdly, we discuss observations regarding the capabilities and the development process. Our findings suggest that using a social robot as a teaching assistant is promising with the chosen capabilities and the Engagement Profile tool. We find that enhancing the robot's autonomous capabilities and further investigating the role of embodiment are important topics for future work.
