Digitala Vetenskapliga Arkivet

  • 1. Abbeloos, W.
    et al.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ataer-Cansizoglu, E.
    Taguchi, Y.
    Feng, C.
    Lee, T.-Y.
    Detecting and Grouping Identical Objects for Region Proposal and Classification. 2017. In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2017, Vol. 2017, p. 501-502, article id 8014810. Conference paper (Refereed)
    Abstract [en]

    Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.
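
The joint-probability idea can be illustrated with a short sketch. Assuming hypothetical per-instance softmax scores from a CNN, and treating the grouped detections as independent observations of the same unknown class, the group posterior is the normalized product of the instance posteriors (an illustration of the concept, not the authors' exact formulation):

```python
import numpy as np

def fuse_group_posteriors(instance_probs: np.ndarray) -> np.ndarray:
    """Fuse per-instance class posteriors for a group of identical objects.

    instance_probs: (n_instances, n_classes) array of CNN softmax outputs.
    Returns a single (n_classes,) posterior for the whole group, assuming
    the instances are independent observations of the same class.
    """
    # Work in log space to avoid underflow for large groups.
    log_joint = np.sum(np.log(instance_probs + 1e-12), axis=0)
    log_joint -= log_joint.max()   # numerical stability before exponentiation
    joint = np.exp(log_joint)
    return joint / joint.sum()     # renormalize

# Example: three detections of the same object, individually ambiguous.
probs = np.array([[0.40, 0.35, 0.25],
                  [0.45, 0.30, 0.25],
                  [0.50, 0.25, 0.25]])
print(fuse_group_posteriors(probs))  # the first class dominates after fusion
```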

  • 2.
    Abedin, Md Reaz Ashraful
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Self-supervised language grounding by active sensing combined with Internet acquired images and text. 2017. In: Proceedings of the Fourth International Workshop on Recognition and Action for Scene Understanding (REACTS2017) / [ed] Jorge Dias, George Azzopardi, Rebeca Marf, Málaga: REACTS, 2017, p. 71-83. Conference paper (Refereed)
    Abstract [en]

    For natural and efficient verbal communication between a robot and humans, the robot should be able to learn the names and appearances of new objects it encounters. In this paper we present a solution combining active sensing of images with text-based and image-based search on the Internet. The approach allows the robot to learn both the object name and how to recognise similar objects in the future, all self-supervised without human assistance. One part of the solution is a novel iterative method to determine the object name using image classification, acquisition of images from additional viewpoints, and Internet search. In this paper, the algorithmic part of the proposed solution is presented together with evaluations using manually acquired camera images, while Internet data was acquired through direct and reverse image search with Google, Bing, and Yandex. Classification with a multi-class SVM and with five different feature settings was evaluated. With five object classes, the best performing classifier used a combination of Pyramid of Histogram of Visual Words (PHOW) and Pyramid of Histogram of Oriented Gradient (PHOG) features, and reached a precision of 80% and a recall of 78%.

  • 3. Abela, D
    et al.
    Ritchie, H
    Ababneh, D
    Gavin, C
    Nilsson, Mats F
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Pharmacy, Department of Pharmaceutical Biosciences.
    Niazi, M Khalid Khan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Carlsson, K
    Webster, WS
    The effect of drugs with ion channel-blocking activity on the early embryonic rat heart. 2010. In: Birth defects research. Part B. Developmental and reproductive toxicology, ISSN 1542-9733, E-ISSN 1542-9741, Vol. 89, no 5, p. 429-440. Article in journal (Refereed)
    Abstract [en]

    This study investigated the effects of a range of pharmaceutical drugs with ion channel-blocking activity on the heart of gestation day 13 rat embryos in vitro. The general hypothesis was that the blockade of the IKr/hERG channel, that is highly important for the normal functioning of the embryonic rat heart, would cause bradycardia and arrhythmia. Concomitant blockade of other channels was expected to modify the effects of hERG blockade. Fourteen drugs with varying degrees of specificity and affinity toward potassium, sodium, and calcium channels were tested over a range of concentrations. The rat embryos were maintained for 2 hr in culture, 1 hr to acclimatize, and 1 hr to test the effect of the drug. All the drugs caused a concentration-dependent bradycardia except nifedipine, which primarily caused a negative inotropic effect eventually stopping the heart. A number of drugs induced arrhythmias and these appeared to be related to either sodium channel blockade, which resulted in a double atrial beat for each ventricular beat, or IKr/hERG blockade, which caused irregular atrial and ventricular beats. However, it is difficult to make a precise prediction of the effect of a drug on the embryonic heart just by looking at the polypharmacological action on ion channels. The results indicate that the use of the tested drugs during pregnancy could potentially damage the embryo by causing periods of hypoxia. In general, the effects on the embryonic heart were only seen at concentrations greater than those likely to occur with normal therapeutic dosing.

  • 4.
    Abels, Esther
    et al.
    PathAI, MA USA.
    Pantanowitz, Liron
    Univ Pittsburgh, PA USA.
    Aeffner, Famke
    Amgen Inc, CA USA.
    Zarella, Mark D.
    Drexel Univ, PA 19104 USA.
    van der Laak, Jeroen
    Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Diagnostics, Clinical pathology. Linköping University, Center for Medical Image Science and Visualization (CMIV). Radboud Univ Nijmegen, Netherlands.
    Bui, Marilyn M.
    H Lee Moffitt Canc Ctr and Res Inst, FL USA.
    Vemuri, Venkata N. P.
    Chan Zuckerberg Biohub, CA USA.
    Parwani, Anil V.
    Ohio State Univ, OH 43210 USA.
    Gibbs, Jeff
    Hyman Phelps and McNamara PC, DC USA.
    Agosto-Arroyo, Emmanuel
    H Lee Moffitt Canc Ctr and Res Inst, FL USA.
    Beck, Andrew H.
    PathAI, MA USA.
    Kozlowski, Cleopatra
    Genentech Inc, CA 94080 USA.
    Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association. 2019. In: Journal of Pathology, ISSN 0022-3417, E-ISSN 1096-9896, Vol. 249, no 3, p. 286-294. Article, review/survey (Refereed)
    Abstract [en]

    In this white paper, experts from the Digital Pathology Association (DPA) define terminology and concepts in the emerging field of computational pathology, with a focus on its application to histology images analyzed together with their associated patient data to extract information. This review offers a historical perspective and describes the potential clinical benefits from research and applications in this field, as well as significant obstacles to adoption. Best practices for implementing computational pathology workflows are presented. These include infrastructure considerations, acquisition of training data, quality assessments, as well as regulatory, ethical, and cyber-security concerns. Recommendations are provided for regulators, vendors, and computational pathology practitioners in order to facilitate progress in the field. (c) 2019 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of the Pathological Society of Great Britain and Ireland.

  • 5. Abeywardena, D.
    et al.
    Wang, Zhan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dissanayake, G.
    Waslander, S. L.
    Kodagoda, S.
    Model-aided state estimation for quadrotor micro air vehicles amidst wind disturbances. 2014. Conference paper (Refereed)
    Abstract [en]

    This paper extends the recently developed Model-Aided Visual-Inertial Fusion (MA-VIF) technique for quadrotor Micro Air Vehicles (MAV) to deal with wind disturbances. The wind effects are explicitly modelled in the quadrotor dynamic equations, excluding the unobservable wind velocity component. This is achieved by a nonlinear observability analysis of the dynamic system with wind effects. We show that using the developed model, the vehicle pose and two components of the wind velocity vector can be simultaneously estimated with a monocular camera and an inertial measurement unit. We also show that the MA-VIF is reasonably tolerant to wind disturbances even without explicit modelling of wind effects, and explain the reasons for this behaviour. Experimental results using a Vicon motion capture system are presented to demonstrate the effectiveness of the proposed method and validate our claims.
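
A common way to write such a wind-augmented model is through a rotor-drag term acting in the rotor plane. The following is a generic sketch under that assumption (notation chosen here for illustration, not taken from the paper):

```latex
m\,\dot{\mathbf{v}} = -m g\,\mathbf{e}_3 + T\,R\,\mathbf{e}_3
  - R\,K_d\,R^{\top}\left(\mathbf{v} - \mathbf{v}_w\right),
\qquad K_d = \operatorname{diag}(k_1, k_2, 0),
```

where R is the body attitude, T the collective thrust, and v_w the wind velocity. The zero third entry of K_d means the drag carries no information about the wind component along the thrust axis, which is consistent with one wind component being unobservable.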

  • 6.
    Abraham, Johannes
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Romano, Robin
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Automatisk kvalitetssäkring av information för järnvägsanläggningar: Automatic quality assurance of information for railway infrastructure. 2019. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    With increased expectations on the expansion of the future railway comes an increased load on the current railway network, which can result in a growing number of cancellations and delays. By taking advantage of technological innovations such as digitalization and automation, the existing system and work processes can be developed for more efficient management. The Swedish Transport Administration sets requirements for Building Information Modeling (BIM) in procurements. At Sweco, the planning of signal installations within the railway is done using the CAD program Promis.e. From the program, lists containing the information of the objects (BIS lists) can be retrieved. The Swedish Transport Administration requires that the attributes consist of a certain format or have specific values. In this thesis project, methods for automatic quality assurance of infrastructure information, and the implementation of such a method for rail projects, were examined. The investigated methods include the calculation program Excel, the query language SQL, and the ETL process. After analyzing the methods, the ETL process was chosen. The result was a program that automatically selects the type of BIS list to be reviewed and verifies that the examined attributes contain allowed values. In order to investigate whether the cost of the programs would benefit the company in addition to the quality assurance, an economic analysis was carried out. According to the calculations, the choice of method could also be justified from an economic perspective.
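
The attribute check itself amounts to validating each list row against a set of allowed formats and values. A minimal sketch in Python, with hypothetical attribute names and rules (the actual BIS-list columns and Trafikverket requirements are not reproduced here):

```python
import csv

# Hypothetical validation rules: attribute name -> set of allowed values.
RULES = {
    "Signaltyp": {"Huvudsignal", "Dvärgsignal", "Försignal"},
    "Status": {"Planerad", "Befintlig", "Slopad"},
}

def validate_bis_rows(path: str) -> list[str]:
    """Return human-readable errors for every row that violates RULES."""
    errors = []
    with open(path, newline="", encoding="utf-8") as f:
        # Row 1 is the header, so data rows start at line 2.
        for i, row in enumerate(csv.DictReader(f, delimiter=";"), start=2):
            for attr, allowed in RULES.items():
                value = (row.get(attr) or "").strip()
                if value not in allowed:
                    errors.append(
                        f"row {i}: {attr}={value!r} not in {sorted(allowed)}")
    return errors
```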

  • 7.
    Abramian, David
    et al.
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Faculty of Science & Engineering.
    Eklund, Anders
    Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Refacing: Reconstructing Anonymized Facial Features Using GANs. 2019. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), IEEE, 2019, p. 1104-1108. Conference paper (Refereed)
    Abstract [en]

    Anonymization of medical images is necessary for protecting the identity of the test subjects, and is therefore an essential step in data sharing. However, recent developments in deep learning may raise the bar on the amount of distortion that needs to be applied to guarantee anonymity. To test such possibilities, we have applied the novel CycleGAN unsupervised image-to-image translation framework on sagittal slices of T1 MR images, in order to reconstruct facial features from anonymized data. We applied the CycleGAN framework on both face-blurred and face-removed images. Our results show that face blurring may not provide adequate protection against malicious attempts at identifying the subjects, while face removal provides more robust anonymization, but is still partially reversible.
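
For reference, the CycleGAN framework trains two generators G: X → Y and F: Y → X against discriminators D_Y and D_X with the standard objective from the original CycleGAN formulation; in this setting X would be the anonymized (blurred or face-removed) slices and Y the unmodified ones:

```latex
\mathcal{L}(G,F,D_X,D_Y) = \mathcal{L}_{\mathrm{GAN}}(G,D_Y,X,Y)
  + \mathcal{L}_{\mathrm{GAN}}(F,D_X,Y,X)
  + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G,F) =
  \mathbb{E}_{x}\!\left[\lVert F(G(x))-x\rVert_1\right]
  + \mathbb{E}_{y}\!\left[\lVert G(F(y))-y\rVert_1\right],
```

where the cycle-consistency term, weighted by λ, forces the learned mappings to be approximate inverses of each other.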

  • 8.
    Abrate, Matteo
    et al.
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Bacciu, Clara
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Hast, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Marchetti, Andrea
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Minutoli, Salvatore
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Tesconi, Maurizio
    CNR Natl Res Council, Inst Informat & Telemat, I-56124 Pisa, Italy.
    Geomemories - A Platform for Visualizing Historical, Environmental and Geospatial Changes of the Italian Landscape. 2013. In: ISPRS International Journal of Geo-Information. Special issue: Geospatial Monitoring and Modelling of Environmental Change, ISSN 2220-9964, Vol. 2, no 2, p. 432-455. Article in journal (Refereed)
    Abstract [en]

    The GeoMemories project aims at publishing on the Web and digitally preserving historical aerial photographs that are currently stored in physical form within the archives of the Aerofototeca Nazionale in Rome. We describe a system, available at http://www.geomemories.org, that lets users visualize the evolution of the Italian landscape throughout the last century. The Web portal allows comparison of recent satellite imagery with several layers of historical maps, obtained from the aerial photos through a complex workflow that merges them together. We present several case studies carried out in collaboration with geologists, historians and archaeologists, that illustrate the great potential of our system in different research fields. Experiments and advances in image processing technologies are envisaged as a key factor in solving the inherent issue of vast amounts of manual work, from georeferencing to mosaicking to analysis.

  • 9. Adinugroho, Sigit
    et al.
    Vallot, Dorothée
    Uppsala University, Disciplinary Domain of Science and Technology, Earth Sciences, Department of Earth Sciences, LUVAL.
    Westrin, Pontus
    Uppsala University, Disciplinary Domain of Science and Technology, Earth Sciences, Department of Earth Sciences.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Calving events detection and quantification from time-lapse images in Tunabreen glacier. 2015. In: Proc. 9th International Conference on Information & Communication Technology and Systems, Piscataway, NJ: IEEE, 2015, p. 61-65. Conference paper (Refereed)
  • 10.
    Adler, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Learned Iterative Reconstruction. 2023. In: Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision, Springer Nature, 2023, p. 751-771. Chapter in book (Other academic)
    Abstract [en]

    Learned iterative reconstruction methods have recently emerged as a powerful tool to solve inverse problems. These deep learning techniques for image reconstruction achieve remarkable speed and accuracy by combining hard knowledge about the physics of the image formation process, represented by the forward operator, with soft knowledge about what the reconstructions should look like, represented by deep neural networks. A diverse set of such methods has been proposed, and this chapter seeks to give an overview of their similarities and differences, as well as discussing some of the commonly used methods to improve their performance.
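
Schematically, these methods unroll a classical iterative scheme and replace its hand-crafted update with a trained network. A typical learned gradient-like step, with forward operator A, data g, and learned updates Λ with per-iteration parameters θ_k (notation chosen here for illustration), is:

```latex
f_{k+1} = \Lambda_{\theta_k}\!\left(f_k,\; A^{*}\!\left(A f_k - g\right)\right),
\qquad k = 0,\dots,K-1,
```

with f_0 given by, for example, a zero image or a filtered back-projection, and all parameters trained end-to-end against a reconstruction loss.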

  • 11.
    Adler, Jonas
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). Elekta Instrument AB, Stockholm, Sweden.
    Öktem, Ozan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Learned Primal-Dual Reconstruction. 2018. In: IEEE Transactions on Medical Imaging, ISSN 0278-0062, E-ISSN 1558-254X, Vol. 37, no 6, p. 1322-1332. Article in journal (Refereed)
    Abstract [en]

    We propose the Learned Primal-Dual algorithm for tomographic reconstruction. The algorithm accounts for a (possibly non-linear) forward operator in a deep neural network by unrolling a proximal primal-dual optimization method, but where the proximal operators have been replaced with convolutional neural networks. The algorithm is trained end-to-end, working directly from raw measured data and it does not depend on any initial reconstruction such as filtered back-projection (FBP). We compare performance of the proposed method on low dose computed tomography reconstruction against FBP, total variation (TV), and deep learning based post-processing of FBP. For the Shepp-Logan phantom we obtain >6 dB peak signal to noise ratio improvement against all compared methods. For human phantoms the corresponding improvement is 6.6 dB over TV and 2.2 dB over learned post-processing along with a substantial improvement in the structural similarity index. Finally, our algorithm involves only ten forward-back-projection computations, making the method feasible for time critical clinical applications.
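
In simplified form (omitting the memory channels used in the full method), the unrolled iteration alternates a learned dual update in data space with a learned primal update in image space:

```latex
h_{k+1} = \Gamma_{\theta_k^{d}}\!\left(h_k,\; \mathcal{T}(f_k),\; g\right),
\qquad
f_{k+1} = \Lambda_{\theta_k^{p}}\!\left(f_k,\; \mathcal{T}^{*}(h_{k+1})\right),
```

where 𝒯 is the (possibly non-linear) forward operator, 𝒯* its adjoint, g the measured data, and Γ, Λ small convolutional networks whose parameters are trained end-to-end.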

  • 12.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Castellano-Quero, Manuel
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl: Introspection for robust radar and lidar perception in diverse environments using differential entropy. 2022. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 155, article id 104136. Article in journal (Refereed)
    Abstract [en]

    Robust perception is an essential component to enable long-term operation of mobile robots. It depends on failure resilience through reliable sensor data and pre-processing, as well as failure awareness through introspection, for example the ability to self-assess localization performance. This paper presents CorAl: a principled, intuitive, and generalizable method to measure the quality of alignment between pairs of point clouds, which learns to detect alignment errors in a self-supervised manner. CorAl compares the differential entropy in the point clouds separately with the entropy in their union to account for entropy inherent to the scene. By making use of dual entropy measurements, we obtain a quality metric that is highly sensitive to small alignment errors and still generalizes well to unseen environments. In this work, we extend our previous work on lidar-only CorAl to radar data by proposing a two-step filtering technique that produces high-quality point clouds from noisy radar scans. Thus, we target robust perception in two ways: by introducing a method that introspectively assesses alignment quality, and by applying it to an inherently robust sensor modality. We show that our filtering technique combined with CorAl can be applied to the problem of alignment classification, and that it detects small alignment errors in urban settings with up to 98% accuracy, and with up to 96% if trained only in a different environment. Our lidar and radar experiments demonstrate that CorAl outperforms previous methods both on the ETH lidar benchmark, which includes several indoor and outdoor environments, and the large-scale Oxford and MulRan radar data sets for urban traffic scenarios. The results also demonstrate that CorAl generalizes very well across substantially different environments without the need of retraining.
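
A minimal sketch of the dual entropy measurement CorAl builds on, modeling each point's neighborhood as a 3D Gaussian so that its differential entropy is H = ½ ln((2πe)³ det Σ). Neighborhood handling is simplified here relative to the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_differential_entropy(points: np.ndarray, radius: float = 0.5) -> float:
    """Mean per-point entropy of an (n, 3) cloud, with each local
    neighborhood modeled as a 3D Gaussian."""
    tree = cKDTree(points)
    entropies = []
    for p in points:
        nbrs = points[tree.query_ball_point(p, radius)]
        if len(nbrs) < 5:   # too few neighbors for a stable covariance
            continue
        cov = np.cov(nbrs.T) + 1e-9 * np.eye(3)  # regularize degenerate cases
        entropies.append(
            0.5 * np.log((2 * np.pi * np.e) ** 3 * np.linalg.det(cov)))
    return float(np.mean(entropies))

def coral_quality(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    """Joint minus separate entropy of an aligned pair of clouds."""
    joint = mean_differential_entropy(np.vstack([cloud_a, cloud_b]))
    separate = 0.5 * (mean_differential_entropy(cloud_a)
                      + mean_differential_entropy(cloud_b))
    return joint - separate
```

If the clouds are well aligned, the joint entropy stays close to the mean separate entropy; misalignment inflates the union's local covariances and drives the difference up.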

  • 13.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Karlsson, Mattias
    MRO Lab of the AASS Research Centre, Örebro University, Örebro, Sweden.
    Kubelka, Vladimír
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    TBV Radar SLAM - Trust but Verify Loop Candidates. 2023. In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no 6, p. 3613-3620. Article in journal (Refereed)
    Abstract [en]

    Robust SLAM in large-scale environments requires fault resilience and awareness at multiple stages, from sensing and odometry estimation to loop closure. In this work, we present TBV (Trust But Verify) Radar SLAM, a method for radar SLAM that introspectively verifies loop closure candidates. TBV Radar SLAM achieves a high correct-loop-retrieval rate by combining multiple place-recognition techniques: tightly coupled place similarity and odometry uncertainty search, creating loop descriptors from origin-shifted scans, and delaying loop selection until after verification. Robustness to false constraints is achieved by carefully verifying and selecting the most likely ones from multiple loop constraints. Importantly, the verification and selection are carried out after registration when additional sources of loop evidence can easily be computed. We integrate our loop retrieval and verification method with a robust odometry pipeline within a pose graph framework. By evaluation on public benchmarks we found that TBV Radar SLAM achieves 65% lower error than the previous state of the art. We also show that it generalizes across environments without needing to change any parameters. We provide the open-source implementation at https://github.com/dan11003/tbv_slam_public

  • 14.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CFEAR Radarodometry - Conservative Filtering for Efficient and Accurate Radar Odometry. 2021. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), IEEE, 2021, p. 5462-5469. Conference paper (Refereed)
    Abstract [en]

    This paper presents the accurate, highly efficient, and learning-free method CFEAR Radarodometry for large-scale radar odometry estimation. By using a filtering technique that keeps the k strongest returns per azimuth and by additionally filtering the radar data in Cartesian space, we are able to compute a sparse set of oriented surface points for efficient and accurate scan matching. Registration is carried out by minimizing a point-to-line metric and robustness to outliers is achieved using a Huber loss. We were able to additionally reduce drift by jointly registering the latest scan to a history of keyframes and found that our odometry method generalizes to different sensor models and datasets without changing a single parameter. We evaluate our method in three widely different environments and demonstrate an improvement over spatially cross-validated state-of-the-art with an overall translation error of 1.76% in a public urban radar odometry benchmark, running at 55Hz merely on a single laptop CPU thread.
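
A minimal sketch of the k-strongest filtering step, assuming a raw polar radar scan given as an (azimuth × range-bin) intensity matrix; parameter names and default values are hypothetical:

```python
import numpy as np

def k_strongest_filter(scan: np.ndarray, k: int = 12,
                       min_intensity: float = 60.0) -> np.ndarray:
    """Keep the k strongest returns per azimuth, discarding weak ones.

    scan: (n_azimuths, n_range_bins) polar intensity matrix.
    Returns (azimuth_index, range_bin) pairs of the retained returns.
    """
    kept = []
    for az, row in enumerate(scan):
        # Indices of the k largest intensities in this azimuth.
        top = np.argpartition(row, -k)[-k:]
        for r in top:
            if row[r] >= min_intensity:   # drop returns below the noise floor
                kept.append((az, int(r)))
    return np.array(kept)
```

The retained returns would then be converted to Cartesian coordinates and reduced to oriented surface points for the point-to-line registration described above.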

  • 15.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Alhashimi, Anas
    Örebro University, Örebro, Sweden; Computer Engineering Department, University of Baghdad, Baghdad, Iraq.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Lidar-Level Localization With Radar? The CFEAR Approach to Accurate, Fast, and Robust Large-Scale Radar Odometry in Diverse Environments. 2023. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 39, no 2, p. 1476-1495. Article in journal (Refereed)
    Abstract [en]

    This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.

  • 16.
    Adolfsson, Daniel
    et al.
    Örebro University, School of Science and Technology.
    Magnusson, Martin
    Örebro University, School of Science and Technology.
    Liao, Qianfang
    Örebro University, School of Science and Technology.
    Lilienthal, Achim
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    CorAl – Are the point clouds Correctly Aligned? 2021. In: 10th European Conference on Mobile Robots (ECMR 2021), IEEE, 2021, Vol. 10. Conference paper (Refereed)
    Abstract [en]

    In robotics perception, numerous tasks rely on point cloud registration. However, currently there is no method that can automatically detect misaligned point clouds reliably and without environment-specific parameters. We propose "CorAl", an alignment quality measure and alignment classifier for point cloud pairs, which facilitates the ability to introspectively assess the performance of registration. CorAl compares the joint and the separate entropy of the two point clouds. The separate entropy provides a measure of the entropy that can be expected to be inherent to the environment. The joint entropy should therefore not be substantially higher if the point clouds are properly aligned. Computing the expected entropy makes the method sensitive also to small alignment errors, which are particularly hard to detect, and applicable in a range of different environments. We found that CorAl is able to detect small alignment errors in previously unseen environments with an accuracy of 95% and achieve a substantial improvement to previous methods.

  • 17.
    Aghazadeh, Omid
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Data Driven Visual Recognition. 2014. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one which models categories and one which is not category based. We are interested in data driven solutions for both kinds of problems.

    In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data driven models, we demonstrate that based on a few reference exemplars, our methods are able to detect novelties in ego-motions of people, and changes in the static environments surrounding them.

    In the category level part, we study object recognition. We consider both object category classification and localization, and propose scalable data driven approaches for both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines such as the same model initialized with less subtly designed procedures. A nonparametric large margin classifier is introduced and demonstrated to have a multitude of advantages in comparison to its competitors: better training and testing time costs, the ability to make use of indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model.

    We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performances. Based on data-describing measures which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and correlate significantly to the observed recognition performances. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate.

    We discuss various aspects of visual recognition problems: what is the interplay between representations and classification tasks, how can different models better adapt to the training data, etc. We describe and analyze the aforementioned methods that are designed to tackle different visual recognition problems, but share one common characteristic: being data driven.

  • 18.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Azizpour, Hossein
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mixture component identification and learning for visual recognition. 2012. In: Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VI, Springer, 2012, p. 115-128. Conference paper (Refereed)
    Abstract [en]

    The non-linear decision boundary between object and background classes - due to large intra-class variations - needs to be modelled by any classifier wishing to achieve good results. While a mixture of linear classifiers is capable of modelling this non-linearity, learning this mixture from weakly annotated data is non-trivial and is the paper's focus. Our approach is to identify the modes in the distribution of our positive examples by clustering, and to utilize this clustering in a latent SVM formulation to learn the mixture model. The clustering relies on a robust measure of visual similarity which suppresses uninformative clutter by using a novel representation based on the exemplar SVM. This subtle clustering of the data leads to learning better mixture models, as is demonstrated via extensive evaluations on Pascal VOC 2007. The final classifier, using a HOG representation of the global image patch, achieves performance comparable to the state-of-the-art while being more efficient at detection time.

  • 19.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Large Scale, Large Margin Classification using Indefinite Similarity Measures. Manuscript (preprint) (Other academic)
  • 20.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Properties of Datasets Predict the Performance of Classifiers. 2013. Manuscript (preprint) (Other academic)
  • 21.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Properties of Datasets Predict the Performance of Classifiers. 2013. In: BMVC 2013 - Electronic Proceedings of the British Machine Vision Conference 2013, British Machine Vision Association, BMVA, 2013. Conference paper (Refereed)
    Abstract [en]

    It has been shown that the performance of classifiers depends not only on the number of training samples, but also on the quality of the training set [10, 12]. The purpose of this paper is to 1) provide quantitative measures that determine the quality of the training set and 2) provide the relation between the test performance and the proposed measures. The measures are derived from pairwise affinities between training exemplars of the positive class and they have a generative nature. We show that the performance of state-of-the-art methods on the test set can be reasonably predicted based on the values of the proposed measures on the training set. These measures open up a wide range of applications to the recognition community, enabling us to analyze the behavior of the learning algorithms w.r.t. the properties of the training data. This will in turn enable us to devise rules for the automatic selection of training data that maximize the quantified quality of the training set and thereby improve recognition performance.
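
As an illustration, measures of this kind can be computed as aggregates of a pairwise-affinity matrix over the positive training exemplars. The two statistics below are hypothetical stand-ins chosen for the sketch, not the paper's exact measures:

```python
import numpy as np

def dataset_measures(features: np.ndarray, bandwidth: float = 1.0) -> dict:
    """Aggregate statistics of pairwise affinities between positive exemplars.

    features: (n_samples, dim) feature vectors of the positive class.
    """
    # Gaussian affinities from squared Euclidean distances.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2,
                      axis=-1)
    affinity = np.exp(-sq_dists / (2 * bandwidth ** 2))
    off_diag = affinity[~np.eye(len(features), dtype=bool)]
    eigvals = np.linalg.eigvalsh(affinity)
    return {
        # Overall compactness of the positive class.
        "mean_affinity": float(off_diag.mean()),
        # How strongly a single mode dominates the affinity structure.
        "spectral_ratio": float(eigvals[-1] / eigvals.sum()),
    }
```

Statistics like these could then be correlated against observed test performance across datasets, in the spirit of the prediction experiments described above.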

  • 22.
    Aghazadeh, Omid
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sullivan, Josephine
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Multi view registration for novelty/background separation. 2012. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE Computer Society, 2012, p. 757-764. Conference paper (Refereed)
    Abstract [en]

    We propose a system for the automatic segmentation of novelties from the background in scenarios where multiple images of the same environment are available, e.g. obtained by wearable visual cameras. Our method finds the pixels in a query image corresponding to the underlying background environment by comparing it to reference images of the same scene. This is achieved despite the fact that all the images may have different viewpoints, significantly different illumination conditions, and contain different objects (cars, people, bicycles, etc.) occluding the background. We estimate the probability of each pixel in the query image belonging to the background by computing its appearance inconsistency to the multiple reference images. We then produce multiple segmentations of the query image using an iterated graph-cuts algorithm, initialized from these estimated probabilities, and subsequently combine these segmentations into a final segmentation of the background. Detection of the background in turn highlights the novel pixels. We demonstrate the effectiveness of our approach on a challenging outdoors data set.
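
The final step is a standard binary-labeling energy minimized by graph cuts; schematically (generic formulation, not copied from the paper):

```latex
E(L) = \sum_{p} U_p(l_p)
  + \gamma \sum_{(p,q)\in\mathcal{N}} V_{pq}(l_p, l_q),
\qquad l_p \in \{\text{background},\, \text{novelty}\},
```

where the unary term U_p comes from the estimated background probability of pixel p, and the pairwise term V_{pq} penalizes label changes between similar neighboring pixels.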

  • 23.
    Agostini, Patrick
    et al.
    Wireless Communications and Networks Department, Fraunhofer Heinrich Hertz Institute, Berlin, Germany; Technische Universitat Berlin, Germany.
    Utkovski, Zoran
    Wireless Communications and Networks Department, Fraunhofer Heinrich Hertz Institute, Berlin, Germany.
    Stańczak, Sławomir
    Wireless Communications and Networks Department, Fraunhofer Heinrich Hertz Institute, Berlin, Germany; Technische Universitat Berlin, Germany.
    Memon, Aman A.
    Communications Research Laboratory Technische Universitat Ilmenau, Germany.
    Zafar, Bilal
    Communications Research Laboratory Technische Universitat Ilmenau, Germany.
    Haardt, Martin
    Communications Research Laboratory Technische Universitat Ilmenau, Germany.
    Not-Too-Deep Channel Charting (N2D-CC). 2022. In: 2022 IEEE Wireless Communications and Networking Conference (WCNC), IEEE, 2022, p. 2160-2165. Conference paper (Refereed)
    Abstract [en]

    Channel charting (CC) is an emerging machine learning method for learning a lower-dimensional representation of channel state information (CSI) in multi-antenna systems while simultaneously preserving spatial relations between CSI samples. The driving objective of CC is to learn these representations or channel charts in a fully unsupervised manner, i.e., without the need for having access to explicit geographical information. Based on recent findings in deep manifold learning, this paper addresses the problem of CC via the "not-too-deep" (N2D) approach for deep manifold learning. According to the proposed approach, an embedding of the global channel chart is first learned using a deep neural network (DNN)-based autoencoder (AE), and this embedding is subsequently searched for the underlying manifold using shallow clustering methods. In this way we are able to counter the problem of collapsing extremities - a well-known deficiency of channel charting methods, which in previous research efforts could only be mitigated by introducing side-information in the form of distance constraints. To further exploit the ever-increasing spatio-temporal CSI resolution in modern multi-antenna systems, we propose to augment the employed AE with convolutional neural network (CNN) input layers. The resulting convolutional autoencoder (CAE) architecture is able to automatically extract sparsely distributed spatio-temporal features from beamspace domain CSI, yielding a reduced computational complexity of the resulting model.
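
A compact sketch of an N2D-style pipeline, assuming preprocessed CSI feature vectors as input. Here t-SNE stands in for the shallow manifold learner (the N2D literature typically uses UMAP), and the CNN input layers proposed in the paper are omitted:

```python
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

class AE(nn.Module):
    """Plain fully connected autoencoder for CSI feature vectors."""
    def __init__(self, dim_in: int, dim_latent: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                 nn.Linear(128, dim_latent))
        self.dec = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                 nn.Linear(128, dim_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def channel_chart(csi: torch.Tensor, epochs: int = 200):
    """Learn a latent embedding with the AE, then apply a *shallow*
    manifold learner on the latent codes to obtain a 2D channel chart."""
    model = AE(csi.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = model(csi)
        loss = nn.functional.mse_loss(recon, csi)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, z = model(csi)
    return TSNE(n_components=2).fit_transform(z.numpy())
```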

  • 24.
    Agrawal, Alekh
    et al.
    Microsoft Research.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Wu, Cathy
    Massachusetts Institute of Technology.
    et al.,
    The Second Annual Conference on Learning for Dynamics and Control: Editorial. 2020. In: Proceedings of Machine Learning Research, ML Research Press, 2020, Vol. 120. Conference paper (Refereed)
  • 25.
    Agrawal, Vikas
    et al.
    IBM Research, India.
    Archibald, Christopher
    Mississippi State University, Starkville, United States.
    Bhatt, Mehul
    University of Bremen, Bremen, Germany.
    Bui, Hung Hai
    Laboratory for Natural Language Understanding, Sunnyvale CA, United States.
    Cook, Diane J.
    Washington State University, Pullman WA, United States.
    Cortés, Juan
    University of Toulouse, Toulouse, France.
    Geib, Christopher W.
    Drexel University, Philadelphia PA, United States.
    Gogate, Vibhav
    Department of Computer Science, University of Texas, Dallas, United States.
    Guesgen, Hans W.
    Massey University, Palmerston North, New Zealand.
    Jannach, Dietmar
    Technical university Dortmund, Dortmund, Germany.
    Johanson, Michael
    University of Alberta, Edmonton, Canada.
    Kersting, Kristian
    Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS), Sankt Augustin, Germany; The University of Bonn, Bonn, Germany.
    Konidaris, George
    Massachusetts Institute of Technology (MIT), Cambridge MA, United States.
    Kotthoff, Lars
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Michalowski, Martin
    Adventium Labs, Minneapolis MN, United States.
    Natarajan, Sriraam
    Indiana University, Bloomington IN, United States.
    O’Sullivan, Barry
    INSIGHT Centre for Data Analytics, University College Cork, Cork, Ireland.
    Pickett, Marc
    Naval Research Laboratory, Washington DC, United States.
    Podobnik, Vedran
    Telecommunication Department of the Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia.
    Poole, David
    Department of Computer Science, University of British Columbia, Vancouver, Canada.
    Shastri, Lokendra
    Infosys, India.
    Shehu, Amarda
    George Mason University, Washington, United States.
    Sukthankar, Gita
    University of Central Florida, Orlando FL, United States.
    The AAAI-13 Conference Workshops. 2013. In: The AI Magazine, ISSN 0738-4602, Vol. 34, no 4, p. 108-115. Article in journal (Refereed)
    Abstract [en]

    The AAAI-13 Workshop Program, a part of the 27th AAAI Conference on Artificial Intelligence, was held Sunday and Monday, July 14-15, 2013, at the Hyatt Regency Bellevue Hotel in Bellevue, Washington, USA. The program included 12 workshops covering a wide range of topics in artificial intelligence, including Activity Context-Aware System Architectures (WS-13-05); Artificial Intelligence and Robotics Methods in Computational Biology (WS-13-06); Combining Constraint Solving with Mining and Learning (WS-13-07); Computer Poker and Imperfect Information (WS-13-08); Expanding the Boundaries of Health Informatics Using Artificial Intelligence (WS-13-09); Intelligent Robotic Systems (WS-13-10); Intelligent Techniques for Web Personalization and Recommendation (WS-13-11); Learning Rich Representations from Low-Level Sensors (WS-13-12); Plan, Activity, and Intent Recognition (WS-13-13); Space, Time, and Ambient Intelligence (WS-13-14); Trading Agent Design and Analysis (WS-13-15); and Statistical Relational Artificial Intelligence (WS-13-16).

  • 26.
    Ahlberg, Carl
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Embedded high-resolution stereo-vision of high frame-rate and low latency through FPGA-acceleration. 2020. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Autonomous agents rely on information from the surrounding environment to act upon. In the array of sensors available, the image sensor is perhaps the most versatile, allowing for detection of colour, size, shape, and depth. For the latter, in a dynamic environment, assuming no a priori knowledge, stereo vision is a commonly adopted technique. How to interpret images, and extract relevant information, is referred to as computer vision. Computer vision, and specifically stereo-vision algorithms, are complex and computationally expensive, already considering a single stereo pair, with results that are, in terms of accuracy, qualitatively difficult to compare. Adding to the challenge is a continuous stream of images, of a high frame rate, and the race of ever increasing image resolutions. In the context of autonomous agents, considerations regarding real-time requirements, embedded/resource limited processing platforms, power consumption, and physical size, further add up to an unarguably challenging problem.

    This thesis aims to achieve embedded high-resolution stereo-vision of high frame-rate and low latency, by approaching the problem from two different angles, hardware and algorithmic development, in a symbiotic relationship. The first contributions of the thesis are the GIMME and GIMME2 embedded vision platforms, which offer hardware accelerated processing through FPGAs, specifically targeting stereo vision, contrary to available COTS systems at the time. The second contribution, toward stereo vision algorithms, is twofold. Firstly, the problem of scalability and the associated disparity range is addressed by proposing a segment-based stereo algorithm. In segment space, matching is independent of image scale, and similarly, disparity range is measured in terms of segments, indicating relatively few hypotheses to cover the entire range of the scene. Secondly, more in line with the conventional stereo correspondence for FPGAs, the Census Transform (CT) has been identified as a recurring cost metric. This thesis proposes an optimisation of the CT through a Genetic Algorithm (GA) - the Genetic Algorithm Census Transform (GACT). The GACT shows promising results for benchmark datasets, compared to established CT methods, while being resource efficient.
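
For context, the census transform replaces each pixel with a bit string recording whether its neighbors are darker than it, and matching cost is the Hamming distance between bit strings. A plain 3×3 version follows; the genetically optimized comparison pattern of GACT is not shown:

```python
import numpy as np

def census_transform(img: np.ndarray) -> np.ndarray:
    """3x3 census transform: each pixel becomes an 8-bit code encoding
    which neighbors are darker than the center pixel.
    (Wrap-around at image borders is ignored for brevity.)"""
    out = np.zeros(img.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out |= ((shifted < img).astype(np.uint8) << bit)
    return out

def hamming_cost(c1: np.ndarray, c2: np.ndarray) -> np.ndarray:
    """Per-pixel matching cost: popcount of the XOR of two census images."""
    return np.unpackbits((c1 ^ c2)[..., None], axis=-1).sum(axis=-1)
```

Because the transform and the Hamming cost are pure bit operations, they map naturally onto FPGA logic, which is why the CT recurs as a cost metric in FPGA stereo pipelines.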

  • 27.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Arsic, Dejan
    Munich University of Technology, Germany.
    Ganchev, Todor
    University of Patras, Greece.
    Linderhed, Anna
    FOI Swedish Defence Research Agency.
    Menezes, Paolo
    University of Coimbra, Portugal.
    Ntalampiras, Stavros
    University of Patras, Greece.
    Olma, Tadeusz
    MARAC S.A., Greece.
    Potamitis, Ilyas
    Technological Educational Institute of Crete, Greece.
    Ros, Julien
    Probayes SAS, France.
    Prometheus: Prediction and interpretation of human behaviour based on probabilistic structures and heterogeneous sensors. 2008. Conference paper (Refereed)
    Abstract [en]

    The on-going EU funded project Prometheus (FP7-214901) aims at establishing a general framework which links fundamental sensing tasks to automated cognition processes enabling interpretation and short-term prediction of individual and collective human behaviours in unrestricted environments as well as complex human interactions. To achieve the aforementioned goals, the Prometheus consortium works on the following core scientific and technological objectives:

    1. sensor modeling and information fusion from multiple, heterogeneous perceptual modalities;

    2. modeling, localization, and tracking of multiple people;

    3. modeling, recognition, and short-term prediction of continuous complex human behavior.

  • 28.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Berg, Amanda
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Termisk Systemteknik AB, Linköping, Sweden.
    Evaluating Template Rescaling in Short-Term Single-Object Tracking. 2015. Conference paper (Refereed)
    Abstract [en]

    In recent years, short-term single-object tracking has emerged as a popular research topic, as it constitutes the core of more general tracking systems. Many such tracking methods are based on matching a part of the image with a template that is learnt online and represented by, for example, a correlation filter or a distribution field. In order for such a tracker to be able to find not only the position, but also the scale, of the tracked object in the next frame, some kind of scale estimation step is needed. This step is sometimes separate from the position estimation step, but is nevertheless jointly evaluated in de facto benchmarks. However, for practical as well as scientific reasons, the scale estimation step should be evaluated separately – for example, there might in certain situations be other methods more suitable for the task. In this paper, we describe an evaluation method for scale estimation in template-based short-term single-object tracking, and evaluate two state-of-the-art tracking methods where estimation of scale and position are separable.

  • 29.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Efficient active appearance model for real-time head and facial feature tracking. 2003. In: Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, IEEE conference proceedings, 2003, p. 173-180. Conference paper (Refereed)
    Abstract [en]

    We address the 3D tracking of pose and animation of the human face in monocular image sequences using active appearance models. The classical appearance-based tracking suffers from two disadvantages: (i) the estimated out-of-plane motions are not very accurate, and (ii) the convergence of the optimization process to desired minima is not guaranteed. We aim at designing an efficient active appearance model, which is able to cope with the above disadvantages by retaining the strengths of feature-based and featureless tracking methodologies. For each frame, the adaptation is split into two consecutive stages. In the first stage, the 3D head pose is recovered using robust statistics and a measure of consistency with a statistical model of a face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of the active appearance model search. Tracking experiments and method comparison demonstrate the robustness and out-performance of the developed framework.

  • 30.
    Ahlberg, Jörgen
    et al.
    Dept. of IR Systems, Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Dornaika, Fadi
    Computer Vision Center, Universitat Autonoma de Barcelona, Bellaterra, Spain.
    Parametric Face Modeling and Tracking. 2005. In: Handbook of Face Recognition / [ed] Stan Z. Li, Anil K. Jain, Springer-Verlag New York, 2005, p. 65-87. Chapter in book (Other academic)
  • 31.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology. Div. of Sensor Technology, Swedish Defence Research Agency, Linköping, Sweden.
    Forchheimer, Robert
    Linköping University, Department of Electrical Engineering, Image Coding. Linköping University, The Institute of Technology.
    Face tracking for model-based coding and face animation. 2003. In: International journal of imaging systems and technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 13, no 1, p. 8-22. Article in journal (Refereed)
    Abstract [en]

    We present a face and facial feature tracking system able to extract animation parameters describing the motion and articulation of a human face in real-time on consumer hardware. The system is based on a statistical model of face appearance and a search algorithm for adapting the model to an image. Speed and robustness are discussed, and the system is evaluated in terms of accuracy.

  • 32.
    Ahlberg, Jörgen
    et al.
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Klasén, Lena
    Div. of Sensor Technology, Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Surveillance Systems for Urban Crisis Management. 2005. Conference paper (Other academic)
    Abstract [en]

    We present a concept for combining 3D models and multiple heterogeneous sensors into a surveillance system enabling superior situation awareness. The concept has many military as well as civilian applications. A key issue is the use of a 3D environment model of the area to be surveyed, typically an urban area. In addition to the 3D model, the area of interest is monitored over time using multiple heterogeneous sensors, such as optical, acoustic, and/or seismic sensors. Data and analysis results from the sensors are visualized in the 3D model, thus putting them in a common reference frame and making their spatial and temporal relations obvious. The result is highlighted by an example where data from different sensor systems is integrated in a 3D model of a Swedish urban area.

  • 33.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Li, Haibo
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Representing and Compressing MPEG-4 Facial Animation Parameters using Facial Action Basis Functions. 1999. In: IEEE Transactions on Circuits and Systems, ISSN 0098-4094, E-ISSN 1558-1276, Vol. 9, no 3, p. 405-410. Article in journal (Refereed)
    Abstract [en]

    In model-based, or semantic, coding, parameters describing the nonrigid motion of objects, e.g., the mimics of a face, are of crucial interest. The facial animation parameters (FAPs) specified in MPEG-4 compose a very rich set of such parameters, allowing a wide range of facial motion. However, the FAPs are typically correlated and also constrained in their motion due to the physiology of the human face. We seek here to utilize this spatial correlation to achieve efficient compression. As it does not introduce any interframe delay, the method is suitable for interactive applications, e.g., videophone and interactive video, where low delay is a vital issue.
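
Written compactly, the FAP vector of a frame is approximated in a low-dimensional basis of facial actions, and only the coefficients are coded (generic notation, chosen here for illustration):

```latex
\mathbf{f} \;\approx\; \bar{\mathbf{f}} + \Phi\,\mathbf{a},
```

where f̄ is a mean FAP vector, the columns of Φ are facial action basis functions capturing the spatial correlation between FAPs, and the short coefficient vector a is what is quantized and transmitted per frame; with no interframe prediction, no interframe delay is introduced.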

  • 34.
    Ahlberg, Jörgen
    et al.
    Termisk Systemteknik AB, Linköping, Sweden; Visage Technologies AB, Linköping, Sweden.
    Markuš, Nenad
    Human-Oriented Technologies Laboratory, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia.
    Berg, Amanda
    Termisk Systemteknik AB, Linköping, Sweden.
    Multi-person fever screening using a thermal and a visual camera2015Conference paper (Other academic)
    Abstract [en]

    We propose a system to automatically measure the body temperature of persons as they pass. In contrast to existing systems, the persons do not need to stop and look into a camera one by one. Instead, their eye corners are automatically detected and the temperatures therein are measured using a thermal camera. The system handles multiple simultaneous persons and can thus be used where a flow of people pass, such as at airport gates.
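    A hedged sketch of the measurement step, assuming the eye-corner detection and thermal-visual registration are already solved upstream: read the thermal value at each detected eye corner and flag persons above a threshold. The threshold and all coordinates below are invented for the example.

    import numpy as np

    THRESHOLD_C = 37.5                               # assumed fever threshold

    thermal = 36.0 + np.random.rand(480, 640)        # stand-in thermal frame (deg C)
    detections = {"person_0": [(120, 200), (122, 230)],   # (row, col) eye corners
                  "person_1": [(300, 410), (302, 440)]}

    for person, corners in detections.items():
        # Max over a small window around each corner, for robustness.
        temps = [thermal[r - 1:r + 2, c - 1:c + 2].max() for r, c in corners]
        print(person, "fever" if max(temps) > THRESHOLD_C else "ok")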

    Download full text (pdf)
    fulltext
  • 35.
    Ahlberg, Jörgen
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Glana Sensors AB, Sweden.
    Renhorn, Ingmar
    Glana Sensors AB, Sweden.
    Chevalier, Tomas
    Scienvisic AB, Sweden.
    Rydell, Joakim
    FOI, Swedish Defence Research Agency, Sweden.
    Bergström, David
    FOI, Swedish Defence Research Agency, Sweden.
    Three-dimensional hyperspectral imaging technique2017In: Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIII / [ed] Miguel Velez-Reyes; David W. Messinger, SPIE - International Society for Optical Engineering, 2017, Vol. 10198, article id 1019805Conference paper (Refereed)
    Abstract [en]

    Hyperspectral remote sensing based on unmanned airborne vehicles is a field increasing in importance. The combined functionality of simultaneous hyperspectral and geometric modeling is less developed. A configuration has been developed that enables the reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high frame rate, high resolution camera enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single and complete 3D hyperspectral model. In this paper, we describe the camera and illustrate capabilities and difficulties through real-world experiments.

    Download full text (pdf)
    fulltext
  • 36.
    Ahlberg, Jörgen
    et al.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Renhorn, Ingmar G.
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    Wadströmer, Niclas
    Swedish Defence Research Agency (FOI), Linköping, Sweden.
    An information measure of sensor performance and its relation to the ROC curve2010In: Proc. SPIE 7695, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVI / [ed] Sylvia S. Shen; Paul E. Lewis, SPIE - International Society for Optical Engineering, 2010, article id 7695-72Conference paper (Refereed)
    Abstract [en]

    The ROC curve is the most frequently used performance measure for detection methods and the underlying sensor configuration. Common problems are that the ROC curve does not provide a single number that can be compared across systems and that it does not discriminate between sensor performance and algorithm performance. To address the first problem, a number of measures are used in practice, such as the detection rate at a specific false alarm rate, or the area under the curve. For the second problem, we proposed in a previous paper [1] an information-theoretic method for measuring sensor performance. We now relate that method to the ROC curve, show that it is equivalent to selecting a certain point on the ROC curve, and that this point is easily determined. Our scope is hyperspectral data, studying discrimination between single pixels.
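    One way to make the claim concrete (a sketch under assumed definitions, not necessarily the paper's exact measure): score each ROC operating point by the mutual information between the detector output and the ground truth at a chosen prior, and select the maximizing point.

    import numpy as np

    def mutual_information(tpr, fpr, prior=0.01):
        p_t = np.array([1 - prior, prior])               # P(T)
        p_d_given_t = np.array([[1 - fpr, fpr],          # P(D | T=0)
                                [1 - tpr, tpr]])         # P(D | T=1)
        joint = p_t[:, None] * p_d_given_t               # P(T, D)
        p_d = joint.sum(axis=0)                          # P(D)
        indep = p_t[:, None] * p_d[None, :]
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / indep[nz])).sum())

    # Toy ROC curve as (false positive rate, true positive rate) points.
    roc = [(0.001, 0.40), (0.01, 0.70), (0.05, 0.90), (0.20, 0.98)]
    best = max(roc, key=lambda pt: mutual_information(pt[1], pt[0]))
    print("selected operating point (fpr, tpr):", best)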

  • 37.
    Ahlberg, Sofie
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Mixed-Initiative Control Synthesis: Estimating an Unknown Task Based on Human Control Input2020In: Proceedings of the 3rd IFAC Workshop on Cyber-Physical & Human Systems, 2020Conference paper (Refereed)
    Abstract [en]

    In this paper we consider a mobile platform controlled by two entities: an autonomous agent and a human user. The human aims for the mobile platform to complete a task, which we will denote as the human task, and will impose a control input accordingly, while not being aware of any other tasks the system should or must execute. The autonomous agent will in turn plan its control input taking into consideration all safety requirements which must be met, some task which should be completed as much as possible (denoted as the robot task), as well as what it believes the human task is, based on previous human control input. A framework for the autonomous agent and a mixed-initiative controller are designed to guarantee the satisfaction of the safety requirements while both the human and robot tasks are violated as little as possible. The framework includes an estimation algorithm of the human task which will improve with each cycle, eventually converging to a task which is similar to the actual human task. Hence, the autonomous agent will eventually be able to find the optimal plan considering all tasks, and the human will have no need to interfere again. The process is illustrated with a simulated example.

    Download full text (pdf)
    fulltext
  • 38.
    Ahlman, Gustav
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Improved Temporal Resolution Using Parallel Imaging in Radial-Cartesian 3D functional MRI2011Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    MRI (Magnetic Resonance Imaging) is a medical imaging method that uses magnetic fields to retrieve images of the human body. This thesis revolves around a novel acquisition method for 3D fMRI (functional Magnetic Resonance Imaging) called PRESTO-CAN, which uses a radial pattern to sample the (kx,kz)-plane of k-space (the frequency domain) and a Cartesian sample pattern in the ky-direction. The radial sample pattern allows for denser sampling of the central parts of k-space, which contain the most basic frequency information about the structure of the recorded object. This allows higher temporal resolution to be achieved compared with other sampling methods, since fewer total samples are needed to retrieve enough information about how the object has changed over time. Since fMRI is mainly used for monitoring blood flow in the brain, increased temporal resolution means that fast changes in brain activity can be tracked more efficiently.

    The temporal resolution can be further improved by reducing the time needed for scanning, which in turn can be achieved by applying parallel imaging. One such parallel imaging method is SENSE (SENSitivity Encoding). The scan time is reduced by decreasing the sampling density, which causes aliasing in the recorded images. The SENSE method removes the aliasing by utilizing the extra information provided by the fact that multiple receiver coils with differing sensitivities are used during the acquisition. By measuring the sensitivities of the respective receiver coils and solving an equation system with the aliased images, it is possible to calculate how they would have looked without aliasing.

    In this master thesis, SENSE has been successfully implemented in PRESTO-CAN. By using normalized convolution to refine the sensitivity maps of the receiver coils, images of satisfactory quality could be reconstructed when reducing the k-space sample rate by a factor of 2, and images of relatively good quality also when the sample rate was reduced by a factor of 4. In this way, this thesis contributes to the improvement of the temporal resolution of the PRESTO-CAN method.
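    The SENSE unfolding step described above reduces, per pixel, to a small linear system. A minimal sketch for reduction factor R = 2 with synthetic sensitivities:

    import numpy as np

    rng = np.random.default_rng(2)
    n_coils, R = 4, 2
    x_true = rng.random(R)                 # two superimposed pixel values
    S = rng.random((n_coils, R))           # coil sensitivities at those locations
    y = S @ x_true + 1e-3 * rng.normal(size=n_coils)   # aliased coil measurements

    # Unfold: solve the overdetermined system y = S x in the least-squares sense.
    x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
    print(x_true, x_hat)                   # the estimates match closely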

    Download full text (pdf)
    Gustav_Ahlman_Examensarbete_SENSE
  • 39.
    Ahlqvist, Axel
    Linköping University, Department of Electrical Engineering, Computer Vision.
    Examining Difficulties in Weed Detection2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Automatic detection of weeds could be used for more efficient weed control in agriculture. In this master thesis, weed detectors have been trained and examined on data collected by RISE to investigate whether an accurate weed detector could be trained on the collected data. When only annotations of the weed class Creeping thistle were used for training and evaluation, a detector achieved a mAP of 0.33. When four classes of weed were used, a detector was trained to a mAP of 0.07. The performance was worse than in a previous study also dealing with weed detection. Hypotheses for why the performance was lacking were examined. Experiments indicated that the problem could not fully be explained by the model being underfitted, nor by the objects' backgrounds being too similar to the foreground, nor by the quality of the annotations being too low. The performance was better when the model was trained with as much data as possible than when only selected segments of the data were used.

    Download full text (pdf)
    fulltext
  • 40.
    Ahmadian, Amirhossein
    et al.
    Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering.
    Lindsten, Fredrik
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Department of Computer and Information Science, The Division of Statistics and Machine Learning. Linköping University, Faculty of Science & Engineering.
    Enhancing Representation Learning with Deep Classifiers in Presence of Shortcut2023In: Proceedings of IEEE ICASSP 2023, 2023Conference paper (Refereed)
    Abstract [en]

    A deep neural classifier trained on an upstream task can be leveraged to boost the performance of another classifier in a related downstream task through the representations learned in hidden layers. However, the presence of shortcuts (easy-to-learn features) in the upstream task can considerably impair the versatility of intermediate representations and, in turn, the downstream performance. In this paper, we propose a method to improve the representations learned by deep neural image classifiers in spite of a shortcut in upstream data. In our method, the upstream classification objective is augmented with a type of adversarial training where an auxiliary network, a so-called lens, fools the classifier by exploiting the shortcut when reconstructing images. Empirical comparisons in self-supervised and transfer learning problems with three shortcut-biased datasets suggest the advantages of our method in terms of downstream performance and/or training time.
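    One plausible reading of this training setup, as a hedged PyTorch sketch (tiny stand-in networks and random data, not the authors' code): the lens reconstructs the input while trying to fool the classifier, and the classifier trains on both the original and the reconstructed images.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    clf = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
    lens = nn.Sequential(nn.Linear(64, 64), nn.Tanh())     # image-to-image
    opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
    opt_l = torch.optim.Adam(lens.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(16, 64)            # stand-in (flattened) images
        y = torch.randint(0, 2, (16,))     # labels
        # Lens: stay close to x but make the classifier fail (adversarial term).
        x_rec = lens(x)
        loss_l = ((x_rec - x) ** 2).mean() - ce(clf(x_rec), y)
        opt_l.zero_grad(); loss_l.backward(); opt_l.step()
        # Classifier: do well on both original and reconstructed images.
        opt_c.zero_grad()
        loss_c = ce(clf(x), y) + ce(clf(lens(x).detach()), y)
        loss_c.backward(); opt_c.step()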

  • 41.
    Ahmed, Mobyen Uddin
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Altarabichi, Mohammed Ghaith
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Begum, Shahina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ginsberg, Fredrik
    Mälardalen University.
    Glaes, Robert
    Mälardalen University.
    Östgren, Magnus
    Mälardalen University.
    Rahman, Hamidur
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sorensen, Magnus
    Mälardalen University.
    A vision-based indoor navigation system for individuals with visual impairment2019In: International Journal of Artificial Intelligence, E-ISSN 0974-0635, Vol. 17, no 2, p. 188-201Article in journal (Refereed)
    Abstract [en]

    Navigation and orientation in an indoor environment are challenging tasks for visually impaired people. This paper proposes a portable vision-based system to support visually impaired persons in their daily activities. Here, machine learning algorithms are used for obstacle avoidance and object recognition. The system is intended to be used independently, easily and comfortably, without human help. The system assists in obstacle avoidance using cameras and gives voice-message feedback, using a pre-trained YOLO neural network for object recognition. In other parts of the system, a floor plane estimation algorithm is proposed for obstacle avoidance, and fuzzy logic is used to prioritize the detected objects in a frame and generate alerts to the user about possible risks. The system is implemented using the Robot Operating System (ROS) for communication on an Nvidia Jetson TX2 with a ZED stereo camera for depth calculations and headphones for user feedback, with the capability to accommodate different hardware setups. The parts of the system give varying results when evaluated, so a large-scale evaluation is needed before the system can be turned into a commercial product in this area.
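    The prioritization step can be illustrated with a toy fuzzy-style rule (memberships and weights invented for the sketch; the paper's actual fuzzy system is not reproduced here): nearer and more central detections get higher alert priority.

    def risk_score(distance_m, center_offset):
        # "near" and "in path" memberships, combined with a fuzzy AND (min rule).
        near = max(0.0, 1.0 - distance_m / 5.0)
        central = max(0.0, 1.0 - abs(center_offset))     # offset in [-1, 1]
        return min(near, central)

    detections = [("chair", 1.2, 0.1), ("door", 4.0, 0.8), ("person", 2.0, 0.0)]
    for label, dist, offset in sorted(detections,
                                      key=lambda d: -risk_score(d[1], d[2])):
        print(label, round(risk_score(dist, offset), 2))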

  • 42.
    Ahmed, Muhammad
    et al.
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Hashmi, Khurram Azeem
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Pagani, Alain
    German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Stricker, Didier
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Afzal, Muhammad Zeshan
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Survey and Performance Analysis of Deep Learning Based Object Detection in Challenging Environments2021In: Sensors, E-ISSN 1424-8220, Vol. 21, no 15Article, review/survey (Refereed)
    Abstract [en]

    Recent progress in deep learning has led to accurate and efficient generic object detection networks. Training of highly reliable models depends on large datasets with highly textured and rich images. However, in real-world scenarios, the performance of the generic object detection system decreases when (i) occlusions hide the objects, (ii) objects are present in low-light images, or (iii) they are merged with background information. In this paper, we refer to all these situations as challenging environments. With the recent rapid development in generic object detection algorithms, notable progress has been observed in the field of deep learning-based object detection in challenging environments. However, there is no consolidated reference to cover the state of the art in this domain. To the best of our knowledge, this paper presents the first comprehensive overview, covering recent approaches that have tackled the problem of object detection in challenging environments. Furthermore, we present a quantitative and qualitative performance analysis of these approaches and discuss the currently available challenging datasets. Moreover, this paper investigates the performance of current state-of-the-art generic object detection algorithms by benchmarking results on the three well-known challenging datasets. Finally, we highlight several current shortcomings and outline future directions.

  • 43.
    Ahmed, Soban
    et al.
    National University of Computer and Emerging Sciences, Pakistan.
    Bhatti, Muhammad Tahir
    National University of Computer and Emerging Sciences, Pakistan.
    Khan, Muhammad Gufran
    National University of Computer and Emerging Sciences, Pakistan.
    Lövström, Benny
    Blekinge Institute of Technology, Faculty of Engineering, Department of Mathematics and Natural Sciences.
    Shahid, Muhammad
    National University of Computer and Emerging Sciences, Pakistan.
    Development and Optimization of Deep Learning Models for Weapon Detection in Surveillance Videos2022In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no 12, article id 5772Article in journal (Refereed)
    Abstract [en]

    Featured Application: This work applies computer vision and deep learning technology to develop a real-time weapon detection system, tested on different computing devices for large-scale deployment. Weapon detection in CCTV camera surveillance videos is a challenging task, and its importance is increasing because of the availability and easy access of weapons in the market. This becomes a big problem when weapons go into the wrong hands and are misused. Advances in computer vision and object detection enable us to detect weapons in live videos without human intervention and, in turn, to make intelligent decisions to protect people from dangerous situations. In this article, we develop and present an improved real-time weapon detection system that shows a higher mean average precision (mAP) score and better inference-time performance than previously proposed approaches in the literature. Using a custom weapons dataset, we implemented a state-of-the-art Scaled-YOLOv4 model that resulted in a 92.1 mAP score and 85.7 frames per second (FPS) on a high-performance GPU (RTX 2080TI). Furthermore, to achieve the benefits of lower latency, higher throughput, and improved privacy, we optimized our model for implementation on a popular edge-computing device (Jetson Nano GPU) with the TensorRT network optimizer. We also performed a comparative analysis of the previous weapon detector and our presented model on different CPU and GPU machines, making the selection of a model and computing device for real-time deployment easier for users. The analysis shows that our presented models yield improved mAP scores on high-performance GPUs (such as the RTX 2080TI) as well as on low-cost edge-computing GPUs (such as the Jetson Nano) for weapon detection in live CCTV camera surveillance videos.
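    The FPS comparison reported above can be reproduced in spirit with a simple timing harness (a sketch with a stand-in network; the paper used Scaled-YOLOv4, optionally TensorRT-optimized):

    import time
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 16, 3, padding=1)).eval()
    frame = torch.randn(1, 3, 416, 416)    # one stand-in input frame

    with torch.no_grad():
        for _ in range(5):                 # warm-up passes
            model(frame)
        n, t0 = 50, time.perf_counter()
        for _ in range(n):
            model(frame)
        fps = n / (time.perf_counter() - t0)
    print(f"{fps:.1f} FPS")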

    Download full text (pdf)
    fulltext
  • 44.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Deep Learning Approach with Data Augmentation to Recognize Facial Expressions in Real Time2022In: Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering: TCCE 2021 / [ed] M. Shamim Kaiser; Kanad Ray; Anirban Bandyopadhyay; Kavikumar Jacob; Kek Sie Long, Springer Nature, 2022, p. 487-500Conference paper (Refereed)
    Abstract [en]

    The enormous use of facial expression recognition in various sectors of computer science has raised researchers' interest in this topic. Computer vision coupled with deep learning offers a way to solve several real-world problems. For instance, in robotics, analyzing information from visual content is one of the requirements for carrying out and strengthening the communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in the area of computer vision. In our previous work, we delivered a facial expression recognition system that can classify an image into seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This paper extends that research by proposing a real-time facial expression recognition system that can recognize a total of ten facial expressions from video streaming data, the previous seven plus three additional expressions: mockery, think, and wink. After training, the proposed model achieved high validation accuracy on a combined facial expression dataset, and its real-time validation is also promising.
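    As an illustration of the data augmentation named in the title (the exact transforms used in the paper are not specified here), a minimal torchvision pipeline might look as follows:

    import numpy as np
    from PIL import Image
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # A synthetic 48x48 face image stands in for a dataset sample.
    face = Image.fromarray(np.uint8(np.random.rand(48, 48, 3) * 255))
    print(augment(face).shape)             # torch.Size([3, 48, 48])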

  • 45.
    Ahnaf, S.M. Azoad
    et al.
    Computational Color and Spectral Image Analysis Lab, Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh.
    Rahaman, G. M. Atiqur
    Computational Color and Spectral Image Analysis Lab, Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh.
    Saha, Sajib
    Australian e-health Research Centre, CSIRO, Perth, Australia.
    Understanding CNN's Decision Making on OCT-based AMD Detection2021In: 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), 14-16 Sept. 2021, IEEE, 2021, p. 1-4Conference paper (Refereed)
    Abstract [en]

    Age-related macular degeneration (AMD) is the third leading cause of incurable acute central vision loss. Optical coherence tomography (OCT) is a diagnostic modality used for detecting both AMD and diabetic macular edema (DME). Spectral-domain OCT (SD-OCT), an improvement over traditional OCT, has revolutionized AMD assessment thanks to its high acquisition rate, efficiency, and resolution. Many techniques have been adopted to distinguish AMD from normal OCT scans, and automatic AMD detection has recently become popular, helped greatly by deep convolutional neural networks (CNNs). Despite achieving better performance, CNN models are often criticized for not justifying their decisions. In this paper, we aim to visualize and critically analyze the decisions of CNNs in context-based AMD detection. Multiple experiments were carried out on the DUKE OCT dataset, utilizing transfer learning with ResNet50 and VGG16 models. After training the models for AMD detection, Gradient-weighted Class Activation Mapping (Grad-CAM) was used for feature visualization, and each retinal layer mask was compared with the feature-mapped image. We found that the region from the outer nuclear layer to the inner segment myeloid (ONL-ISM) predominates in decision making, accounting for about 17.13% for normal scans and 6.64% for AMD.
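    The layer-overlap analysis reduces to comparing a Grad-CAM heatmap with binary layer masks. A sketch with synthetic arrays (the mask geometry is invented for the example):

    import numpy as np

    rng = np.random.default_rng(3)
    heatmap = rng.random((224, 224))       # Grad-CAM activation map
    layer_mask = np.zeros((224, 224), bool)
    layer_mask[120:150, :] = True          # stand-in ONL-ISM band

    share = heatmap[layer_mask].sum() / heatmap.sum()
    print(f"{100 * share:.2f}% of activation mass inside the layer")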

  • 46.
    Ahtiainen, Juhana
    et al.
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Saarinen, Jari
    GIM Ltd., Espoo, Finland.
    Normal Distributions Transform Traversability Maps: LIDAR-Only Approach for Traversability Mapping in Outdoor Environments2017In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 34, no 3, p. 600-621Article in journal (Refereed)
    Abstract [en]

    Safe and reliable autonomous navigation in unstructured environments remains a challenge for field robots. In particular, operating on vegetated terrain is problematic, because simple purely geometric traversability analysis methods typically classify dense foliage as nontraversable. As traversing through vegetated terrain is often possible and even preferable in some cases (e.g., to avoid executing longer paths), more complex multimodal traversability analysis methods are necessary. In this article, we propose a three-dimensional (3D) traversability mapping algorithm for outdoor environments, able to classify sparsely vegetated areas as traversable, without compromising accuracy on other terrain types. The proposed normal distributions transform traversability mapping (NDT-TM) representation exploits 3D LIDAR sensor data to incrementally expand normal distributions transform occupancy (NDT-OM) maps. In addition to geometrical information, we propose to augment the NDT-OM representation with statistical data of the permeability and reflectivity of each cell. Using these additional features, we train a support-vector machine classifier to discriminate between traversable and nondrivable areas of the NDT-TM maps. We evaluate classifier performance on a set of challenging outdoor environments and note improvements over previous purely geometrical traversability analysis approaches.
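    The final classification stage lends itself to a short sketch: per-cell NDT-TM features (geometry plus the proposed permeability and reflectivity statistics) fed to a support-vector machine. The feature values and labeling rule below are synthetic placeholders.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    # Assumed per-cell feature layout: roughness, slope, permeability, reflectivity.
    X = rng.random((200, 4))
    y = (X[:, 2] > 0.5).astype(int)        # toy rule: permeable cells traversable

    clf = SVC(kernel="rbf").fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))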

  • 47.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    A Taxonomy of Factors Influencing Perceived Safety in Human-Robot Interaction2023In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 15, p. 1993-2004Article in journal (Refereed)
    Abstract [en]

    Safety is a fundamental prerequisite that must be addressed before any interaction of robots with humans. Safety has generally been understood and studied as the physical safety of robots in human-robot interaction, whereas how humans perceive these robots has received less attention. Physical safety is a necessary condition for safe human-robot interaction, but it is not a sufficient one. A robot that is safe by hardware and software design can still be perceived as unsafe. This article focuses on perceived safety in human-robot interaction. We identified six factors that are closely related to perceived safety based on the literature and the insights obtained from our user studies. The identified factors are the context of robot use, comfort, experience and familiarity with robots, trust, the sense of control over the interaction, and transparent and predictable robot actions. We then conducted a literature review to identify the robot-related factors that influence perceived safety. Based on the literature, we propose a taxonomy which includes human-related and robot-related factors. These factors can help researchers to quantify the perceived safety of humans during their interactions with robots. The quantification of perceived safety can yield computational models that would allow mitigating psychological harm.

  • 48.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security2017In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on the measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In the experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot's non-verbal behavior, from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.
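    The internal-consistency claim is typically checked with Cronbach's alpha; a minimal sketch over synthetic questionnaire responses (the actual study data are not reproduced):

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) matrix of scale responses.
        items = np.asarray(items, float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(5)
    base = rng.normal(size=(30, 1))                      # shared latent attitude
    responses = np.clip(np.round(4 + base + 0.5 * rng.normal(size=(30, 6))), 1, 7)
    print(round(cronbach_alpha(responses), 2))           # high value = consistent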

  • 49.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security2017In: Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings / [ed] Kheddar, A.; Yoshida, E.; Ge, S.S.; Suzuki, K.; Cabibihan, J.-J.; Eyssel, F.; He, H., Springer International Publishing, 2017, p. 628-637Conference paper (Refereed)
    Abstract [en]

    The aim of the study presented in this paper is to develop a quantitative evaluation tool of the sense of safety and security for robots in eldercare. By investigating the literature on the measurement of safety and security in human-robot interaction, we propose new evaluation tools. These tools are semantic differential scale questionnaires. In the experimental validation, we used the Pepper robot, programmed to exhibit social behaviors, and constructed four experimental conditions varying the degree of the robot's non-verbal behavior, from no gestures at all to full head and hand movements. The experimental results suggest that both questionnaires (for the sense of safety and the sense of security) have good internal consistency.

    Download full text (pdf)
    An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety and Security
  • 50.
    Akalin, Neziha
    et al.
    Örebro University, School of Science and Technology.
    Kiselev, Andrey
    Örebro University, School of Science and Technology.
    Kristoffersson, Annica
    Örebro University, School of Science and Technology.
    Loutfi, Amy
    Örebro University, School of Science and Technology.
    The Relevance of Social Cues in Assistive Training with a Social Robot2018In: 10th International Conference on Social Robotics, ICSR 2018, Proceedings / [ed] Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A., Castro-González, Á., Springer, 2018, p. 462-471Conference paper (Refereed)
    Abstract [en]

    This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon what is presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.
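    The feature-selection step named in the abstract can be sketched with scikit-learn's recursive feature elimination over a linear SVM (synthetic data stands in for the video-derived facial features):

    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    X = rng.normal(size=(120, 20))               # 20 candidate facial features
    y = (X[:, 3] + X[:, 7] > 0).astype(int)      # toy target driven by 2 features

    selector = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
    print(np.flatnonzero(selector.support_))     # indices of retained features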
