Results 251-300 of 1716
  • 251.
    Bretzner, Lars
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Qualitative Multi-Scale Feature Hierarchies for Object Tracking (2000). In: Journal of Visual Communication and Image Representation, ISSN 1047-3203, E-ISSN 1095-9076, Vol. 11, pp. 115-129. Article in journal (Refereed)
    Abstract [en]

    This paper shows how the performance of feature trackers can be improved by building a view-based object representation consisting of qualitative relations between image structures at different scales. The idea is to track all image features individually, and to use the qualitative feature relations for resolving ambiguous matches and for introducing feature hypotheses whenever image features are mismatched or lost. Compared to more traditional work on view-based object tracking, this methodology has the ability to handle semi-rigid objects and partial occlusions. Compared to trackers based on three-dimensional object models, this approach is much simpler and of a more generic nature. A hands-on example is presented showing how an integrated application system can be constructed from conceptually very simple operations.

  • 252.
    Bretzner, Lars
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Qualitative multiscale feature hierarchies for object tracking (2000). Report (Refereed)
    Abstract [en]

    This paper shows how the performance of feature trackers can be improved by building a hierarchical view-based object representation consisting of qualitative relations between image structures at different scales. The idea is to track all image features individually and to use the qualitative feature relations for avoiding mismatches, for resolving ambiguous matches, and for introducing feature hypotheses whenever image features are lost. Compared to more traditional work on view-based object tracking, this methodology has the ability to handle semirigid objects and partial occlusions. Compared to trackers based on three-dimensional object models, this approach is much simpler and of a more generic nature. A hands-on example is presented showing how an integrated application system can be constructed from conceptually very simple operations.

  • 253.
    Bretzner, Lars
    et al.
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA.
    Qualitative multi-scale feature hierarchies for object tracking (1999). In: Proc. Scale-Space Theories in Computer Vision, Elsevier, 1999, pp. 117-128. Conference paper (Refereed)
    Abstract [en]

    This paper shows how the performance of feature trackers can be improved by building a view-based object representation consisting of qualitative relations between image structures at different scales. The idea is to track all image features individually, and to use the qualitative feature relations for resolving ambiguous matches and for introducing feature hypotheses whenever image features are mismatched or lost. Compared to more traditional work on view-based object tracking, this methodology has the ability to handle semi-rigid objects and partial occlusions. Compared to trackers based on three-dimensional object models, this approach is much simpler and of a more generic nature. A hands-on example is presented showing how an integrated application system can be constructed from conceptually very simple operations.

  • 254.
    Bretzner, Lars
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Structure and Motion Estimation using Sparse Point and Line Correspondences in Multiple Affine Views (1999). Report (Other academic)
    Abstract [en]

    This paper addresses the problem of computing three-dimensional structure and motion from an unknown rigid configuration of points and lines viewed by an affine projection model. An algebraic structure, analogous to the trilinear tensor for three perspective cameras, is defined for configurations of three centered affine cameras. This centered affine trifocal tensor contains 12 non-zero coefficients and involves linear relations between point correspondences and trilinear relations between line correspondences. It is shown how the affine trifocal tensor relates to the perspective trilinear tensor, and how three-dimensional motion can be computed from this tensor in a straightforward manner. A factorization approach is developed to handle point features and line features simultaneously in image sequences, and degenerate feature configurations are analysed. This theory is applied to a specific problem in human-computer interaction of capturing three-dimensional rotations from gestures of a human hand. This application to quantitative gesture analyses illustrates the usefulness of the affine trifocal tensor in a situation where sufficient information is not available to compute the perspective trilinear tensor, while the geometry requires point correspondences as well as line correspondences over at least three views.

  • 255.
    Bretzner, Lars
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Use your hand as a 3-D mouse or relative orientation from extended sequences of sparse point and line correspondences using the affine trifocal tensor (1998). In: Computer Vision — ECCV'98: 5th European Conference on Computer Vision, Freiburg, Germany, June 2–6, 1998, Proceedings, Volume I, Springer Berlin/Heidelberg, 1998, Vol. 1406, pp. 141-157. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of computing three-dimensional structure and motion from an unknown rigid configuration of points and lines viewed by an affine projection model. An algebraic structure, analogous to the trilinear tensor for three perspective cameras, is defined for configurations of three centered affine cameras. This centered affine trifocal tensor contains 12 coefficients and involves linear relations between point correspondences and trilinear relations between line correspondences. It is shown how the affine trifocal tensor relates to the perspective trilinear tensor, and how three-dimensional motion can be computed from this tensor in a straightforward manner. A factorization approach is also developed to handle point features and line features simultaneously in image sequences.

    This theory is applied to a specific problem of human-computer interaction of capturing three-dimensional rotations from gestures of a human hand. A qualitative model is presented, in which three fingers are represented by their position and orientation, and it is shown how three point correspondences (blobs at the finger tips) and three line correspondences (ridge features at the fingers) allow the affine trifocal tensor to be determined, from which the rotation is computed. Besides the obvious application, this test problem illustrates the usefulness of the affine trifocal tensor in a situation where sufficient information is not available to compute the perspective trilinear tensor, while the geometry requires point correspondences as well as line correspondences over at least three views.

  • 256. Bricault, Ivan
    et al.
    Zemiti, Nabil
    Jouniaux, Emilie
    Fouard, Celine
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Taillant, Elise
    Dorandeu, Frederic
    Cinquin, Philippe
    A light puncture robot for CT and MRI interventions (2008). In: IEEE Engineering in Medicine and Biology Magazine, ISSN 0739-5175, E-ISSN 1937-4186, Vol. 27, no. 3, pp. 42-50. Article in journal (Refereed)
  • 257.
    Broberg, Patrik
    Högskolan Väst, Institutionen för ingenjörsvetenskap, Avd för process- och produktutveckling.
    Towards Automation of Non-Destructive Testing of Welds (2011). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    All welding processes can give rise to defects that will weaken the joint and can lead to failure of the welded structure. Because of this, non-destructive testing (NDT) of welds has become increasingly important for ensuring structural integrity as materials become thinner and stronger and welds become smaller, all in order to save material and reduce emissions through lighter constructions.

    Several NDT methods exist for testing welds, and they all have their advantages and disadvantages regarding the types and sizes of defects that are detectable, but also regarding how easily the method can be automated. Several methods were compared using common weld defects to determine which method or methods were best suited for automated NDT of welds. The methods compared were radiography, phased array ultrasound, eddy current, thermography and shearography. Phased array ultrasound was deemed most suitable both for detecting the weld defects used in the comparison and for automation, and was therefore chosen for the continuation of this work. Thermography was shown to be useful for detecting surface defects, something not easily detected using ultrasound. A combination of these techniques will be able to find most weld defects of interest.

    Automation of NDT can be split into two separate areas: mechanisation of the testing and automation of the analysis, each presenting its own difficulties. The problem of mechanising the testing has been solved for simple geometries, but more general welds will require a more advanced system using an industrial robot or similar. Automation of the analysis of phased array ultrasound data consists of detection, sizing, positioning and classification of defects. There are several problems to solve before a completely automatic analysis can be made, including positioning of the data, improving signal quality, segmenting the images and classifying the defects. As a step towards positioning of the data, and thereby easing the analysis, the phase of the signal was studied. It was shown that the phase can be used for finding corners in the image and also improves the ability to position the corner compared to using the amplitude of the signal. Further work will have to be done to improve the signal in order to reliably analyse the data automatically.

  • 258.
    Brolund, Hans
    Linköpings universitet, Institutionen för systemteknik, Bildbehandling. Linköpings universitet, Tekniska högskolan.
    Förbättring av fluoroskopibilder [Enhancement of fluoroscopy images] (2006). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv, translated]

    Fluoroscopy is the term for continuous X-ray imaging of a patient. Since both the patient and the physician are then exposed to continuous X-ray radiation, the radiation dose must be kept low, which leads to noisy images. It is therefore desirable to improve the images through image processing. The enhancement must, however, be performed in real time, so conventional methods cannot be used.

    This thesis investigates how orthogonal so-called derivative operators can be used to improve the readability of fluoroscopy images by means of noise suppression and edge enhancement. Derivative operators are separable, which makes them extremely cheap to compute and easy to incorporate into a scale pyramid. The scale pyramid makes it possible to process structures and details of different sizes separately, while the downsampling mechanism ensures that this decomposition does not noticeably increase the computational burden. The complete solution also introduces structure/noise separation to prevent amplification of, and suppress contributions from, the frequency bands in which a pixel is dominated by noise.

    The results show that noise can indeed be suppressed while edges and lines are well preserved, or enhanced if desired. The oriented filtering does, however, tend to give rise to worm-like structures in the noise, but this can be avoided with suitable parameter settings for the structure/noise separation. The balance between oriented and non-oriented filtering is likewise controllable via a parameter that can be tuned to the needs and preferences of each application.
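
    A minimal multi-scale sketch of the idea described above (attenuate noise-dominated band-pass coefficients, keep or boost structure), written with a plain Laplacian pyramid in NumPy/OpenCV rather than the thesis's separable derivative operators; the pyramid depth, noise level and gain below are illustrative placeholders.

```python
import cv2
import numpy as np

def denoise_pyramid(img, levels=4, noise_sigma=5.0, gain=1.2):
    """Multi-scale noise suppression on a grayscale image: attenuate
    low-amplitude (noise-dominated) band-pass coefficients, boost the rest."""
    img = img.astype(np.float32)
    gauss = [img]
    for _ in range(levels):                      # Gaussian pyramid
        gauss.append(cv2.pyrDown(gauss[-1]))
    bands = []
    for i in range(levels):                      # band-pass (Laplacian) levels
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        bands.append(gauss[i] - up)
    out = gauss[levels]
    for i in reversed(range(levels)):            # reconstruct, band by band
        b = bands[i]
        weight = np.abs(b) / (np.abs(b) + noise_sigma)   # ~0 for noise, ~1 for structure
        out = cv2.pyrUp(out, dstsize=b.shape[1::-1]) + gain * weight * b
    return np.clip(out, 0, 255).astype(np.uint8)
```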

  • 259.
    Brolund, Per
    Linköpings universitet, Institutionen för systemteknik.
    Forensisk längdmätning i bilder [Forensic length measurement in images] (2006). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv, translated]

    This thesis investigates forensic length measurement in images, for example height estimation of people in images related to criminal cases. The problems involved are identified and some of today's existing length measurement methods are discussed.

    The method that best fulfils the requirements set out in the work, i.e. fast case handling, minimal system information, minimal on-site work and accuracy, has been selected, adapted and evaluated. The method is based on finding so-called boundary points and the boundary line of the ground plane in the image, and computing the sought length from a reference length known in the world. The underlying theory is presented and the method is described in detail. Functions, algorithms and a user interface have been implemented in MATLAB. Tests have been carried out to validate the accuracy and parameter dependence of the method. The method turns out to give very good results when the right conditions are given, but has been found to be sensitive to variation in the boundary line. A number of suggestions for improvement are presented in order to develop the method further and stabilise the results.

    The thesis comprises 20 credits and is a compulsory part of the Master of Science programme in Computer Engineering given by Linköping University. The work was carried out at, and on behalf of, the Swedish National Laboratory of Forensic Science (SKL) in Linköping.

  • 260.
    Brorson, Erik
    Uppsala universitet, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Samhällsvetenskapliga fakulteten, Statistiska institutionen.
    Classifying Hate Speech using Fine-tuned Language Models (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Given the explosion in the size of social media, the amount of hate speech is also growing. To efficiently combat this issue we need reliable and scalable machine learning models. Current solutions rely on crowdsourced datasets that are limited in size, or use training data from self-identified hateful communities, which lacks specificity. In this thesis we introduce a novel semi-supervised modelling strategy: the model is first trained on the freely available data from the hateful communities and then fine-tuned to classify hateful tweets from crowdsourced annotated datasets. We show that our model reaches state-of-the-art performance with minimal hyper-parameter tuning.
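
    A minimal sketch of the two-stage idea (adapt a pretrained language model on the large, weakly labelled corpus, then fine-tune it on the small crowdsourced annotations), written against the Hugging Face transformers API rather than the thesis's own code; the model name, full-batch training loop and hyper-parameters are placeholder assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")        # placeholder backbone
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def train_on(texts, labels, epochs=2, lr=2e-5):
    """One tiny full-batch training loop; real code would use a DataLoader."""
    batch = tok(texts, truncation=True, padding=True, max_length=128, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        out = model(**batch)          # cross-entropy loss is computed internally
        out.loss.backward()
        opt.step()
        opt.zero_grad()

# Stage 1: adapt on the large, weakly labelled "hateful community" corpus.
# Stage 2: fine-tune on the small crowdsourced tweet annotations.
train_on(["example text one", "example text two"], [0, 1])
```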

  • 261.
    Brucker, Manuel
    et al.
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Durner, Maximilian
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Ambrus, Rares
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Ctr Autonomous Syst, SE-10044 Stockholm, Sweden.
    Marton, Zoltan Csaba
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Wendt, Axel
    Robert Bosch, Corp Res, St Joseph, MI, USA; Robert Bosch, Corp Res, Gerlingen, Germany.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Ctr Autonomous Syst, SE-10044 Stockholm, Sweden.
    Arras, Kai O.
    Robert Bosch, Corp Res, St Joseph, MI, USA; Robert Bosch, Corp Res, Gerlingen, Germany.
    Triebel, Rudolph
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany; Tech Univ Munich, Dep Comp Sci, Munich, Germany.
    Semantic Labeling of Indoor Environments from 3D RGB Maps (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 1871-1878. Conference paper (Refereed)
    Abstract [en]

    We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.

  • 262.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys.
    Extending Distance Computation - Propagating Derivatives (2010). In: Proceedings SSBA 2010 / [ed] Cris Luengo and Milan Gavrilovic, Uppsala: Centre for Image Analysis, 2010, pp. 39-42. Conference paper (Other academic)
    Abstract [en]

    In this paper we present a technique to extend distance computation algorithms that compute global distances from a series of local updates. This includes algorithms such as the fast marching method (FMM) and the chamfering algorithm for weighted distances. In addition to the value of a distance function or distance map, we derive formulas to compute the gradient and higher order partial derivatives of the distance function within the same framework. The approach is based on symbolic differentiation of the update scheme, which makes it general and straightforward to apply to almost any distance computation scheme. The main result is a novel set of "derivative maps" that are computed along with the ordinary distance maps. Apart from the theory itself, these maps and this technique may be used to compute skeletons and parameterizations such as Riemannian Normal Coordinates and Gauss Normal Coordinates.
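
    A small sketch of the core idea: carry extra quantities along with the distance value during the local updates. Here a two-pass chamfer transform also stores, for every pixel, the direction of the winning update as a crude gradient estimate; the weights and the gradient bookkeeping are illustrative, not the paper's symbolic differentiation scheme.

```python
import numpy as np

def chamfer_with_gradient(seeds, a=1.0, b=np.sqrt(2.0)):
    """Two-pass chamfer distance transform on a boolean seed mask that also
    propagates a crude gradient estimate: the unit step from the winning
    neighbour towards the pixel (the direction of increasing distance)."""
    h, w = seeds.shape
    D = np.where(seeds, 0.0, np.inf)
    G = np.zeros((h, w, 2))                                   # (dy, dx) per pixel

    fwd = [(-1, -1, b), (-1, 0, a), (-1, 1, b), (0, -1, a)]   # neighbours already visited
    bwd = [(1, 1, b), (1, 0, a), (1, -1, b), (0, 1, a)]

    def sweep(offsets, ys, xs):
        for y in ys:
            for x in xs:
                for dy, dx, wgt in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and D[ny, nx] + wgt < D[y, x]:
                        D[y, x] = D[ny, nx] + wgt
                        G[y, x] = np.array([-dy, -dx]) / np.hypot(dy, dx)

    sweep(fwd, range(h), range(w))
    sweep(bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))
    return D, G
```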

  • 263.
    Brun, Anders
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Knutsson, Hans
    Department of Medical Engineering, Linköpings Universitet.
    Geodesic Glyph Warping (2008). In: Proceedings of SSBA, Lund, Sweden: SSBA, 2008. Conference paper (Other academic)
  • 264.
    Brun, Anders
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Knutsson, Hans
    Linköpings Universitet.
    Tensor Glyph Warping: Visualizing Metric Tensor Fields using Riemannian Exponential Maps (2009). In: Visualization and Processing of Tensor Fields: Advances and Perspectives / [ed] David Laidlaw, Joachim Weickert, Berlin Heidelberg: Springer, 2009, XVII, pp. 139-160. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    The Riemannian exponential map, and its inverse the Riemannian logarithm map, can be used to visualize metric tensor fields. In this chapter we first derive the well-known metric sphere glyph from the geodesic equation, where the tensor field to be visualized is regarded as the metric of a manifold. These glyphs capture the appearance of the tensors relative to the coordinate system of the human observer. We then introduce two new concepts for metric tensor field visualization: geodesic spheres and geodesically warped glyphs. These extensions make it possible not only to visualize tensor anisotropy, but also the curvature and change in tensor-shape in a local neighborhood. The framework is based on the exp_p(v_i) and log_p(q) maps, which can be computed by solving a second-order ordinary differential equation (ODE) or by manipulating the geodesic distance function. The latter can be found by solving the eikonal equation, a nonlinear partial differential equation (PDE), or it can be derived analytically for some manifolds. To avoid heavy calculations, we also include first- and second-order Taylor approximations to exp and log. In our experiments, these are shown to be sufficiently accurate to produce glyphs that visually characterize anisotropy, curvature, and shape-derivatives in sufficiently smooth tensor fields where most glyphs are relatively similar in size.
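
    As a worked illustration of the first-order case: to first order the squared geodesic distance from p is d(p, p+v)^2 ≈ v^T G(p) v, so the metric-sphere glyph of radius eps is the ellipse of vectors v with v^T G v = eps^2, i.e. v = eps * G^{-1/2} u over unit vectors u. The sketch below samples that ellipse; it is not the chapter's full geodesic (ODE/eikonal) machinery.

```python
import numpy as np

def metric_sphere_glyph(G, eps=1.0, n=64):
    """Sample the first-order metric-sphere glyph at a point with metric G (2x2 SPD):
    the set of v with v^T G v = eps^2, i.e. v = eps * G^{-1/2} u over unit vectors u."""
    w, V = np.linalg.eigh(G)                              # G = V diag(w) V^T, w > 0
    G_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    unit = np.stack([np.cos(theta), np.sin(theta)])       # 2 x n unit vectors
    return eps * (G_inv_sqrt @ unit)                      # 2 x n points on the glyph
```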

  • 265.
    Brun, Anders
    et al.
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Centre for Image Analysis, SLU, Uppsala, Sweden.
    Martin-Fernandez, Marcos
    Universidad de Valladolid Laboratorio de Procesado de Imagen (LPI), Dept. Teoría de la Señal y Comunicaciones e Ingeniería Telemática Spain.
    Acar, Burac
    Boğaziçi University 5 Electrical & Electronics Engineering Department Istanbul Turkey.
    Munoz-Moreno, Emma
    Universidad de Valladolid Laboratorio de Procesado de Imagen (LPI), Dept. Teoría de la Señal y Comunicaciones e Ingeniería Telemática Spain.
    Cammoun, Leila
    Signal Processing Institute (ITS), Ecole Polytechnique Fédérale Lausanne (EPFL) Lausanne Switzerland.
    Sigfridsson, Andreas
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV. Center for Technology in Medicine, Dept. Señales y Comunicaciones, University of Las Palmas de Gran Canaria, Spain.
    Sosa-Cabrera, Dario
    Center for Technology in Medicine, Dept. Señales y Comunicaciones, University of Las Palmas de Gran Canaria, Spain.
    Svensson, Björn
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Herberthson, Magnus
    Linköpings universitet, Matematiska institutionen, Tillämpad matematik. Linköpings universitet, Tekniska högskolan.
    Knutsson, Hans
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan. Linköpings universitet, Centrum för medicinsk bildvetenskap och visualisering, CMIV.
    Similar Tensor Arrays - A Framework for Storage of Tensor Array Data (2009). In: Tensors in Image Processing and Computer Vision / [ed] Santiago Aja-Fernández, Rodrigo de Luis García, Dacheng Tao, Xuelong Li, Springer Science+Business Media B.V., 2009, 1, pp. 407-428. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    This chapter describes a framework for storage of tensor array data, useful to describe regularly sampled tensor fields. The main component of the framework, called Similar Tensor Array Core (STAC), is the result of a collaboration between research groups within the SIMILAR network of excellence. It aims to capture the essence of regularly sampled tensor fields using a minimal set of attributes and can therefore be used as a “greatest common divisor” and interface between tensor array processing algorithms. This is potentially useful in applied fields like medical image analysis, in particular in Diffusion Tensor MRI, where misinterpretation of tensor array data is a common source of errors. By promoting a strictly geometric perspective on tensor arrays, with a close resemblance to the terminology used in differential geometry, STAC removes ambiguities and guides the user to define all necessary information. In contrast to existing tensor array file formats, it is minimalistic and based on an intrinsic and geometric interpretation of the array itself, without references to other coordinate systems.

  • 266.
    Brun, Anders
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys.
    Martin-Fernandez, Marcos
    Dept. Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Spain.
    Acar, Burak
    Munoz-Moreno, Emma
    Cammoun, Leila
    Signal Processing Institute (ITS), Ecole Polytechnique Fédérale Lausanne (EPFL), Lausanne, Switzerland.
    Sigfridsson, Andreas
    Division of Medical Informatics, Department of Biomedical Engineering, Linköping University, Linköping, Sweden.
    Sosa-Cabrera, Dario
    Center for Technology in Medicine, Dept. Señales y Comunicaciones, University of Las Palmas de Gran Canaria, Spain.
    Svensson, Björn
    Dept. of Biomedical Engineering, Linköpings universitet.
    Herberthson, Magnus
    Dept. of Mathematics, Linköpings universitet.
    Knutsson, Hans
    Dept. of Biomedical Engineering, Linköpings universitet.
    Similar Tensor Arrays: A Framework for Storage of Tensor Array Data (2009). In: Tensors in Image Processing and Computer Vision, London: Springer, 2009, 1, pp. 407-428. Chapter in book, part of anthology (Other academic)
    Abstract [en]

    This chapter describes a framework for storage of tensor array data, useful to describe regularly sampled tensor fields. The main component of the framework, called Similar Tensor Array Core (STAC), is the result of a collaboration between research groups within the SIMILAR network of excellence. It aims to capture the essence of regularly sampled tensor fields using a minimal set of attributes and can therefore be used as a “greatest common divisor” and interface between tensor array processing algorithms. This is potentially useful in applied fields like medical image analysis, in particular in Diffusion Tensor MRI, where misinterpretation of tensor array data is a common source of errors. By promoting a strictly geometric perspective on tensor arrays, with a close resemblance to the terminology used in differential geometry, STAC removes ambiguities and guides the user to define all necessary information. In contrast to existing tensor array file formats, it is minimalistic and based on an intrinsic and geometric interpretation of the array itself, without references to other coordinate systems.

  • 267.
    Brunnström, Kjell
    et al.
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    On Scale and Resolution in the Analysis of Local Image Structure (1990). In: Proc. 1st European Conf. on Computer Vision, 1990, Vol. 427, pp. 3-12. Conference paper (Refereed)
    Abstract [en]

    Focus-of-attention is extremely important in human visual perception. If computer vision systems are to perform tasks in a complex, dynamic world they will have to be able to control processing in a way that is analogous to visual attention in humans.

    In this paper we will investigate problems in connection with foveation, that is examining selected regions of the world at high resolution. We will especially consider the problem of finding and classifying junctions from this aspect. We will show that foveation as simulated by controlled, active zooming in conjunction with scale-space techniques allows robust detection and classification of junctions.

  • 268.
    Brunnström, Kjell
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Scale and Resolution in Active Analysis of Local Image Structure (1990). In: Image and Vision Computing, Vol. 8, pp. 289-296. Article in journal (Refereed)
    Abstract [en]

    Focus-of-attention is extremely important in human visual perception. If computer vision systems are to perform tasks in a complex, dynamic world they will have to be able to control processing in a way that is analogous to visual attention in humans. Problems connected to foveation (examination of selected regions of the world at high resolution) are examined. In particular, the problem of finding and classifying junctions from this aspect is considered. It is shown that foveation as simulated by controlled, active zooming in conjunction with scale-space techniques allows for robust detection and classification of junctions.

  • 269.
    Brunnström, Kjell
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lindeberg, Tony
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Active detection and classification of junctions by foveation with a head-eye system guided by the scale-space primal sketch (1992). In: Computer Vision — ECCV'92: Second European Conference on Computer Vision, Santa Margherita Ligure, Italy, May 19–22, 1992, Proceedings / [ed] Giulio Sandini, Springer Berlin/Heidelberg, 1992, pp. 701-709. Conference paper (Refereed)
    Abstract [en]

    We consider how junction detection and classification can be performed in an active visual system. This is to exemplify that feature detection and classification in general can be done by both simple and robust methods, if the vision system is allowed to look at the world rather than at prerecorded images. We address issues on how to attract the attention to salient local image structures, as well as on how to characterize those.

  • 270.
    Bruno, Barbara
    et al.
    University of Genova, Genova, Italy.
    Chong, Nak Young
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kamide, Hiroko
    Nagoya University, Nagoya, Japan.
    Kanoria, Sanjeev
    Advinia Health Care Limited LTD, London, UK.
    Lee, Jaeryoung
    Chubu University, Kasugai, Japan.
    Lim, Yuto
    Japan Advanced Institute of Science and Technology, Nomi [Ishikawa], Japan.
    Kumar Pandey, Amit
    SoftBank Robotics.
    Papadopoulos, Chris
    University of Bedfordshire, Luton, UK.
    Papadopoulos, Irena
    Middlesex University Higher Education Corporation, London, UK.
    Pecora, Federico
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Saffiotti, Alessandro
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Sgorbissa, Antonio
    University of Genova, Genova, Italy.
    Paving the Way for Culturally Competent Robots: a Position Paper (2017). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A; Suzuki, K; Zollo, L, New York: Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 553-560. Conference paper (Refereed)
    Abstract [en]

    Cultural competence is a well known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state-of-the-art in culturally competent robotics, we identify the key robot capabilities which enable culturally competent behaviours and discuss methodologies for their development and evaluation.

  • 271.
    Bujack, Roxana
    et al.
    Leipzig University, Leipzig, Germany.
    Hotz, Ingrid
    German Aerospace Center, Braunschweig, Germany.
    Scheuermann, Gerik
    Leipzig University, Leipzig, Germany.
    Hitzer, E.
    Christian University, Tokyo, Japan.
    Moment Invariants for 2D Flow Fields via Normalization in Detail (2014). Conference paper (Refereed)
    Abstract [en]

    The analysis of 2D flow data is often guided by the search for characteristic structures with semantic meaning. One way to approach this question is to identify structures of interest by a human observer, with the goal of finding similar structures in the same or other datasets. The major challenges related to this task are to specify the notion of similarity and define respective pattern descriptors. While the descriptors should be invariant to certain transformations, such as rotation and scaling, they should provide a similarity measure with respect to other transformations, such as deformations. In this paper, we propose to use moment invariants as pattern descriptors for flow fields. Moment invariants are one of the most popular techniques for the description of objects in the field of image recognition. They have recently also been applied to identify 2D vector patterns, limited to the directional properties of flow fields. Moreover, we discuss which transformations should be considered for the application to flow analysis. In contrast to previous work, we follow the intuitive approach of moment normalization, which results in a complete and independent set of translation, rotation, and scaling invariant flow field descriptors. They also make it possible to distinguish flow features with different velocity profiles. We apply the moment invariants in a pattern recognition algorithm to a real world dataset and show that the theoretical results can be extended to discrete functions in a robust way.

  • 272.
    Burdakov, Oleg
    et al.
    Linköpings universitet, Matematiska institutionen, Optimeringslära. Linköpings universitet, Tekniska högskolan.
    Doherty, Patrick
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Kvarnström, Jonas
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Local Search for Hop-constrained Directed Steiner Tree Problem with Application to UAV-based Multi-target Surveillance (2014). Report (Other academic)
    Abstract [en]

    We consider the directed Steiner tree problem (DSTP) with a constraint on the total number of arcs (hops) in the tree. This problem is known to be NP-hard, and therefore, only heuristics can be applied in the case of its large-scale instances. For the hop-constrained DSTP, we propose local search strategies aimed at improving any heuristically produced initial Steiner tree. They are based on solving a sequence of hop-constrained shortest path problems for which we have recently developed efficient label correcting algorithms. The presented approach is applied to finding suitable 3D locations where unmanned aerial vehicles (UAVs) can be placed to relay information gathered in multi-target monitoring and surveillance. The efficiency of our algorithms is illustrated by results of numerical experiments involving problem instances with up to 40 000 nodes and up to 20 million arcs.
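
    One building block mentioned above is the hop-constrained shortest path. A minimal dynamic-programming sketch (a plain Bellman-Ford-style recursion over the number of arcs used, not the authors' label correcting algorithms):

```python
import math

def hop_constrained_shortest_paths(n, arcs, source, max_hops):
    """Return, for every node v, the cost of the cheapest path source -> v
    using at most max_hops arcs.  arcs is a list of (u, v, cost) in a
    directed graph with nodes 0..n-1."""
    prev = [math.inf] * n
    prev[source] = 0.0
    for _ in range(max_hops):
        cur = prev[:]                           # allowing one more arc never hurts
        for u, v, c in arcs:
            if prev[u] + c < cur[v]:
                cur[v] = prev[u] + c
        prev = cur
    return prev

# Cheapest cost to every node with at most 2 hops from node 0.
arcs = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 5.0)]
print(hop_constrained_shortest_paths(4, arcs, source=0, max_hops=2))
```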

  • 273.
    Burdakov, Oleg
    et al.
    Linköpings universitet, Matematiska institutionen, Optimeringslära. Linköpings universitet, Tekniska högskolan.
    Doherty, Patrick
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Kvarnström, Jonas
    Linköpings universitet, Institutionen för datavetenskap, Artificiell intelligens och integrerade datorsystem. Linköpings universitet, Tekniska högskolan.
    Optimal Scheduling for Replacing Perimeter Guarding Unmanned Aerial Vehicles (2014). Report (Other academic)
    Abstract [en]

    Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.

  • 274.
    Burenius, Magnus
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Human 3D Pose Estimation in the Wild: using Geometrical Models and Pictorial Structures (2013). Doctoral thesis, comprehensive summary (Other academic)
  • 275.
    Burenius, Magnus
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sullivan, Josephine
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Motion Capture from Dynamic Orthographic Cameras (2011). In: 4DMOD - 1st IEEE Workshop on Dynamic Shape Capture and Analysis, 2011. Conference paper (Refereed)
    Abstract [en]

    We present an extension to the scaled orthographic camera model. It deals with dynamic cameras looking at faraway objects. The camera is allowed to change focal length and translate and rotate in 3D. The model we derive says that this motion can be treated as scaling, translation and rotation in a 2D image plane. It is valid if the camera and its target move around in two separate regions that are small compared to the distance between them. We show two applications of this model to motion capture applications at large distances, i.e. outside a studio, using the affine factorization algorithm. The model is used to motivate theoretically why the factorization can be carried out in a single batch step, when having both dynamic cameras and a dynamic object. Furthermore, the model is used to motivate how the position of the object can be reconstructed by measuring the virtual 2D motion of the cameras. For testing we use videos from a real football game and reconstruct the 3D motion of a footballer as he scores a goal.
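
    The affine factorization referred to above is the classical Tomasi-Kanade construction: stack the centred 2D tracks into a measurement matrix and take a rank-3 SVD to recover camera and shape factors up to an affine ambiguity. A sketch of that single batch step (not the paper's full pipeline with dynamic cameras):

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade style factorization.  W is the 2F x P measurement matrix of
    P points tracked over F frames (x- and y-rows stacked per frame).  Returns
    affine cameras M (2F x 3) and shape S (3 x P) up to an affine ambiguity
    M -> M A, S -> A^{-1} S, plus the per-row image translations t."""
    t = W.mean(axis=1, keepdims=True)                 # centre each image row
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]          # enforce the rank-3 model
    M = U3 * np.sqrt(s3)                              # motion (camera) factor
    S = np.sqrt(s3)[:, None] * Vt3                    # structure factor
    return M, S, t
```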

  • 276.
    Burenius, Magnus
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sullivan, Josephine
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Halvorsen, Kjartan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Human 3D Motion Computation from a varying Number of Cameras (2011). In: Image Analysis, Springer Berlin / Heidelberg, 2011, pp. 24-35. Conference paper (Refereed)
    Abstract [en]

    This paper focuses on how the accuracy of marker-less human motion capture is affected by the number of camera views used. Specifically, we compare the 3D reconstructions calculated from single and multiple cameras. We perform our experiments on data consisting of video from multiple cameras synchronized with ground truth 3D motion, obtained from a motion capture session with a professional footballer. The error is compared for the 3D reconstructions, of diverse motions, estimated using the manually located image joint positions from one, two or three cameras. We also present a new bundle adjustment procedure using regression splines to impose weak prior assumptions about human motion, temporal smoothness and joint angle limits, on the 3D reconstruction. The results show that even under close to ideal circumstances the monocular 3D reconstructions contain visual artifacts not present in the multiple view case, indicating accurate and efficient marker-less human motion capture requires multiple cameras.

  • 277.
    Burger, Birgitta
    et al.
    Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Communication of Musical Expression by Means of Mobile Robot Gestures (2010). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 3, no. 1, pp. 109-118. Article in journal (Refereed)
    Abstract [en]

    We developed a robotic system that can behave in an emotional way. A 3-wheeled simple robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements have been compiled and programmed based on literature about emotion in music, musicians’ movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness have been implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.

  • 278.
    Butepage, Judith
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Black, Michael J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Deep representation learning for human motion prediction and classification (2017). In: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), IEEE, 2017, pp. 1591-1599. Conference paper (Refereed)
    Abstract [en]

    Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
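
    A compact PyTorch sketch of the generic setup described above, an encoding-decoding network that maps a window of past poses to a window of future poses; the layer sizes, window lengths and pose dimension are placeholders, and the paper evaluates several such architectures rather than this exact one.

```python
import torch
import torch.nn as nn

class MotionEncoderDecoder(nn.Module):
    """Predict t_out future poses from t_in past poses (d_pose values per pose).
    All sizes are placeholders; the paper compares several such architectures."""
    def __init__(self, d_pose=54, t_in=10, t_out=10, d_latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(t_in * d_pose, 512), nn.ReLU(),
            nn.Linear(512, d_latent), nn.ReLU(),          # learned motion representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(d_latent, 512), nn.ReLU(),
            nn.Linear(512, t_out * d_pose),
        )
        self.t_out, self.d_pose = t_out, d_pose

    def forward(self, past):                              # past: (batch, t_in, d_pose)
        z = self.encoder(past.flatten(1))
        return self.decoder(z).view(-1, self.t_out, self.d_pose)

# Training sketch: minimise the error between predicted and true future windows.
model = MotionEncoderDecoder()
past, future = torch.randn(8, 10, 54), torch.randn(8, 10, 54)
loss = nn.MSELoss()(model(past), future)
loss.backward()
```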

  • 279.
    Butepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, CSC, Robot Percept & Learning Lab RPL, Stockholm, Sweden.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, CSC, Robot Percept & Learning Lab RPL, Stockholm, Sweden.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, CSC, Robot Percept & Learning Lab RPL, Stockholm, Sweden.
    Anticipating many futures: Online human motion prediction and generation for human-robot interaction (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 4563-4570. Conference paper (Refereed)
    Abstract [en]

    Fluent and safe interactions of humans and robots require both partners to anticipate the others' actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold.

  • 280.
    Byström, Anna
    et al.
    Swedish University of Agricultural Sciences, Department of Anatomy, Physiology and Biochemistry.
    Roepstorff, Lars
    Swedish University of Agricultural Sciences, Department of Anatomy, Physiology and Biochemistry.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Image Analysis of Saddle Pressure Data (2011). Conference paper (Other academic)
  • 281.
    Bäck, David
    Linköpings universitet, Institutionen för medicinsk teknik, Medicinsk informatik. Linköpings universitet, Tekniska högskolan.
    Neural Network Gaze Tracking using Web Camera (2006). Independent thesis, Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Gaze tracking means to detect and follow the direction in which a person looks. This can be used in for instance human-computer interaction. Most existing systems illuminate the eye with IR-light, possibly damaging the eye. The motivation of this thesis is to develop a truly non-intrusive gaze tracking system, using only a digital camera, e.g. a web camera.

    The approach is to detect and track different facial features, using varying image analysis techniques. These features will serve as inputs to a neural net, which will be trained with a set of predetermined gaze tracking series. The output is coordinates on the screen.

    The evaluation is done with a measure of accuracy and the result is an average angular deviation of two to four degrees, depending on the quality of the image sequence. To get better and more robust results, a higher image quality from the digital camera is needed.
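
    A tiny sketch of the final regression step (a neural network mapping extracted eye/face features to screen coordinates), using scikit-learn's MLPRegressor; the feature vectors are assumed to come from the image-analysis stage and are random placeholders here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row of eye/face features per frame (assumed to come from the image
# analysis stage; random placeholders here).  Y: known on-screen gaze targets.
X = np.random.rand(500, 12)
Y = np.random.rand(500, 2) * [1280, 1024]

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
net.fit(X, Y)
screen_xy = net.predict(X[:1])        # predicted gaze point for a new frame
```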

  • 282.
    Bäckström, Nils
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Designing a Lightweight Convolutional Neural Network for Onion and Weed Classification (2018). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The data set for this project consists of images containing onion and weed samples. It is of interest to investigate whether Convolutional Neural Networks can learn to classify the crops correctly as a step towards automating weed removal in farming. The aim of this project is to solve a classification task involving few classes with relatively few training samples (a few hundred per class). Usually, small data sets are prone to overfitting, meaning that the networks generalize badly to unseen data. It is also of interest to solve the problem using small networks with low computational complexity, since inference speed is important and memory is often limited on deployable systems. This work shows how transfer learning, network pruning and quantization can be used to create lightweight networks whose classification accuracy exceeds that of the same architecture trained from scratch. Using these techniques, a SqueezeNet v1.1 architecture (which is already a relatively small network) can reach 1/10th of the original model size and less than half the MAC operations during inference, while still maintaining a higher classification accuracy compared to a SqueezeNet v1.1 trained from scratch (96.9±1.35% vs 92.0±3.11% on 5-fold cross validation).
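
    A torchvision sketch of the transfer-learning step described above: start from an ImageNet-pretrained SqueezeNet v1.1 and replace the final classifier convolution with a fresh two-class head before fine-tuning; the pruning and quantization steps are not shown, and the batch below is a placeholder.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                                   # onion vs. weed
model = models.squeezenet1_1(pretrained=True)     # ImageNet weights as the starting point

# SqueezeNet's classifier ends in a 1x1 convolution over 512 channels;
# swap it for a fresh two-class head and fine-tune.
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
model.num_classes = num_classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a placeholder batch of 224x224 RGB crops.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```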

  • 283.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma systen, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
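
    A bare-bones 2D RRT sketch of the waypoint-selection part; in the paper these waypoints are then handed to the CBP controller that reactively translates and rotates the formation, which is not reproduced here. The sampling region, step length and collision check are placeholders.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, bounds=(0.0, 10.0)):
    """Bare-bones 2D RRT: grow a tree from start and return a waypoint list to goal.
    is_free(p) is the user-supplied collision check (a placeholder below)."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else \
            (random.uniform(*bounds), random.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d < 1e-9:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:            # reached: backtrack the waypoints
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# A single axis-aligned obstacle as a placeholder environment.
waypoints = rrt((0.0, 0.0), (9.0, 9.0),
                is_free=lambda p: not (3.0 < p[0] < 5.0 and 0.0 < p[1] < 7.0))
```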

  • 284.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Bundle adjustment with and without damping (2013). In: Photogrammetric Record, ISSN 0031-868X, E-ISSN 1477-9730, Vol. 28, no. 144, pp. 396-415. Article in journal (Refereed)
    Abstract [en]

    The least squares adjustment (LSA) method is studied as an optimisation problem and shown to be equivalent to the undamped Gauss-Newton (GN) optimisation method. Three problem-independent damping modifications of the GN method are presented: the line-search method of Armijo (GNA); the Levenberg-Marquardt algorithm (LM); and Levenberg-Marquardt-Powell (LMP). Furthermore, an additional problem-specific "veto" damping technique, based on the chirality condition, is suggested. In a perturbation study on a terrestrial bundle adjustment problem the GNA and LMP methods with veto damping can increase the size of the pull-in region compared to the undamped method; the LM method showed less improvement. The results suggest that damped methods can, in many cases, provide a solution where undamped methods fail and should be available in any LSA software package. Matlab code for the algorithms discussed is available from the authors.
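
    A tiny numerical sketch of the contrast the paper studies: the undamped Gauss-Newton step versus the Levenberg-Marquardt step, where the damping term lambda*I shrinks the step towards steepest descent; this is generic nonlinear least squares, not the photogrammetric bundle adjustment itself.

```python
import numpy as np

def gauss_newton_step(J, r):
    """Undamped Gauss-Newton step for residuals r(x) with Jacobian J."""
    return np.linalg.solve(J.T @ J, -J.T @ r)

def levenberg_marquardt_step(J, r, lam):
    """Damped step: lam = 0 gives Gauss-Newton, a large lam approaches a short
    steepest-descent step, which enlarges the pull-in region."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)

# A typical LM loop accepts the step if the residual norm decreases (and then
# reduces lam); otherwise it rejects the step and increases lam.
```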

  • 285.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Camera Calibration using the Damped Bundle Adjustment Toolbox (2014). In: ISPRS Annals - Volume II-5, 2014: ISPRS Technical Commission V Symposium, 23–25 June 2014, Riva del Garda, Italy / [ed] F. Remondino and F. Menna, Copernicus GmbH, 2014, Vol. II-5, pp. 89-96. Conference paper (Refereed)
    Abstract [en]

    Camera calibration is one of the fundamental photogrammetric tasks. The standard procedure is to apply an iterative adjustment to measurements of known control points. The iterative adjustment needs initial values of internal and external parameters. In this paper we investigate a procedure where only one parameter, the focal length, is given a specific initial value. The procedure is validated using the freely available Damped Bundle Adjustment Toolbox on five calibration data sets using varying narrow- and wide-angle lenses. The results show that the Gauss-Newton-Armijo and Levenberg-Marquardt-Powell bundle adjustment methods implemented in the toolbox converge even if the initial values of the focal length are between 1/2 and 32 times the true focal length, and even if the parameters are highly correlated. Standard statistical analysis methods in the toolbox enable manual selection of the lens distortion parameters to estimate, something not available in other camera calibration toolboxes. A standardised camera calibration procedure that does not require any information about the camera sensor or focal length is suggested based on the convergence results. The toolbox source and data sets used in this paper are available from the authors.

  • 286.
    Börlin, Niclas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Experiments with Metadata-derived Initial Values and Linesearch Bundle Adjustment in Architectural Photogrammetry (2013). Conference paper (Refereed)
    Abstract [en]

    According to the Waldhäusl and Ogleby (1994) "3x3 rules", a well-designed close-range architectural photogrammetric project should include a sketch of the project site with the approximate position and viewing direction of each image. This orientation metadata is important to determine which part of the object each image covers. In principle, the metadata could be used as initial values for the camera external orientation (EO) parameters. However, this has rarely been used, partly due to convergence problems for the bundle adjustment procedure.

    In this paper we present a photogrammetric reconstruction pipeline based on classical methods and investigate if and how the linesearch bundle algorithms of Börlin et al. (2004) and/or metadata can be used to aid the reconstruction process in architectural photogrammetry when the classical methods fail. The primary initial values for the bundle are calculated by the five-point algorithm by Nistér (Stewénius et al., 2006). Should the bundle fail, initial values derived from metadata are calculated and used for a second bundle attempt.

    The pipeline was evaluated on an image set of the INSA building in Strasbourg. The data set includes mixed convex and non-convex subnetworks and a combination of manual and automatic measurements.

    The results show that, in general, the classical bundle algorithm with five-point initial values worked well. However, in the cases where it did fail, linesearch bundle and/or metadata initial values did help. The presented approach is useful for solving EO problems when automatic orientation processes fail, as well as for maintaining a link between the metadata describing how the project was planned and the network as it was actually reconstructed.
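
    As an illustration of how such orientation metadata could seed a bundle adjustment, the sketch below builds rough external orientation (EO) values from an approximate camera position and viewing direction. It is not the authors' pipeline; the function name, the world-Z-up convention and the camera axis convention are assumptions made for the example.

        import numpy as np

        def eo_from_metadata(position, view_dir, world_up=(0.0, 0.0, 1.0)):
            """Approximate world-to-camera rotation R and translation t from a sketched
            camera position and viewing direction (one common axis convention)."""
            z = np.asarray(view_dir, float)
            z /= np.linalg.norm(z)                  # camera z-axis along the viewing direction
            x = np.cross(z, np.asarray(world_up, float))
            x /= np.linalg.norm(x)                  # camera x-axis ("right")
            y = np.cross(z, x)                      # completes a right-handed frame
            R = np.vstack([x, y, z])                # rows are camera axes in world coordinates
            t = -R @ np.asarray(position, float)    # so that X_cam = R @ X_world + t
            return R, t

        # Example: a camera 20 m south of a facade, 1.6 m above ground, looking due north.
        R, t = eo_from_metadata(position=[0.0, -20.0, 1.6], view_dir=[0.0, 1.0, 0.0])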

  • 287.
    Caccamo, Sergio
    et al.
    KTH.
    Ataer-Cansizoglu, Esra
    Taguchi, Y.
    Joint 3D reconstruction of a static scene and moving objects (2018). In: Proceedings - 2017 International Conference on 3D Vision, 3DV 2017, Institute of Electrical and Electronics Engineers (IEEE), 2018, pp. 677-685. Conference paper (Refereed)
    Abstract [en]

    We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene. An RGB-D frame is represented as a collection of features, which are points and planes. We classify the features into static and dynamic regions and grow separate maps, static and object maps, for each of them. To robustly classify the features in each frame, we fuse multiple RANSAC-based registration results obtained by registering different groups of the features to different maps, including (1) all the features to the static map, (2) all the features to each object map, and (3) subsets of the features, each forming a segment, to each object map. This multi-group registration approach is designed to overcome the following challenges: scenes can be dominated by static regions, making object tracking more difficult, and moving objects might have larger pose variations between frames than the static regions. We show qualitative results from indoor scenes with objects of various shapes. The technique enables on-the-fly object model generation to be used for robotic manipulation.
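
    The core operation repeated over the different feature groups above is a RANSAC estimation of a rigid transform from matched 3D features. The following sketch shows that step for point features only; it is a simplified stand-in, not the authors' implementation, and all names and thresholds are hypothetical.

        import numpy as np

        def rigid_from_correspondences(P, Q):
            """Least-squares rigid transform (Kabsch) mapping point set P onto Q."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def ransac_rigid(P, Q, iters=500, thresh=0.02, seed=0):
            """RANSAC over minimal 3-point samples; keeps the hypothesis with most inliers."""
            rng = np.random.default_rng(seed)
            best = np.zeros(len(P), bool)
            for _ in range(iters):
                idx = rng.choice(len(P), size=3, replace=False)
                R, t = rigid_from_correspondences(P[idx], Q[idx])
                inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
                if inliers.sum() > best.sum():
                    best = inliers
            R, t = rigid_from_correspondences(P[best], Q[best])   # refit on all inliers
            return R, t, best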

  • 288.
    Caccamo, Sergio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Güler, Püren
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 530-537. Conference paper (Refereed)
    Abstract [en]

    Exploring and modeling heterogeneous elastic surfaces requires multiple interactions with the environment and a complex selection of physical material parameters. The most common approaches model deformable properties from sets of offline observations using computationally expensive force-based simulators. In this work we present an online probabilistic framework for autonomous estimation of a deformability distribution map of heterogeneous elastic surfaces from few physical interactions. The method takes advantage of Gaussian Processes for constructing a model of the environment geometry surrounding a robot. A fast Position-based Dynamics simulator uses focused environmental observations in order to model the elastic behavior of portions of the environment. Gaussian Process Regression maps the local deformability on the whole environment in order to generate a deformability distribution map. We show experimental results using a PrimeSense camera, a Kinova Jaco2 robotic arm and an Optoforce sensor on different deformable surfaces.
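
    A minimal sketch of the mapping step described above, using scikit-learn's Gaussian Process regression to spread a few probed deformability estimates over the whole surface; the probe locations, values and kernel parameters are invented for illustration and are not from the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # (x, y) surface locations probed by the robot and the local deformability
        # estimated there (e.g. displacement per unit force) - hypothetical values.
        probe_xy = np.array([[0.1, 0.2], [0.4, 0.7], [0.8, 0.3], [0.6, 0.9]])
        deformability = np.array([0.02, 0.15, 0.05, 0.12])

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-4),
                                      normalize_y=True)
        gp.fit(probe_xy, deformability)

        # Dense deformability distribution map over the surface; the predictive standard
        # deviation indicates where further physical interaction would be most informative.
        gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
        mean, std = gp.predict(np.column_stack([gx.ravel(), gy.ravel()]), return_std=True)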

  • 289.
    Caccamo, Sergio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Parasuraman, Ramviyas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Båberg, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extending a UGV Teleoperation FLC Interface with Wireless Network Connectivity Information (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 4305-4312. Conference paper (Refereed)
    Abstract [en]

    Teleoperated Unmanned Ground Vehicles (UGVs) are expected to play an important role in future search and rescue operations. In such tasks, two factors are crucial for successful mission completion: operator situational awareness and robust network connectivity between operator and UGV. In this paper, we address both these factors by extending a new Free Look Control (FLC) operator interface with a graphical representation of the Radio Signal Strength (RSS) gradient at the UGV location. We also provide a new way of estimating this gradient using multiple receivers with directional antennas. The proposed approach allows the operator to stay focused on the video stream providing the crucial situational awareness, while controlling the UGV to complete the mission without moving into areas with dangerously low wireless connectivity. The approach is implemented on a KUKA youBot using commercial off-the-shelf components. We provide experimental results showing how the proposed RSS gradient estimation method performs better than a difference approximation using omnidirectional antennas, and verify that it is indeed useful for predicting the RSS development along a UGV trajectory. We also evaluate the proposed combined approach in terms of accuracy, precision, sensitivity and specificity.
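
    The paper estimates the RSS gradient with directional antennas; as a simpler point of comparison, the sketch below fits a local linear model RSS(x, y) ~ a + b*x + c*y to recent measurements along the UGV path, so that (b, c) approximates the gradient. This is an illustrative alternative, not the method proposed in the paper.

        import numpy as np

        def rss_gradient(positions, rss_dbm):
            """positions: (N, 2) recent UGV positions; rss_dbm: (N,) RSS samples in dBm."""
            A = np.column_stack([np.ones(len(positions)), positions])   # columns [1, x, y]
            coeffs, *_ = np.linalg.lstsq(A, rss_dbm, rcond=None)
            return coeffs[1:]                     # (dRSS/dx, dRSS/dy)

        # Example: signal strength falling off towards +x.
        pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        rss = np.array([-40.0, -46.0, -41.0, -47.0])
        print(rss_gradient(pos, rss))             # roughly [-6, -1] dB per metre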

  • 290.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K..
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K..
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K..
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linkoping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K..
    Belpaeme, Tony
    University of Plymouth, U.K..
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, UK..
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K..
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study (2018). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748. Article in journal (Refereed)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, and further assesses the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and to enhance the quality of the digital description of children with ASD. The research outcomes pave the way towards a feasible machine-assisted system for behaviour assessment.

  • 291.
    Cammoun, Leila
    et al.
    Signal Processing Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland.
    Castaño-Moraga, Carlos Alberto
    Department of Signals and Communications, University of Las Palmas de Gran Canaria, Spain.
    Muñoz-Moreno, Emma
    Univ. de Valladolid, Spain.
    Sosa-Cabrera, Dario
    Canary Islands Institute of Technology, Spain.
    Acar, Burak
    Electrical-Electronics Eng. Dept, Bogazici University, Istanbul, Turkey.
    Brun, Anders
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Centrum för bildanalys. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Datoriserad bildanalys.
    Knutsson, Hans
    Dept. of medical engineering, Linköpings universitet.
    Thiran, Jean-Philippe
    Signal Processing Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland.
    A Review of Tensors and Tensor Signal Processing (2009). In: Tensors in Image Processing and Computer Vision / [ed] Santiago Aja-Fernandez, Rodrigo de Luis Garcia, Dacheng Tao, Xuelong Li, London: Springer, 2009, 1, pp. 1-32. Book chapter (Other academic)
    Abstract [en]

    Tensors have been used broadly in mathematics and physics, since they generalize scalars and vectors and make it possible to represent more complex properties. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, a great deal of work has been devoted to tensor calculus, which is more complex than scalar or vector calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several quantities, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke's law, where a fourth-order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of the fluid. An electromagnetic tensor is also defined, which simplifies the notation of Maxwell's equations. But tensors are not confined to physics and mathematics. They have been used, for instance, in medical imaging, where two applications stand out: diffusion tensor imaging, which represents how molecules diffuse inside tissues and is widely used for brain imaging; and tensorial elastography, which computes the strain and vorticity tensors to analyze tissue properties. Tensors have also been used in computer vision to provide information about local structure or to define anisotropic image filters.
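
    As a concrete instance of the computer-vision use mentioned at the end of the abstract, the sketch below computes the 2D structure tensor of an image; its eigenvalues distinguish flat regions, edges and corners. The example is illustrative and not taken from the chapter.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def structure_tensor(image, sigma=2.0):
            """Per-pixel 2x2 structure tensor, smoothed over a Gaussian neighbourhood."""
            Iy, Ix = np.gradient(image.astype(float))       # gradients along rows, columns
            Jxx = gaussian_filter(Ix * Ix, sigma)
            Jxy = gaussian_filter(Ix * Iy, sigma)
            Jyy = gaussian_filter(Iy * Iy, sigma)
            # Eigenvalues lam1 >= lam2 of the symmetric tensor at each pixel.
            root = np.sqrt(((Jxx - Jyy) / 2.0) ** 2 + Jxy ** 2)
            lam1 = (Jxx + Jyy) / 2.0 + root
            lam2 = (Jxx + Jyy) / 2.0 - root
            return lam1, lam2    # both large: corner; one large: edge; both small: flat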

  • 292.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Schaffernicht, Erik
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Davison, Andrew J.
    Department of Computing, Imperial College London, London, United Kingdom.
    Compressed Voxel-Based Mapping Using Unsupervised Learning (2017). In: Robotics, E-ISSN 2218-6581, Vol. 6, no. 3, article id 15. Article in journal (Refereed)
    Abstract [en]

    In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content.
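
    The first of the two compression methods compared above can be sketched in a few lines: TSDF blocks are flattened to vectors and projected onto a PCA-derived low-dimensional basis. The block size, volume contents and number of components below are placeholders, not values from the paper.

        import numpy as np

        block = 16                                    # 16x16x16 voxel blocks
        tsdf = np.random.rand(64, 64, 64) * 2 - 1     # stand-in for a real TSDF volume

        # Flatten every block into a training vector.
        vecs = np.array([tsdf[i:i+block, j:j+block, k:k+block].ravel()
                         for i in range(0, 64, block)
                         for j in range(0, 64, block)
                         for k in range(0, 64, block)])

        mean = vecs.mean(axis=0)
        _, _, Vt = np.linalg.svd(vecs - mean, full_matrices=False)
        basis = Vt[:32]                               # keep 32 principal components per block

        codes = (vecs - mean) @ basis.T               # compressed per-block descriptors
        recon = codes @ basis + mean                  # lossy reconstruction
        print(np.mean((recon - vecs) ** 2))           # reconstruction error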

  • 293.
    Canelhas, Daniel R.
    et al.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Stoyanov, Todor
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    Lilienthal, Achim J.
    Örebro universitet, Institutionen för naturvetenskap och teknik.
    From Feature Detection in Truncated Signed Distance Fields to Sparse Stable Scene Graphs (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, Vol. 1, no. 2, pp. 1148-1155. Article in journal (Refereed)
    Abstract [en]

    With the increased availability of GPUs and multicore CPUs, volumetric map representations are an increasingly viable option for robotic applications. A particularly important representation is the truncated signed distance field (TSDF) that is at the core of recent advances in dense 3D mapping. However, there is relatively little literature exploring the characteristics of 3D feature detection in volumetric representations. In this paper we evaluate the performance of features extracted directly from a 3D TSDF representation. We compare the repeatability of integral invariant features, specifically designed for volumetric images, to the 3D extensions of Harris and Shi & Tomasi corners. We also study the impact of different methods for obtaining gradients for their computation. We motivate our study with an example application for building sparse stable scene graphs, and present an efficient GPU-parallel algorithm to obtain the graphs, made possible by the combination of TSDF and 3D feature points. Our findings show that while the 3D extensions of 2D corner detection perform as expected, integral invariants have shortcomings when applied to discrete TSDFs. We conclude with a discussion of the causes of these failures, which sheds light on possible mitigation strategies.
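
    For orientation, the sketch below shows one common 3D extension of the Harris corner response, computed directly on a voxelised TSDF from its smoothed gradient outer products. It is a plain NumPy/SciPy illustration, not the GPU-parallel implementation evaluated in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def harris_3d(tsdf, sigma=1.5, k=0.04):
            gx, gy, gz = np.gradient(tsdf.astype(float))
            # Smoothed second-moment (structure) tensor per voxel (6 unique components).
            S = {key: gaussian_filter(a * b, sigma)
                 for key, a, b in [("xx", gx, gx), ("yy", gy, gy), ("zz", gz, gz),
                                   ("xy", gx, gy), ("xz", gx, gz), ("yz", gy, gz)]}
            trace = S["xx"] + S["yy"] + S["zz"]
            det = (S["xx"] * (S["yy"] * S["zz"] - S["yz"] ** 2)
                   - S["xy"] * (S["xy"] * S["zz"] - S["yz"] * S["xz"])
                   + S["xz"] * (S["xy"] * S["yz"] - S["yy"] * S["xz"]))
            return det - k * trace ** 3        # corner candidates are local maxima of this response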

  • 294.
    Carlsson, Mattias
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Neural Networks for Semantic Segmentation in the Food Packaging Industry (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Industrial applications of computer vision often utilize traditional image processing techniques whereas state-of-the-art methods in most image processing challenges are almost exclusively based on convolutional neural networks (CNNs). Thus there is a large potential for improving the performance of many machine vision applications by incorporating CNNs.

    One such application is the classification of juice boxes with straws, where the baseline solution uses classical image processing techniques on depth images to reject or accept juice boxes. This thesis aims to investigate how CNNs perform on the task of semantic segmentation (pixel-wise classification) of said images and whether the result can be used to increase classification performance.

    A drawback of CNNs is that they usually require large amounts of labelled data for training to be able to generalize and learn anything useful. As labelled data is hard to come by, two ways to get cheap data are investigated, one being synthetic data generation and the other being automatic labelling using the baseline solution.

    The implemented network performs well on semantic segmentation, even when trained on synthetic data only, though the performance increases with the ratio of real (automatically labelled) to synthetic images. The classification task is very sensitive to small errors in semantic segmentation and the results are therefore not as good as the baseline solution. It is suspected that the drop in performance between validation and test data is due to a domain shift between the data sets, e.g. variations in data collection and straw and box type, and fine-tuning to the target domain could definitely increase performance.

    When the network is trained on synthetic data only, the domain shift is even larger and the classification performance is next to useless. It is likely that the results could be improved by using more advanced data generation, e.g. a generative adversarial network (GAN), or more rigorous modelling of the data.

  • 295. Carlsson, Stefan
    et al.
    Azizpour, Hossein
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sharif Razavian, Ali
    Sullivan, Josephine
    Smith, Kevin
    The preimage of rectifier network activities (2017). Conference paper (Refereed)
    Abstract [en]

    We give a procedure for explicitly computing the complete preimage of activities of a layer in a rectifier network with fully connected layers, from knowledge of the weights in the network. The most general characterisation of preimages is as piecewise linear manifolds in the input space with possibly multiple branches. This work therefore complements previous demonstrations of preimages obtained by heuristic optimisation and regularization algorithms (Mahendran & Vedaldi, 2015; 2016). We are presently evaluating the procedure empirically, both its ability to extract complete preimages and the general structure of preimage manifolds.

  • 296.
    Castellano, Ginevra
    et al.
    InfoMus Lab, DIST, University of Genova.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Camurri, Antonio
    InfoMus Lab, DIST, University of Genova.
    Volpe, Gualtiero
    InfoMus Lab, DIST, University of Genova.
    Expressive Control of Music and Visual Media by Full-Body Movement (2007). In: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07, New York, NY, USA: ACM Press, 2007, pp. 390-391. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe a system which allows users to use their full body for controlling, in real time, the generation of expressive audio-visual feedback. The system extracts expressive motion features from the user's full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.

  • 297. Castellano, Ginevra
    et al.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Camurri, Antonio
    Volpe, Gualtiero
    User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements (2007). In: Affective Computing and Intelligent Interaction / [ed] Paiva, Ana; Prada, Rui; Picard, Rosalind W., Berlin / Heidelberg: Springer Berlin/Heidelberg, 2007, pp. 501-510. Book chapter (Refereed)
    Abstract [en]

    In this paper we describe a system allowing users to express themselves through their full-body movement and gesture and to control in real time the generation of audio-visual feedback. The system analyses the user's full-body movement and gesture in real time, extracts expressive motion features and maps the values of these features onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time is projected on a screen in front of the users, showing their silhouette coloured according to the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform, and the music performance rendering with pDM. Evaluation tests were done with human participants to test the usability of the interface and the effectiveness of the design.

  • 298.
    Castellano, Ginevra
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Leite, Iolanda
    Univ Tecn Lisboa, INESC ID, Oporto, Portugal.; Univ Tecn Lisboa, Inst Super Tecn, Oporto, Portugal..
    Paiva, Ana
    Univ Tecn Lisboa, INESC ID, Oporto, Portugal.; Univ Tecn Lisboa, Inst Super Tecn, Oporto, Portugal..
    Detecting perceived quality of interaction with a robot using contextual features (2017). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 41, no. 5, pp. 1245-1261. Article in journal (Refereed)
    Abstract [en]

    This work aims to advance the state of the art in exploring the role of task, social context and their interdependencies in the automatic prediction of affective and social dimensions in human-robot interaction. We explored several SVM-based models with different features extracted from a set of context logs collected in a human-robot interaction experiment in which children play a chess game with a social robot. The features include information about the game and the social context at the interaction level (overall features) and at the game-turn level (turn-based features). While overall features capture game and social context at the interaction level, turn-based features attempt to encode the dependencies of game and social context at each turn of the game. Results showed that game and social context-based features can be successfully used to predict dimensions of quality of interaction with the robot. In particular, overall features proved to perform equally well or better than turn-based features, and game context-based features proved more effective than social context-based features. Our results show that the interplay between game and social context-based features, combined with features encoding their dependencies, leads to higher recognition performance for a subset of dimensions.

  • 299.
    Ceco, Ema
    Linköpings universitet, Institutionen för systemteknik, Datorseende.
    Image Analysis in the Field of Oil Contamination Monitoring (2011). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Monitoring wear particles in lubricating oils allows specialists to evaluate the health and functionality of a mechanical system. The main analysis techniques available today are manual particle analysis and automatic optical analysis. Manual particle analysis is effective and reliable since the analyst continuously sees what is being counted. The drawback is that the technique is quite time demanding and dependent on the skills of the analyst. Automatic optical particle counting is a closed system that does not allow the objects counted to be observed in real time, which has resulted in a number of sources of error for the instrument. In this thesis a new method for counting particles, based on light microscopy with image analysis, is proposed. It has proven to be a fast and effective method that eliminates the sources of error of the previously described methods. The new method correlates very well with manual analysis, which is used as a reference method throughout this study. Size estimation of particles and detection of metallic particles have also been shown to be possible with the current image analysis setup. With more advanced software and analysis instrumentation, the image analysis method could be further developed into a decision-based machine capable of declaring which wear mode is occurring in a mechanical system.
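
    The counting step described above can be illustrated with standard thresholding and connected-component labelling; the threshold, size cutoff and pixel scale below are hypothetical and the sketch is not the thesis implementation.

        import numpy as np
        from scipy import ndimage

        def count_particles(gray, threshold=100, min_area_px=20, um_per_px=1.2):
            """gray: 2D bright-field microscope image in which particles appear dark."""
            mask = gray < threshold                            # dark particles on a bright oil film
            labels, n = ndimage.label(mask)                    # connected-component labelling
            areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
            areas = areas[areas >= min_area_px]                # reject sub-resolution specks
            diameters_um = 2.0 * np.sqrt(areas / np.pi) * um_per_px   # equivalent-circle diameter
            return len(areas), diameters_um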

  • 300.
    Cedernaes, Erasmus
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion.
    Runway detection in LWIR video: Real time image processing and presentation of sensor data (2016). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Runway detection in long wavelength infrared (LWIR) video could potentially increase the number of successful landings by increasing the situational awareness of pilots and verifying a correct approach. A method for detecting runways in LWIR video was therefore proposed and evaluated for robustness, speed and FPGA acceleration.

    The proposed algorithm improves the detection probability by making assumptions about the runway's appearance during approach, as well as by using a modified Hough line transform and a symmetric search for peaks in the accumulator returned by the Hough line transform.

    A video chain was implemented on a Xilinx ZC702 development card with input and output via HDMI through an expansion card. The video frames were buffered to RAM, and the detection algorithm ran on the CPU, which, however, did not meet the real-time requirement. Strategies were proposed that would improve the processing speed through either hardware acceleration or algorithmic changes.
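
    For context, the sketch below shows a plain OpenCV line-detection pass on a single frame; the modified Hough transform and the symmetric peak search described in the thesis are not reproduced here, and the file name and thresholds are placeholders.

        import cv2
        import numpy as np

        frame = cv2.imread("lwir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
        edges = cv2.Canny(frame, 50, 150)
        # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=60, maxLineGap=10)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                # Keep roughly vertical segments; during approach the runway edges are
                # approximately symmetric about the image centre line.
                angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
                if 60 < angle < 120:
                    cv2.line(frame, (x1, y1), (x2, y2), 255, 2)
        cv2.imwrite("runway_candidates.png", frame)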
