Search results 1601 - 1636 of 1636
  • 1601.
    Åhlén, Julia
    Uppsala University, Interfaculty Units, Centre for Image Analysis.
    Colour Correction of Underwater Images Using Spectral Data, 2005. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    For marine science there is sometimes a need to perform underwater photography. The optical properties of light cause severe quality problems for underwater photography: light of different energies is absorbed at highly different rates under water, causing significant bluishness of the images. If the colour-dependent attenuation under water can be properly estimated, it should be possible to use computerised image processing to colour correct digital images using Beer's Law.

    In this thesis we have developed such estimation and correction methods; they have become progressively more sophisticated and more accurate, giving successively better correction results. Estimating downwelling attenuation coefficients from multi- or hyperspectral data is the basis for automatic colour restoration of images taken under water. The results indicate that unique and precise coefficients can be obtained for each diving site.

    All standard digital cameras have built-in white balancing and colour enhancement functions designed to make the images as aesthetically pleasing as possible. In most cameras these functions cannot be switched off, and the algorithms used are proprietary and undocumented. However, these enhancement functions can be estimated. Applying their reverse creates un-enhanced images, and we show that our algorithms for underwater colour correction work significantly better when applied to such images.

    Finally, we have developed a method that uses point spectra from a spectrometer together with RGB colour images from a camera to generate pseudo-hyperspectral images. Each of these can then be colour corrected, and the corrected images can be weighted together in the proportions needed to create new, correct RGB images. This method is somewhat computationally demanding but gives very encouraging results.

    The algorithms and applications presented in this thesis show that automatic colour correction of underwater images can increase the credibility of data taken underwater for marine scientific purposes.
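    The Beer's Law correction described above can be sketched as follows. This is an illustration, not the thesis implementation: the attenuation coefficients and the `correct_beer` helper are hypothetical, since the thesis estimates site-specific coefficients from spectral data.

```python
import numpy as np

# Hypothetical per-channel diffuse attenuation coefficients (1/m) for an
# RGB camera; red light is absorbed fastest under water. Real values must
# be estimated per diving site from multi- or hyperspectral measurements.
K_RGB = np.array([0.30, 0.07, 0.04])

def correct_beer(image, depth_m, k=K_RGB):
    """Undo water-column attenuation via Beer's Law, I(z) = I0 * exp(-k z).

    image: float array (H, W, 3) with values in [0, 1], taken at depth_m metres.
    Returns the estimated colour-corrected (surface-light) image.
    """
    corrected = image.astype(float) * np.exp(k * depth_m)  # invert exp decay
    return np.clip(corrected, 0.0, 1.0)
```

    Inverting the enhancement functions of the camera, as the abstract notes, would have to happen before this step so that the channel values actually obey Beer's Law.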

  • 1602. Åhlén, Julia
    et al.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Automatic Water Body Extraction From Remote Sensing Images Using Entropy, 2015. In: SGEM2015 Conference Proceedings, 2015, Vol. 2, p. 517-524. Conference paper (Refereed)
    Abstract [en]

    This research focuses on automatic extraction of river banks and other inland waters from remote sensing images. There are no up-to-date accessible databases of rivers and most other water objects for modelling purposes. The main reason is that some regions are hard to access with traditional ground-truth techniques, so the boundaries of river banks are uncertain at many geographical positions. Another reason is the limitations of the widely applied method for extraction of water bodies, the normalized-difference water index (NDWI). A novel approach to extract water bodies is based on pixel-level variability, or entropy; however, while such methods work reasonably well on high-spatial-resolution images, their performance has not been verified on moderate- or low-resolution images. Problems include the identification of mixed water pixels, and features such as roads built along river banks can be misclassified as rivers. In this work we propose automatic extraction of river banks using image entropy combined with NDWI identification. In this study only moderate-spatial-resolution Landsat TM images are tested. Areas of interest include both major river banks and inland lakes. Calculating entropy alone on images of such coarse spatial resolution would misidentify water bodies, since some open or urban areas exhibit the same small variation of pixel values. Image entropy is therefore calculated with a modification that incorporates a local normalization index, or variability coefficient. NDWI produces an image in which clear water differs strongly from other land features. We present an algorithm that applies NDWI prior to entropy processing, so that the bands used to calculate it are chosen in clear connection to clearly discernible water-body features. As a result we obtain a clear segmentation of the water bodies from the remote sensing images and verify the coordinates against a given geographic reference.
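    The two ingredients named in the abstract, NDWI and windowed entropy, can be sketched minimally as below. The function names, window size and bin count are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters NDWI = (G - NIR) / (G + NIR): water tends toward +1."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + eps)

def local_entropy(band, win=5, bins=16):
    """Shannon entropy in a sliding window; water areas are low-variability,
    but so are some open/urban areas, hence the NDWI pre-selection."""
    H = np.zeros_like(band, dtype=float)
    r = win // 2
    # quantize gray values into a small number of bins for the histogram
    q = np.digitize(band, np.linspace(band.min(), band.max(), bins))
    for i in range(r, band.shape[0] - r):
        for j in range(r, band.shape[1] - r):
            patch = q[i - r:i + r + 1, j - r:j + r + 1].ravel()
            p = np.bincount(patch, minlength=bins + 2) / patch.size
            p = p[p > 0]
            H[i, j] = -np.sum(p * np.log2(p))
    return H
```

    In the spirit of the paper, NDWI would first mask candidate water pixels, and the entropy map would then reject low-variability non-water areas misidentified by variability alone.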

  • 1603.
    Åhlén, Julia
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Land management, GIS.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Uppsala University, Department of Information Technology, Sweden .
    Automatic water body extraction from remote sensing images using entropy, 2015. In: Proceedings of the International Multidisciplinary Scientific GeoConference SGEM, 2015, Vol. 4, p. 517-524. Conference paper (Refereed)
    Abstract [en]

    This research focuses on automatic extraction of river banks and other inland waters from remote sensing images. There are no up-to-date accessible databases of rivers and most other water objects for modelling purposes. The main reason is that some regions are hard to access with traditional ground-truth techniques, so the boundaries of river banks are uncertain at many geographical positions. Another reason is the limitations of the widely applied method for extraction of water bodies, the normalized-difference water index (NDWI). A novel approach to extract water bodies is based on pixel-level variability, or entropy; however, while such methods work reasonably well on high-spatial-resolution images, their performance has not been verified on moderate- or low-resolution images. Problems include the identification of mixed water pixels, and features such as roads built along river banks can be misclassified as rivers. In this work we propose automatic extraction of river banks using image entropy combined with NDWI identification. In this study only moderate-spatial-resolution Landsat TM images are tested. Areas of interest include both major river banks and inland lakes. Calculating entropy alone on images of such coarse spatial resolution would misidentify water bodies, since some open or urban areas exhibit the same small variation of pixel values. Image entropy is therefore calculated with a modification that incorporates a local normalization index, or variability coefficient. NDWI produces an image in which clear water differs strongly from other land features. We present an algorithm that applies NDWI prior to entropy processing, so that the bands used to calculate it are chosen in clear connection to clearly discernible water-body features. As a result we obtain a clear segmentation of the water bodies from the remote sensing images and verify the coordinates against a given geographic reference.

  • 1604.
    Åhlén, Julia
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Urban and regional planning/GIS-institute.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Early Recognition of Smoke in Digital Video, 2010. In: Advances in Communications, Computers, Systems, Circuits and Devices: European Conference of Systems, ECS'10, European Conference of Circuits Technology and Devices, ECCTD'10, European Conference of Communications, ECCOM'10, ECCS'10 / [ed] Mladenov, V; Psarris, K; Mastorakis, N; Caballero, A; Vachtsevanos, G, Athens: World Scientific and Engineering Academy and Society, 2010, p. 301-306. Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for direct smoke detection from video without enhancement pre-processing steps. Smoke is characterized by transparency, gray color and irregularities in motion, which are hard to describe with basic image features. A method for robust smoke description using a color balancing algorithm and turbulence calculation is presented in this work. Background extraction is used as a first step in processing; all moving objects are candidates for smoke. We make use of the Gray World algorithm and compare the results with the original video sequence in order to extract image features within a particular gray-scale interval. As a last step we calculate the shape complexity of the turbulent phenomena and apply it to the incoming video stream. As a result we extract only smoke from the video. Features such as shadows, illumination changes and people will not be mistaken for smoke by the algorithm. This method gives an early indication of smoke in the observed scene.
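    The Gray World step mentioned above can be sketched as follows. This is the textbook algorithm (scale each channel so its mean matches the global mean), not necessarily the exact variant used in the paper.

```python
import numpy as np

def gray_world(image):
    """Gray World color balance: assumes the scene averages to gray, so each
    channel is rescaled until its mean equals the global mean intensity."""
    img = image.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gray = channel_means.mean()                      # global mean intensity
    balanced = img * (gray / channel_means)          # per-channel gain
    return np.clip(balanced, 0.0, 1.0)
```

    Comparing the balanced frame against the original, as the abstract describes, highlights regions whose color cast changes under balancing, which helps isolate the gray, transparent appearance of smoke.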

  • 1605.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Knowledge Based Single Building Extraction and Recognition, 2014. In: Proceedings WSEAS International Conference on Computer Engineering and Applications, 2014, p. 29-35. Conference paper (Refereed)
    Abstract [en]

    Building facade extraction is the primary step in the recognition process in outdoor scenes. It is also a challenging task, since each building can be viewed from different angles or under different lighting conditions. In outdoor imagery, regions such as sky, trees and pavement interfere with successful building facade recognition. In this paper we propose a knowledge-based approach to automatically segment out the whole facade, or major parts of it, from an outdoor scene. The found building regions are then subjected to a recognition process. The system is composed of two modules: a facade segmentation module and a facade recognition module. In the segmentation module, color processing and object position coordinates are used. In the recognition module, Chamfer metrics are applied. In a real-time recognition scenario, the image with a building is first analyzed in order to extract the facade region, which is then compared to a database of feature descriptors in order to find a match. The results show that the recognition rate depends on the precision of the building extraction part, which in turn depends on the homogeneity of the facade colors.
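    The Chamfer metric used in the recognition module can be illustrated with a brute-force sketch over binary edge maps; the paper's actual pipeline (facade segmentation, feature descriptors, database lookup) is not reproduced here.

```python
import numpy as np

def chamfer_distance(edges_query, edges_template):
    """Mean distance from each template edge pixel to its nearest query edge
    pixel. Brute force over pixel coordinates; fine for small edge maps
    (real systems precompute a distance transform of the query edges)."""
    q = np.argwhere(edges_query)     # (Nq, 2) edge coordinates
    t = np.argwhere(edges_template)  # (Nt, 2) edge coordinates
    if len(q) == 0 or len(t) == 0:
        return np.inf
    # pairwise Euclidean distances, then nearest query edge per template pixel
    d = np.linalg.norm(t[:, None, :] - q[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

    A low Chamfer distance between an extracted facade's edges and a database entry indicates a likely match; identical edge maps score exactly zero.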

  • 1606.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Time-Space Visualisation of Amur River Channel Changes due to Flooding Disaster, 2014. In: Proceedings of International Multidisciplinary Scientific GeoScience Conference (SGEM), 2014. Conference paper (Refereed)
    Abstract [en]

    The analysis of flooding levels is a highly complex temporal and spatial assessment task that involves estimating distances between references in geographical space as well as estimating instances along the time-line that coincide with given spatial locations. This work aims to interactively explore the changes of the Amur River boundaries caused by the severe flooding in September 2013. In our analysis of river bank changes we use satellite imagery (Landsat 7) covering the interval from July 2003 until February 2014 to extract the parts belonging to the Amur River. The image data is pre-processed using low-level image processing techniques prior to visualization; pre-processing extracts information about the boundaries of the river and transforms it into a vectorized format suitable as input for the subsequent visualization. We develop visualization tools to explore the spatial and temporal relationships in the change of the river banks. In particular, the visualization allows for exploring specific geographic locations and their proximity to the river/floods at arbitrary times. We propose a time-space visualization that emanates from edge detection, morphological operations and boundary statistics on Landsat 2D imagery in order to extract the borders of the Amur River. The visualization uses the time-space-cube metaphor: a 3D rectilinear context in which the 2D geographical coordinate system is extended with a time-axis pointing along the third Cartesian axis. Such visualization facilitates analysis of the channel shape of the Amur River, enabling conclusions regarding the defined problem. As a result we demonstrate our time-space visualization for the river Amur and, using some amount of geographical point data as a reference, we suggest an adequate method of interpolation or imputation that can be employed to estimate values at a given location and time.
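    The time-space-cube metaphor can be sketched minimally: stack per-date binary water masks along a time axis, so that one slice gives the channel shape at a date and a fixed map location gives that point's flooding history. Function names here are hypothetical illustrations.

```python
import numpy as np

def river_space_time_cube(masks):
    """Stack per-date binary water masks into a (time, y, x) cube: the 2D
    geographic plane extended with a time axis (the time-space cube)."""
    return np.stack(masks, axis=0).astype(bool)

def water_fraction(cube, y, x):
    """Fraction of observed dates at which map location (y, x) was water,
    i.e. that location's flooding history along the cube's time axis."""
    return cube[:, y, x].mean()
```

    Interpolation between observed dates, as suggested in the abstract, would then estimate the water state at times and locations not directly covered by a Landsat scene.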

  • 1607.
    Åhlén, Julia
    et al.
    Högskolan i Gävle.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Liu, Fei
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Evaluation of the Automatic Methods for Building Extraction, 2014. In: International Journal of Computers and Communications, ISSN 2074-1294, Vol. 8, p. 171-176. Article in journal (Refereed)
  • 1608.
    Åhlén, Julia and Sundgren, David
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bottom Reflectance Influence on a Color Correction Algorithm for Underwater Images, 2003. In: 13th Scandinavian Conference, SCIA 2003, Göteborg, Sweden, June 29-July 2, 2003, p. 922-926. Conference paper (Refereed)
  • 1609.
    Åhlén, Julia
    et al.
    Uppsala universitet.
    Sundgren, David
    Stockholms universitet.
    Bottom Reflectance Influence on a Color Correction Algorithm for Underwater Images, 2003. In: Proceedings of the 13th Scandinavian Conference on Image Analysis / [ed] Bigun, J., Gustavsson, T., Berlin: Springer, 2003, p. 922-926. Conference paper (Refereed)
    Abstract [en]

    Diminishing the negative effects the water column introduces on digital underwater images is the aim of a color correction algorithm presented by the authors in a previous paper. The present paper describes an experimental result and a set of calculations for determining the impact of bottom reflectance on the algorithm's performance. The concept is based on the estimation of the relative reflectance of various bottom types such as sand, bleached corals and algae. We describe the adverse effects of extremely low and high bottom reflectances on the algorithm.

  • 1610.
    Åhlén, Julia
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Sundgren, David
    KTH.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Pre-Processing of Underwater Images Taken in Shallow Water for Color Reconstruction Purposes, 2005. In: IASTED Proceedings (479): IASTED 7th Conference on Signal and Image Processing, 2005. Conference paper (Refereed)
    Abstract [en]

    Coral reefs are monitored with different techniques in order to examine their health. Digital cameras, which provide an economically defendable tool for marine scientists to collect underwater data, tend to produce bluish images due to severe absorption of light at longer wavelengths. In this paper we study the possibilities of correcting for this color distortion through image processing. The decrease of red light by depth can be predicted by Beer's Law. Another parameter that has been taken into account is the image enhancement functions built into the camera. We use a spectrometer and a reflectance standard to obtain the data needed to approximate the joint effect of these functions. This model is used to pre-process the underwater images taken by digital cameras so that the red, green and blue channels show correct values before the images are subjected to correction for the effects of the water column through application of Beer's Law. This process is fully automatic and the amount of processed images is limited only by the speed of the computer system. Experimental results show that the proposed method works well for correcting images taken at different depths with two different cameras.

  • 1611.
    Åhlén, Julia
    et al.
    University of Gävle, Department of Mathematics, Natural and Computer Sciences, Ämnesavdelningen för datavetenskap.
    Sundgren, David
    University of Gävle, Department of Mathematics, Natural and Computer Sciences, Ämnesavdelningen för matematik och statistik.
    Bengtsson, Ewert
    Pre-Processing of Underwater Images Taken in Shallow Waters for Color Reconstruction Purposes, 2005. In: Proceedings of the 7th IASTED International Conference on Signal and Image Processing, 2005. Conference paper (Refereed)
    Abstract [en]

    Coral reefs are monitored with different techniques in order to examine their health. Digital cameras, which provide an economically defendable tool for marine scientists to collect underwater data, tend to produce bluish images due to severe absorption of light at longer wavelengths. In this paper we study the possibilities of correcting for this color distortion through image processing. The decrease of red light by depth can be predicted by Beer's law. Another parameter that has to be taken into account is the image enhancement functions built into the camera. We use a spectrometer and a reflectance standard to obtain the data needed to approximate the joint effect of these functions. This model is used to pre-process the underwater images taken by digital cameras so that the red, green and blue channels show correct values before the images are subjected to correction for the effects of the water column through application of Beer's law. This process is fully automatic and the amount of processed images is limited only by the speed of the computer system. Experimental results show that the proposed method works well for correcting images taken at different depths with two different cameras.

  • 1612.
    Åhlén, Julia
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Sundgren, David
    KTH.
    Lindell, Tommy
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Teknisk-naturvetenskapliga vetenskapsområdet, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Dissolved Organic Matters Impact on Colour, 2005. In: Image Analysis: 14th Scandinavian Conference, SCIA 2005, 2005, p. 1148-1156. Conference paper (Refereed)
    Abstract [en]

    The natural properties of the water column usually affect underwater imagery by suppressing high-energy light. In applications such as color correction of underwater images, estimation of water column parameters is crucial. Diffuse attenuation coefficients are estimated and used for further processing of data taken under water. The coefficients give information on how fast light of different wavelengths decreases with increasing depth. Based on exact depth measurements and data from a spectrometer, the downwelling irradiance is calculated. Chlorophyll concentration and a yellow substance factor contribute to a great variety of attenuation coefficient values at different depths. By taking advantage of variations in depth, a method is presented to estimate the influence of dissolved organic matter and chlorophyll on color correction. Attenuation coefficients that depend on the concentration of dissolved organic matter in water indicate how well any spectral band is suited for the color correction algorithm.
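    Estimating a diffuse attenuation coefficient from irradiance measured at two depths follows directly from Beer's law; a minimal sketch (the function name is assumed):

```python
import numpy as np

def diffuse_attenuation(e_upper, e_lower, z_upper, z_lower):
    """Estimate the diffuse attenuation coefficient Kd (1/m) per spectral
    band from downwelling irradiance at two depths, assuming
    E(z) = E(z_upper) * exp(-Kd * (z - z_upper))."""
    e1 = np.asarray(e_upper, float)
    e2 = np.asarray(e_lower, float)
    return np.log(e1 / e2) / (z_lower - z_upper)
```

    Bands whose Kd varies strongly with dissolved organic matter are, per the abstract, less suited for the color correction algorithm than more stable bands.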

  • 1613.
    Åkerlind, Christina
    et al.
    Linköping University, Department of Physics, Chemistry and Biology. Linköping University, Faculty of Science & Engineering. FOI, Linköping, Sweden.
    Fagerström, Jan
    FOI, Linköping, Sweden.
    Hallberg, Tomas
    FOI, Linköping, Sweden.
    Kariis, Hans
    FOI, Linköping, Sweden.
    Evaluation criteria for spectral design of camouflage, 2015. In: Proc. SPIE 9653, Target and Background Signatures / [ed] Karin U. Stein; Ric H. M. A. Schleijpen, SPIE - International Society for Optical Engineering, 2015, Vol. 9653, Art. no. 9653-2. Conference paper (Refereed)
    Abstract [en]

    In the development of visual (VIS) and infrared (IR) camouflage for signature management, the aim is to design the surface properties of an object to spectrally match or adapt to a background, thereby minimizing the contrast perceived by a threatening sensor. The so-called 'ladder model' relates the requirements for task measure of effectiveness with surface structure properties through the steps signature effectiveness and object signature. It is intended to link material properties via platform signature to military utility and vice versa. Spectral design of a surface aims to give it a desired wavelength-dependent optical response to fit a specific application of interest. Six evaluation criteria were stated, with the aim of aiding the process of putting requirements on camouflage and of evaluating it. The six criteria correspond to properties such as reflectance, gloss, emissivity and degree of polarization, as well as dynamic properties and broadband or multispectral properties. These criteria have previously been exemplified on different kinds of materials and investigated separately. Anderson and Åkerlind further point out that the six criteria have rarely been considered or described all together in one and the same publication. The specific level of requirement for the different properties must be specified individually for each situation and environment to minimize the contrast between target and background. The criteria, or properties, are not totally independent of one another; how they are correlated is part of the theme of this paper. However, due to space limitations, not all of the interconnections between the six criteria are considered in this work.
    The ladder step prior to digging into the different material composition possibilities and the choice of suitable materials and structures (not covered here) includes the object signature and the decision of what the spectral response should be when intended for a specific environment. The chosen spectral response should give a low detection probability (DP). How detection probability connects to image analysis tools and the implementation of the six criteria is part of this work.

  • 1614.
    Åström, Freddie
    et al.
    Heidelberg University, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Baravdish, George
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Mapping-Based Image Diffusion, 2017. In: Journal of Mathematical Imaging and Vision, ISSN 0924-9907, E-ISSN 1573-7683, Vol. 57, no. 3, p. 293-323. Article in journal (Refereed)
    Abstract [en]

    In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in the literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and the geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present, such as gamma correction and targeted value-range filtering. We also study general denoising performance, where we show results comparable to dedicated PDE-based state-of-the-art methods.
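    For orientation, the PDE-based diffusion family this work is compared against can be illustrated with the classical Perona-Malik scheme; this baseline is not the paper's tensor-based functional, only a well-known relative.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, dt=0.2):
    """Classical Perona-Malik nonlinear diffusion: smooth within regions
    while the conductivity g = exp(-(|grad u|/kappa)^2) suppresses
    diffusion across strong edges. Periodic boundaries via np.roll."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # explicit update; dt * 4 < 1 keeps the scheme stable
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

    Tensor-based functionals like the one in the article generalize this scalar conductivity to a direction-dependent (anisotropic) diffusion tensor.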

  • 1615.
    Åström, Freddie
    et al.
    Heidelberg Collaboratory for Image Processing Heidelberg University Heidelberg, Germany.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Scharr, Hanno
    BG-2: Plant Sciences Forschungszentrum Jülich 52425, Jülich, Germany.
    Adaptive sharpening of multimodal distributions, 2015. In: Colour and Visual Computing Symposium (CVCS), 2015 / [ed] Marius Pedersen and Jean-Baptiste Thomas, IEEE, 2015, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    In this work we derive a novel framework rendering measured distributions into approximated distributions of their mean. This is achieved by exploiting constraints imposed by the Gauss-Markov theorem from estimation theory, which is valid for mono-modal Gaussian distributions. It formulates the relation between the variance of measured samples and the so-called standard error, the standard deviation of their mean. However, multi-modal distributions are present in numerous image processing scenarios, e.g. local gray value or color distributions at object edges, or orientation or displacement distributions at occlusion boundaries in motion estimation or stereo. Our method not only aims at estimating the modes of these distributions together with their standard error, but at describing the whole multi-modal distribution. We utilize the method of channel representation, a kind of soft histogram also known as population codes, to represent distributions in a non-parametric, generic fashion. Here we apply the proposed scheme to general mono- and multimodal Gaussian distributions to illustrate its effectiveness and compliance with the Gauss-Markov theorem.
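    The channel representation (soft histogram) with a truncated cos² basis can be sketched as follows; the centers and width below are illustrative. With channel spacing equal to the width, a single sample's activations sum to the constant 3/2, a standard property of cos² channels.

```python
import numpy as np

def channel_encode(samples, centers, width=1.0):
    """Soft-histogram ("channel") encoding with the truncated cos^2 basis:
    each sample activates the up-to-three channels nearest to it, and the
    summed encoding of many samples approximates their distribution."""
    d = (np.asarray(samples, float)[:, None] - np.asarray(centers, float)[None, :]) / width
    act = np.cos(np.pi * d / 3.0) ** 2
    act[np.abs(d) >= 1.5] = 0.0  # truncate outside the basis support
    return act.sum(axis=0)
```

    Because the encoding is local, samples drawn from different modes activate disjoint channel groups, which is what lets the representation describe a whole multi-modal distribution rather than only its mean.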

  • 1616.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Adaptive Supervision Online Learning for Vision Based Autonomous Systems, 2016. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Driver assistance systems in modern cars now show clear steps towards autonomous driving, and improvements are presented at a steady pace. The total number of sensors has also decreased since the vehicles of the initial DARPA challenge, which more resembled a pile of sensors with a car underneath. Still, anyone driving a tele-operated toy using a video link demonstrates that a single camera provides enough information about the surrounding world.

    Most lane assist systems are developed for highway use and depend on visible lane markers. However, lane markers may not be visible due to snow or wear, and there are roads without lane markers. With a slightly different approach, autonomous road following can be obtained on almost any kind of road. Using real-time online machine learning, a human driver can demonstrate driving on a road type unknown to the system, and after some training the system can seamlessly take over. The demonstrator system presented in this work has shown the capability of learning to follow different types of roads as well as learning to follow a person. The system is based solely on vision, mapping camera images directly to control signals.

    Such systems need the ability to handle multiple-hypothesis outputs as there may be several plausible options in similar situations. If there is an obstacle in the middle of the road, the obstacle can be avoided by going on either side. However the average action, going straight ahead, is not a viable option. Similarly, at an intersection, the system should follow one road, not the average of all roads.  

    To this end, an online machine learning framework is presented where inputs and outputs are represented using the channel representation. The learning system is structurally simple and computationally light, based on neuropsychological ideas presented by Donald Hebb over 60 years ago. Nonetheless the system has shown a capability to learn advanced tasks. Furthermore, the structure of the system permits a statistical interpretation where a non-parametric representation of the joint distribution of input and output is generated. Prediction generates the conditional distribution of the output, given the input.

    The statistical interpretation motivates the introduction of priors. In cases with multiple options, such as at intersections, a prior can select one mode in the multimodal distribution of possible actions. In addition to the ability to learn from demonstration, a possibility for immediate reinforcement feedback is presented. This allows for a system where the teacher can choose the most appropriate way of training the system, at any time and at her own discretion.  

    The theoretical contributions include a deeper analysis of the channel representation. A geometrical analysis illustrates the cause of decoding bias commonly present in neurologically inspired representations, and measures to counteract it. Confidence values are analyzed and interpreted as evidence and coherence. Further, the use of the truncated cosine basis function is motivated.  

    Finally, a selection of applications is presented, such as autonomous road following by online learning and head pose estimation. A method founded on the same basic principles is used for visual tracking, where the probabilistic representation of target pixel values allows for changes in target appearance.

  • 1617.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    LEAP, A Platform for Evaluation of Control Algorithms2010Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Most people are familiar with the BRIO labyrinth game and the challenge of guiding the ball through the maze. The goal of this project was to use this game to create a platform for evaluation of control algorithms. The platform was used to evaluate a few different controlling algorithms, both traditional automatic control algorithms as well as algorithms based on online incremental learning.

    The game was fitted with servo actuators for tilting the maze. A camera together with computer vision algorithms were used to estimate the state of the game. The evaluated controlling algorithm had the task of calculating a proper control signal, given the estimated state of the game.

    The evaluated learning systems used traditional control algorithms to provide initial training data. After initial training, the systems learned from their own actions and after a while they outperformed the controller used to provide initial training.

  • 1618.
    Öfjäll, Kristoffer
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning for Robot Vision2014Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35].

    Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low level visual input is often of high dimensionality such that the learning system needs to handle large inputs. Second, visual information is often ambiguous such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods.

    This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored, learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state of the art batch learning methods.
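    The Hebbian association underlying qHebb can be illustrated in a few lines. Note this is only the plain linear outer-product rule on channel vectors; the actual qHebb method uses a non-linear update, and the channel dimensions and values below are made up for the sketch:

    ```python
    def hebbian_update(C, a, b, rate=1.0):
        """Plain Hebbian outer-product update of the linkage matrix C:
        co-active input/output channels strengthen their association.
        (qHebb replaces this with a non-linear variant.)"""
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                C[i][j] += rate * ai * bj

    def predict(C, a):
        """Project an input channel vector through the learned associations,
        giving a channel-coded (possibly multi-modal) output distribution."""
        n_out = len(C[0])
        return [sum(C[i][j] * a[i] for i in range(len(a))) for j in range(n_out)]

    # Toy example with 3 input and 3 output channels:
    C = [[0.0] * 3 for _ in range(3)]
    hebbian_update(C, a=[1.0, 0.2, 0.0], b=[0.0, 1.0, 0.1])  # input ch. 0 with output ch. 1
    hebbian_update(C, a=[0.0, 0.2, 1.0], b=[0.1, 0.0, 1.0])  # input ch. 2 with output ch. 2
    p = predict(C, [1.0, 0.0, 0.0])
    # The strongest response is in output channel 1, the one that was
    # co-active with input channel 0 during training.
    ```

    Training and prediction both touch only the linkage matrix, which is why such a scheme can run incrementally at video rate.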

  • 1619.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Biologically Inspired Online Learning of Visual Autonomous Driving2014In: Proceedings British Machine Vision Conference 2014 / [ed] Michel Valstar; Andrew French; Tony Pridmore, BMVA Press , 2014, p. 137-156Conference paper (Refereed)
    Abstract [en]

    While autonomous driving systems accumulate more and more sensors as well as highly specialized visual features and engineered solutions, the human visual system provides evidence that visual input and simple low level image features are sufficient for successful driving. In this paper we propose extensions (non-linear update and coherence weighting) to one of the simplest biologically inspired learning schemes (Hebbian learning). We show that this is sufficient for online learning of visual autonomous driving, where the system learns to directly map low level image features to control signals. After the initial training period, the system seamlessly continues autonomously. This extended Hebbian algorithm, qHebb, has constant bounds on time and memory complexity for training and evaluation, independent of the number of training samples presented to the system. Further, the proposed algorithm compares favorably to state of the art engineered batch learning algorithms.

  • 1620.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game2012In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2012, 2012Conference paper (Other academic)
    Abstract [en]

    The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behavior is in general easy to model but close to the obstacles there are severe non-linearities. Additionally, the far-from-flat surface on which the ball rolls provides for changing dynamics depending on the ball position.

    The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. Instead, a simple deterministic control algorithm is combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller.

    A vision system and image analysis is used to estimate the ball position while a combination of a PID controller and a learning controller based on LWPR is used to learn to steer the ball through the maze.
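    For reference, the PID part of such a setup is a standard discrete control loop. A minimal sketch with hypothetical gains and a trivial stand-in plant (not the labyrinth model or the gains used on the platform):

    ```python
    class PID:
        """Minimal discrete PID controller. Gains and time step are
        illustrative, not tuned for any particular plant."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    # Drive a simplistic first-order plant toward setpoint 1.0:
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
    x = 0.0
    for _ in range(2000):
        u = pid.step(1.0, x)
        x += u * 0.02  # toy plant: velocity proportional to control signal
    ```

    In the evaluation platform described above, a controller of this form produces the initial training data that the learning controller then improves upon.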

  • 1621.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Integrating Learning and Optimization for Active Vision Inverse Kinematics2013In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2013, 2013Conference paper (Other academic)
  • 1622.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Online Learning and Mode Switching for Autonomous Driving from Demonstration2014In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2014, 2014Conference paper (Other academic)
  • 1623.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Online learning of autonomous driving using channel representations of multi-modal joint distributions2015In: Proceedings of SSBA, Swedish Symposium on Image Analysis, 2015, Swedish Society for automated image analysis , 2015Conference paper (Other academic)
  • 1624.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Online Learning of Vision-Based Robot Control during Autonomous Operation2015In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156Chapter in book (Refereed)
    Abstract [en]

    Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.

  • 1625.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Weighted Update and Comparison for Channel-Based Distribution Field Tracking2015In: COMPUTER VISION - ECCV 2014 WORKSHOPS, PT II, Springer, 2015, Vol. 8926, p. 218-231Conference paper (Refereed)
    Abstract [en]

    There are three major issues for visual object trackers: model representation, search and model update. In this paper we address the last two issues for a specific model representation, grid based distribution models by means of channel-based distribution fields. Particularly we address the comparison part of searching. Previous work in the area has used standard methods for comparison and update, not exploiting all the possibilities of the representation. In this work we propose two comparison schemes and one update scheme adapted to the distribution model. The proposed schemes significantly improve the accuracy and robustness on the Visual Object Tracking (VOT) 2014 Challenge dataset.

  • 1626.
    Öfjäll, Kristoffer
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Robinson, Andreas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Visual Autonomous Road Following by Symbiotic Online Learning2016In: Intelligent Vehicles Symposium (IV), 2016 IEEE, 2016, p. 136-143Conference paper (Refereed)
    Abstract [en]

    Recent years have shown great progress in driving assistance systems, approaching autonomous driving step by step. Many approaches rely on lane markers however, which limits the system to larger paved roads and poses problems during winter. In this work we explore an alternative approach to visual road following based on online learning. The system learns the current visual appearance of the road while the vehicle is operated by a human. When driving onto a new type of road, the human driver will drive for a minute while the system learns. After training, the human driver can let go of the controls. The present work proposes a novel approach to online perception-action learning for the specific problem of road following, which makes interchangeable use of supervised learning (by demonstration), instantaneous reinforcement learning, and unsupervised learning (self-reinforcement learning). The proposed method, symbiotic online learning of associations and regression (SOLAR), extends previous work on qHebb-learning in three ways: priors are introduced to enforce mode selection and to drive learning towards particular goals, the qHebb-learning method is complemented with a reinforcement variant, and a self-assessment method based on predictive coding is proposed. The SOLAR algorithm is compared to qHebb-learning and deep learning for the task of road following, implemented on a model RC-car. The system demonstrates an ability to learn to follow paved and gravel roads outdoors. Further, the system is evaluated in a controlled indoor environment which provides quantifiable results. The experiments show that the SOLAR algorithm results in autonomous capabilities that go beyond those of existing methods with respect to speed, accuracy, and functionality.

  • 1627.
    Öfverstedt, Johan
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sladoje, Nataša
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Distance between vector-valued fuzzy sets based on intersection decomposition with applications in object detection2017In: Mathematical Morphology and its Applications to Signal and Image Processing, Springer, 2017, Vol. 10225, p. 395-407Conference paper (Refereed)
    Abstract [en]

    We present a novel approach to measuring distance between multi-channel images, suitably represented by vector-valued fuzzy sets. We first apply the intersection decomposition transformation, based on fuzzy set operations, to vector-valued fuzzy representations to enable preservation of joint multi-channel properties represented in each pixel of the original image. Distance between two vector-valued fuzzy sets is then expressed as a (weighted) sum of distances between scalar-valued fuzzy components of the transformation. Applications to object detection and classification on multi-channel images and heterogeneous object representations are discussed and evaluated subject to several important performance metrics. It is confirmed that the proposed approach outperforms several alternative single- and multi-channel distance measures between information-rich image/object representations.

  • 1628.
    Ögren, Petter
    et al.
    Mech. & Aerosp. Eng. Dept., Princeton Univ., NJ, USA.
    Leonard, Naomi Ehrich
    Mech. & Aerosp. Eng. Dept., Princeton Univ., NJ, USA.
    A Convergent Dynamic Window Approach to Obstacle Avoidance2005In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 21, no 2, p. 188-195Article in journal (Refereed)
    Abstract [en]

    The dynamic window approach (DWA) is a well-known navigation scheme developed by Fox et al. and extended by Brock and Khatib. It is safe by construction, and has been shown to perform very efficiently in experimental setups. However, one can construct examples where the proposed scheme fails to attain the goal configuration. What has been lacking is a theoretical treatment of the algorithm's convergence properties. Here we present such a treatment by merging the ideas of the DWA with the convergent, but less performance-oriented, scheme suggested by Rimon and Koditschek. Viewing the DWA as a model predictive control (MPC) method and using the control Lyapunov function (CLF) framework of Rimon and Koditschek, we draw inspiration from an MPC/CLF framework put forth by Primbs to propose a version of the DWA that is tractable and convergent.

  • 1629.
    Ögren, Petter
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robinson, John W.C.
    Swedish Defence Research Agency (FOI), Department of Aeronautics .
    A Model Based Approach to Modular Multi-Objective Robot Control2011In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 63, no 2, p. 257-282Article in journal (Refereed)
    Abstract [en]

    Two broad classes of robot controllers are the modular, and the model based approaches. The modular approaches include the Reactive or Behavior Based designs. They do not rely on mathematical system models, but are easy to design, modify and extend. In the model based approaches, a model is used to design a single controller with verifiable system properties. The resulting designs are however often hard to extend, without jeopardizing the previously proven properties. This paper describes an attempt to narrow the gap between the flexibility of the modular approaches, and the predictability of the model based approaches, by proposing a modular design that does the combination, or arbitration, of the different modules in a model based way. By taking the (model based) time derivatives of scalar, Lyapunov-like, objective functions into account, the arbitration module can keep track of the time evolution of the objectives. This enables it to handle objective tradeoffs in a predictable way by finding controls that preserve an important objective that is currently met, while striving to satisfy another, less important one that is not yet achieved. To illustrate the approach a UAV control problem from the literature is solved, resulting in comparable, or better, performance.
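    The arbitration idea can be sketched as a discrete selection among candidate controls. This sketch uses one-step look-ahead on the objective values instead of the paper's model-based analytic time derivatives, and all names and toy objectives are hypothetical:

    ```python
    def arbitrate(x, candidates, step, V1, V2, tol=1e-9):
        """Arbitration sketch: among candidate controls, keep those that do
        not increase the satisfied high-priority objective V1, and of these
        pick the one that most decreases the low-priority objective V2."""
        safe = [u for u in candidates if V1(step(x, u)) <= V1(x) + tol]
        pool = safe if safe else candidates
        return min(pool, key=lambda u: V2(step(x, u)))

    # Toy setup: stay inside the band |x| <= 1 (V1) while moving y toward 5 (V2).
    def step(state, u, dt=0.1):
        return (state[0] + dt * u[0], state[1] + dt * u[1])

    V1 = lambda s: max(0.0, abs(s[0]) - 1.0)  # zero while the band constraint holds
    V2 = lambda s: (s[1] - 5.0) ** 2          # distance-to-goal objective

    candidates = [(1.0, 1.0), (0.0, 1.0), (0.0, -1.0)]
    u = arbitrate((0.95, 0.0), candidates, step, V1, V2)
    # (1.0, 1.0) would leave the band and increase V1, so (0.0, 1.0) is
    # chosen: it preserves V1 = 0 while decreasing V2.
    ```

    This captures the tradeoff described above: an already-met important objective is preserved while a less important one is actively pursued.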

  • 1630.
    Ögren, Petter
    et al.
    Department of Autonomous Systems Swedish Defence Research Agency.
    Winstrand, Maja
    Minimizing Mission Risk in Fuel Constrained UAV Path Planning2008In: Journal of Guidance Control and Dynamics, ISSN 0731-5090, E-ISSN 1533-3884, Vol. 31, no 5, p. 1497-1500Article in journal (Refereed)
  • 1631.
    Öktem, Ozan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Chen, Chong
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). Chinese Academy of Sciences, China.
    Onur Domaniç, N.
    Ravikumar, P.
    Bajaj, C.
    Shape-based image reconstruction using linearized deformations2017In: Inverse Problems, ISSN 0266-5611, E-ISSN 1361-6420, Vol. 33, no 3, article id 035004Article in journal (Refereed)
    Abstract [en]

    We introduce a reconstruction framework that can account for shape related prior information in imaging-related inverse problems. It is a variational scheme that uses a shape functional, whose definition is based on deformable template machinery from computational anatomy. We prove existence and, as a proof of concept, we apply the proposed shape-based reconstruction to 2D tomography with very sparse and/or highly noisy measurements.

  • 1632.
    Örtenberg, Alexander
    et al.
    Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Medicine and Health Sciences.
    Magnusson, Maria
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. Linköping University, Faculty of Medicine and Health Sciences.
    Sandborg, Michael
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Alm Carlsson, Gudrun
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Radiation Physics.
    Malusek, Alexandr
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Medical and Health Sciences, Division of Radiological Sciences. Linköping University, Faculty of Medicine and Health Sciences.
    PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA2016In: Radiation Protection Dosimetry, ISSN 0144-8420, E-ISSN 1742-3406, Vol. 169, no 1-4, p. 405-409Article in journal (Refereed)
    Abstract [en]

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained.

  • 1633.
    Östlund, C
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Ecology and Evolution, Limnology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Flink, P
    Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Ecology and Evolution. Uppsala University, Disciplinary Domain of Science and Technology, Biology, Department of Ecology and Evolution, Limnology.
    Strömbeck, N
    Pierson, D
    Lindell, T
    Mapping of the water quality of Lake Erken, Sweden, from Imaging Spectrometry and Landsat Thematic Mapper2001In: Science of the Total Environment, ISSN 0048-9697, E-ISSN 1879-1026, Vol. 268, no 1-3, p. 139-154Article in journal (Refereed)
    Abstract [en]

    Hyperspectral data have been collected by the Compact Airborne Spectrographic Imager (CASI) and multispectral data by the Landsat Thematic Mapper (TM) instrument for the purpose of mapping lake water quality. Field campaigns have been performed on Lake Erken

  • 1634.
    Östlund, Catherine
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Analysis of Imaging Spectrometer Data with Lake Environment Applications1999Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis the image processing and analysis aspects of imaging spectrometer (IS) data have been investigated for water and wetland applications. The Compact Airborne Spectrographic Imager (CASI) has been the main instrument in the evaluations. To fully benefit from the high spectral and spatial resolution data in the analysis phase, the preprocessing of data is important and has been a focus of this thesis. To restore, improve and evaluate the data, the radiometric calibration, wavelength band positioning, noise and other radiometric anomalies, geometric calibration and atmospheric calibration have been studied. Existing methods have been evaluated, new ones proposed, and the most appropriate methods applied to the data.

    On the image analysis aspects of hyperspectral data sets, spatial true physical structures in the images were studied using data compression and segmentation methods, and a new technique combining compression and colour transformation. The latter was shown to be a fast and objective method to visualise the spatial structures in a large data set.

    The usefulness of IS data in water quality applications was evaluated by developing statistical relationships between image data and data collected in the field. A comprehensive in situ data set, collected along a transect in Lake Erken, Sweden, during a bloom of the cyanobacterium Gloeotrichia echinulata was used. It was found that a correlation of the image data to chlorophyll a and phaeophytine a could be established, but also that the preprocessing of images is important, and that the dynamic character of water is a complicating factor. Aquatic macrophytes in Lake Mälaren, Sweden, were classified. IS data was found to be powerful for these kinds of applications, but the analysis suffered from poor data.

  • 1635.
    Ćurić, Vladimir
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Distance Functions and Their Use in Adaptive Mathematical Morphology2014Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    One of the main problems in image analysis is the comparison of different shapes in images. It is often desirable to determine the extent to which one shape differs from another. This is usually a difficult task because shapes vary in size, length, contrast, texture, orientation, etc. Shapes can be described using sets of points, crisp or fuzzy. Hence, distance functions between sets have been used for comparing different shapes.

    Mathematical morphology is a non-linear theory related to the shape or morphology of features in the image, and morphological operators are defined by the interaction between an image and a small set called a structuring element. Although morphological operators have been extensively used to differentiate shapes by their size, it is not an easy task to differentiate shapes with respect to other features such as contrast or orientation. One approach for differentiation on these type of features is to use data-dependent structuring elements.

    In this thesis, we investigate the usefulness of various distance functions for: (i) shape registration and recognition; and (ii) construction of adaptive structuring elements and functions.

    We examine existing distance functions between sets, and propose a new one, called the Complement weighted sum of minimal distances, where the contribution of each point to the distance function is determined by the position of the point within the set. The usefulness of the new distance function is shown for different image registration and shape recognition problems. Furthermore, we extend the new distance function to fuzzy sets and show its applicability to classification of fuzzy objects.
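    A minimal sketch of the (unweighted) sum of minimal distances between two finite point sets is shown below. The complement weighted variant proposed in the thesis additionally weights each point's term by its position within its set; that weighting is omitted here, so this is only the baseline construction.

```python
import math

def min_dist(p, B):
    """Distance from point p to the nearest point of set B."""
    return min(math.dist(p, q) for q in B)

def sum_min_dist(A, B):
    """Symmetric sum of minimal distances between finite point sets."""
    return (sum(min_dist(p, B) for p in A) +
            sum(min_dist(q, A) for q in B))

A = [(0, 0), (1, 0)]
B = [(0, 1), (1, 1)]
sum_min_dist(A, B)  # each of the 4 points is 1 away -> 4.0
```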

    We propose two different types of adaptive structuring elements from the salience map of the edge strength: (i) the shape of a structuring element is predefined, and its size is determined from the salience map; (ii) the shape and size of a structuring element are dependent on the salience map. Using this salience map, we also define adaptive structuring functions. We also present the applicability of adaptive mathematical morphology to image regularization. The connection between adaptive mathematical morphology and Lasry-Lions regularization of non-smooth functions provides an elegant tool for image regularization.
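    Idea (i) above, a structuring element whose size is driven by a salience map, can be sketched as a 1-D dilation whose window radius shrinks where salience (edge strength) is high, so strong edges are better preserved. The mapping from salience to radius is our assumption for illustration, not the thesis's definition.

```python
# Adaptive flat dilation: the structuring-element radius at sample i is
# reduced in proportion to the salience value there (high salience ->
# small window).
def adaptive_dilate(signal, salience, max_radius=2):
    n = len(signal)
    out = []
    for i in range(n):
        r = max(0, max_radius - int(round(salience[i] * max_radius)))
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(max(signal[lo:hi]))
    return out

sig = [0, 0, 5, 0, 0]
sal = [0.0, 1.0, 0.0, 0.0, 0.0]   # high salience at index 1
res = adaptive_dilate(sig, sal, 1)  # index 1 keeps its value: [0, 0, 5, 5, 0]
```

    With a plain (non-adaptive) radius-1 dilation the result would be [0, 5, 5, 5, 0]; the salient sample resists the dilation.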

  • 1636.
    Šarić, Marin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragić, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Dimensionality Reduction via Euclidean Distance Embeddings2011Report (Other academic)
    Abstract [en]

    This report provides a mathematically thorough review and investigation of Metric Multidimensional scaling (MDS) through the analysis of Euclidean distances in input and output spaces. By combining a geometric approach with modern linear algebra and multivariate analysis, Metric MDS is viewed as a Euclidean distance embedding transformation that converts between coordinate and coordinate-free representations of data. In this work we link Mercer kernel functions, data in infinite-dimensional Hilbert space and coordinate-free distance metrics to a finite-dimensional Euclidean representation. We further set a foundation for a principled treatment of non-linear extensions of MDS as optimization programs on kernel matrices and Euclidean distances.
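    Classical metric MDS, the standard construction the report reviews, can be sketched by double-centering the squared distance matrix to recover a Gram matrix and then embedding via its eigendecomposition; the variable names here are ours.

```python
import numpy as np

def classical_mds(D, k):
    """Embed an n x n Euclidean distance matrix D into R^k."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered data
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the k largest
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                  # n x k coordinates

# Usage: the pairwise distances of a unit square are reproduced exactly
# by a 2-D embedding (up to rotation and reflection).
X = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, 2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

    The recovered coordinates Y differ from X by a rigid transformation, which is exactly the coordinate-free character of the distance representation.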
