1601 - 1650 of 1716
  • 1601.
    Wiberg, Viktor
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Terrain machine learning: A predictive method for estimating terrain model parameters using simulated sensors, vehicle and terrain (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Predicting the trafficability of deformable terrain is a difficult task with applications in, e.g., forestry, agriculture and exploratory missions. The techniques currently used are neither practical, efficient, nor sufficiently accurate, and they are inadequate for certain soil types. An online method that predicts terrain trafficability is of interest for any vehicle that aims to reduce ground damage, improve steering and increase mobility. This thesis presents a novel approach for predicting the model parameters used in modelling a virtual terrain. The model parameters include particle stiffness, tangential friction, rolling resistance and two parameters related to particle plasticity and adhesion. Using multi-body dynamics, both vehicle and terrain can be simulated, which allows for efficient exploration of a great variety of terrains. A vehicle with access to certain sensors can frequently gather sensor data providing information about the vehicle-terrain interaction. The proposed method develops a statistical model which uses the sensor data to predict the terrain model parameters. However, these parameters are specified at the level of individual model particles and do not directly describe bulk properties measurable on a real terrain. Simulations were carried out with a single tracked bogie constrained to move in one direction while traversing flat, homogeneous terrains. The statistical model with the best prediction accuracy was ridge regression using polynomial features and interaction terms of second degree. The model proved capable of predicting particle stiffness, tangential friction and particle plasticity with moderate accuracy. However, the current predictors and training scenarios proved insufficient for estimating particle adhesion and rolling resistance. Nevertheless, this thesis indicates that it should be possible to develop a method which successfully predicts terrain model properties.
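
    A minimal illustrative sketch of the model family named above (ridge regression on second-degree polynomial and interaction features), written with scikit-learn; the arrays below are random stand-ins, not the thesis data:

    ```python
    # Sketch: ridge regression with degree-2 polynomial/interaction features.
    # X_sensors / y_params are hypothetical stand-ins for simulated sensor
    # readings and terrain model parameters; they are not data from the thesis.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X_sensors = rng.normal(size=(500, 6))   # e.g. forces, torques, slip, sinkage
    y_params = rng.normal(size=(500, 5))    # stiffness, friction, plasticity, ...

    model = make_pipeline(
        StandardScaler(),
        PolynomialFeatures(degree=2, include_bias=False),  # adds interaction terms
        Ridge(alpha=1.0),
    )
    model.fit(X_sensors, y_params)
    print(model.predict(X_sensors[:3]))
    ```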

  • 1602.
    Widebäck West, Nikolaus
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Multiple Session 3D Reconstruction using RGB-D Cameras (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis we study the problem of multi-session dense RGB-D SLAM for 3D reconstruction. Multi-session reconstruction can allow users to capture parts of an object that could not easily be captured in one session, due for instance to poor accessibility or user mistakes. We first present a thorough overview of single-session dense RGB-D SLAM and describe the multi-session problem as a loosening of the incremental camera movement and static scene assumptions commonly held in the single-session case. We then implement and evaluate several variations on a system for doing two-session reconstruction as an extension to a single-session dense RGB-D SLAM system.

    The extension from one to several sessions is divided into registering separate sessions into a single reference frame, re-optimizing the camera trajectories, and fusing together the data to generate a final 3D model. Registration is done by matching reconstructed models from the separate sessions using one of two adaptations on a 3D object detection pipeline. The registration pipelines are evaluated with many different sub-steps on a challenging dataset and it is found that robust registration can be achieved using the proposed methods on scenes without degenerate shape symmetry. In particular we find that using plane matches between two sessions as constraints for as much as possible of the registration pipeline improves results.
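
    The rigid-alignment step at the core of such a registration can be sketched as a Kabsch/Procrustes fit; this is only an illustration under the assumption that point correspondences between the two sessions are already given, and it is not the pipeline evaluated in the thesis:

    ```python
    # Sketch: estimate a rigid transform (R, t) aligning session B points to
    # session A points, given hypothetical correspondences. Finding the
    # correspondences (e.g. via plane or keypoint matching, as in the thesis)
    # is the hard part and is not shown here.
    import numpy as np

    def rigid_align(src, dst):
        """Kabsch: least-squares R, t such that dst ~ R @ src + t."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    session_b = np.random.rand(100, 3)                    # hypothetical matched points
    angle = np.deg2rad(10)
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    true_t = np.array([0.1, 0.0, -0.2])
    session_a = session_b @ true_R.T + true_t
    R, t = rigid_align(session_b, session_a)
    print(np.allclose(R, true_R), np.allclose(t, true_t))  # transform is recovered
    ```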

    Several different strategies for re-optimizing camera trajectories using data from both sessions are implemented and evaluated. The re-optimization strategies are based on re-tracking the camera poses from all sessions together, and then optionally optimizing over the full problem as represented on a pose graph. The camera tracking is done by incrementally building and tracking against a TSDF volume, from which a final 3D mesh model is extracted. The whole system is qualitatively evaluated against a realistic dataset for multi-session reconstruction. It is concluded that the overall approach is successful in reconstructing objects from several sessions, but that other fine-grained registration methods would be required in order to achieve multi-session reconstructions that are indistinguishable from single-session results in terms of reconstruction quality.

  • 1603.
    Wikander, Gustav
    Linköping University, Department of Electrical Engineering.
    Three dimensional object recognition for robot conveyor picking (2009). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y-position. The search is conducted using a scale pyramid to improve the search speed. A 3D matching can be done for small tilt angles by using SBM on height data and extending it with additional steps to calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot.

    The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search, the differences in height between all corresponding points of the model and the live image are calculated. By fitting a plane to these differences, the tilt of the object can be calculated. Using this tilt, the model edges are tilted in order to improve the matching at the next scale level.
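
    A minimal sketch of the plane-fitting idea above, using a small RANSAC loop over hypothetical height-difference points (the edge tilting and rescoring steps of the thesis are not shown):

    ```python
    # Sketch: fit a plane z = a*x + b*y + c to height differences with a simple
    # RANSAC loop; the tilt follows from the fitted plane normal. The points
    # are hypothetical stand-ins for model/live height differences.
    import numpy as np

    def fit_plane(pts):
        A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
        coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
        return coef  # (a, b, c)

    def ransac_plane(pts, iters=200, thresh=0.1, seed=0):
        rng = np.random.default_rng(seed)
        best, best_inliers = None, 0
        for _ in range(iters):
            sample = pts[rng.choice(len(pts), 3, replace=False)]
            a, b, c = fit_plane(sample)
            residuals = np.abs(pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c))
            inliers = residuals < thresh
            if inliers.sum() > best_inliers:
                best, best_inliers = fit_plane(pts[inliers]), inliers.sum()
        return best

    pts = np.random.rand(300, 3)            # hypothetical (x, y, height difference)
    a, b, c = ransac_plane(pts)
    normal = np.array([-a, -b, 1.0])
    tilt_deg = np.degrees(np.arccos(normal[2] / np.linalg.norm(normal)))
    print(f"estimated tilt: {tilt_deg:.1f} degrees")
    ```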

    The problems that arise with occlusion and missing data have been studied. Missing data and erroneous data have been thresholded manually after conducting tests where automatic filling of missing data did not noticeably improve the matching. The automatic filling could introduce new false edges and remove true ones, thus lowering the score.

    Experiments have been conducted where objects have been placed at increasing tilt angles. The results show that the matching algorithm is object dependent and correct matches are almost always found for tilt angles less than 10 degrees. This is very similar to the original 2D SBM because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-suited objects correct matches can be achieved at tilt angles of up to 40 degrees.

  • 1604.
    Wilkinson, Tomas
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Brun, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    A novel word segmentation method based on object detection and deep learning (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, p. 231-240. Conference paper (Refereed)
    Abstract [en]

    The segmentation of individual words is a crucial step in several data mining methods for historical handwritten documents. Examples of applications include visual searching for query words (word spotting) and character-by-character text recognition. In this paper, we present a novel method for word segmentation that is adapted from recent advances in computer vision, deep learning and generic object detection. Our method has unique capabilities and it has found practical use in our current research project. It can easily be trained for different kinds of historical documents, uses full grayscale information, and requires neither binarization as pre-processing nor prior segmentation of individual text lines. We evaluate its performance using established error metrics, previously used in competitions for word segmentation, and demonstrate its usefulness for a 15th century handwritten document.

  • 1605.
    Wilkinson, Tomas
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Brun, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Semantic and Verbatim Word Spotting using Deep Neural Networks (2016). In: Proceedings of the 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, p. 307-312. Conference paper (Refereed)
    Abstract [en]

    In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.
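
    As an illustration only (not the paper's network), once word images and query strings are embedded in a common space, spotting reduces to ranking by cosine similarity; the embeddings below are random stand-ins:

    ```python
    # Sketch: rank word-image embeddings against a query embedding by cosine
    # similarity. The embeddings are random stand-ins; in the paper they come
    # from a CNN (for word images) and a string embedding (for queries).
    import numpy as np

    rng = np.random.default_rng(0)
    image_embs = rng.normal(size=(10000, 128))   # one row per word image
    query_emb = rng.normal(size=128)             # query-by-string or query-by-example

    def rank_by_cosine(query, database):
        q = query / np.linalg.norm(query)
        db = database / np.linalg.norm(database, axis=1, keepdims=True)
        scores = db @ q
        return np.argsort(-scores), scores

    order, scores = rank_by_cosine(query_emb, image_embs)
    print(order[:10], scores[order[:10]])        # top-10 retrieved word images
    ```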

  • 1606.
    Wilkinson, Tomas
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Brun, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Visualizing document image collections using image-based word clouds (2015). In: Advances in Visual Computing: 11th International Symposium, ISVC 2015, Las Vegas, NV, USA, December 14-16, 2015, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; Pavlidis, I; Feris, R; McGraw, T; Elendt, M; Kopper, R; Ragan, E; Ye, Z; Weber, G, Springer, 2015, p. 297-306. Conference paper (Refereed)
  • 1607.
    Wilkinson, Tomas
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Lindström, Jonas
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of History.
    Brun, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Neural Ctrl-F: Segmentation-free Query-by-String Word Spotting in Handwritten Manuscript Collections (2017). Conference paper (Other academic)
  • 1608.
    Wilkinson, Tomas
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Lindström, Jonas
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of History.
    Brun, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Neural Ctrl-F: Segmentation-free query-by-string word spotting in handwritten manuscript collections (2017). In: 2017 IEEE International Conference on Computer Vision (ICCV), IEEE, 2017, p. 4443-4452. Conference paper (Refereed)
    Abstract [en]

    In this paper, we approach the problem of segmentation-free query-by-string word spotting for handwritten documents. In other words, we use methods inspired by computer vision and machine learning to search for words in large collections of digitized manuscripts. In particular, we are interested in historical handwritten texts, which are often far more challenging than modern printed documents. This task is important, as it provides people with a way to quickly find what they are looking for in large collections that are tedious and difficult to read manually. To this end, we introduce an end-to-end trainable model based on deep neural networks that we call Ctrl-F-Net. Given a full manuscript page, the model simultaneously generates region proposals and embeds these into a distributed word embedding space, where searches are performed. We evaluate the model on common benchmarks for handwritten word spotting, outperforming the previous state-of-the-art segmentation-free approaches by a large margin, and in some cases even segmentation-based approaches. One interesting real-life application of our approach is to help historians find and count specific words in court records that are related to women's sustenance activities and division of labor. We provide promising preliminary experiments that validate our method on this task.

  • 1609. Wiltschi, Klaus
    et al.
    Pinz, Axel
    Lindeberg, Tony
    KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    Classification of Carbide Distributions using Scale Selection and Directional Distributions (1997). In: Proc. 4th International Conference on Image Processing: ICIP'97, 1997, Vol. II, p. 122-125. Conference paper (Refereed)
    Abstract [en]

    This paper presents an automatic system for steel quality assessment, by measuring textural properties of carbide distributions. In current steel inspection, specially etched and polished steel specimen surfaces are classified manually under a light microscope, by comparisons with a standard chart. This procedure is basically two-dimensional, reflecting the size of the carbide agglomerations and their directional distribution. To capture these textural properties in terms of image features, we first apply a rich set of image-processing operations, including mathematical morphology, multi-channel Gabor filtering, and the computation of texture measures with automatic scale selection in linear scale-space. Then, a feature selector is applied to a 40-dimensional feature space, and a classification scheme is defined, which on a sample set of more than 400 images has classification performance values comparable to those of human metallographers. Finally, a fully automatic inspection system is designed, which actively selects the most salient carbide structure on the specimen surface for subsequent classification. The feasibility of the overall approach for future use in the production process is demonstrated by a prototype system. It is also shown how the presented classification scheme allows for the definition of a new reference chart in terms of quantitative measures.
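
    A minimal sketch of one ingredient mentioned above, a multi-channel Gabor filter bank for directional texture features, using scikit-image; the image and parameters are hypothetical, and the scale-selection and classification stages are omitted:

    ```python
    # Sketch: a small Gabor filter bank producing directional texture features,
    # one ingredient of the paper's 40-dimensional feature set. The image and
    # the filter-bank parameters are stand-ins chosen for illustration.
    import numpy as np
    from skimage.filters import gabor

    image = np.random.rand(256, 256)  # stand-in for a carbide micrograph

    features = []
    for frequency in (0.1, 0.2, 0.4):
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, imag = gabor(image, frequency=frequency, theta=theta)
            magnitude = np.hypot(real, imag)
            features.append([magnitude.mean(), magnitude.std()])  # per-channel stats
    feature_vector = np.asarray(features).ravel()
    print(feature_vector.shape)       # 3 frequencies x 4 orientations x 2 statistics
    ```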

  • 1610.
    Winkler Pettersson, Lars
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Human-Computer Interaction.
    Kjellin, Andreas
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Information Science, Human-Computer Interaction.
    Lind, Mats
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Information Science, Human-Computer Interaction.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Evaluating Collaborative Visualization of Spatial Data in Multi-Viewer Displays (2008). Manuscript (preprint) (Other academic)
  • 1611.
    Winkler Pettersson, Lars
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Human-Computer Interaction.
    Kjellin, Andreas
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media, Human-Computer Interaction.
    Lind, Mats
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media, Human-Computer Interaction.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    On the role of visual references in collaborative visualization (2010). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 9, no 2, p. 98-114. Article in journal (Refereed)
  • 1612.
    Winkler Pettersson, Lars
    et al.
    Information Technology, Uppsala University.
    Kjellin, Andreas
    Information Science, Uppsala University.
    Lind, Mats
    Information Science, Uppsala University.
    Seipel, Stefan
    Information Science, Uppsala University.
    On the role of visual references in collaborative visualization (2009). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 9, no 2, p. 98-114. Article in journal (Refereed)
    Abstract [en]

    Multi-Viewer Display Environments (MVDE) provide unique opportunities to present personalized information to several users concurrently in the same physical display space. MVDEs can support correct 3D visualizations to multiple users, present correctly oriented text and symbols to all viewers and allow individually chosen subsets of information in a shared context. MVDEs aim at supporting collaborative visual analysis, and when used to visualize disjoint information in partitioned visualizations they even necessitate collaboration. When solving visual tasks collaboratively in a MVDE, overall performance is affected not only by the inherent effects of the graphical presentation but also by the interaction between the collaborating users.

    We present results from an empirical study where we compared views lacking shared visual references across disjoint sets of information to views with mutually shared information. The potential benefits of 2D and 3D visualizations in a collaborative task were investigated, as well as the effects of partitioning visualizations, in terms of task performance, interaction behavior and clutter reduction. In our study of a collaborative task that required only a minimum of information to be shared, we found that partitioned views lacking shared visual references were significantly less efficient than integrated views. However, the study showed that subjects were equally capable of solving the task at low error levels in partitioned and integrated views. An explorative analysis revealed that the amount of visual clutter was reduced substantially in partitioned visualizations, whereas verbal and deictic communication between subjects increased. It also showed that the type of visualization (2D/3D) strongly affects interaction behavior. An interesting result is that collaboration on complex geo-time visualizations is actually as efficient in 2D as in 3D.

  • 1613.
    Winkler Pettersson, Lars
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Human-Computer Interaction.
    Seipel, Stefan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Collaborative Pixel-Accurate Interaction with PixelActiveSurface (2007). Conference paper (Other academic)
  • 1614.
    Wood, John
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Statistical Background Models with Shadow Detection for Video Based Tracking (2007). Independent thesis Basic level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A common problem when using background models to segment moving objects from video sequences is that the shadows cast by objects usually differ significantly from the background and therefore get detected as foreground. This causes several problems when extracting and labeling objects, such as object shape distortion and several objects merging together. The purpose of this thesis is to explore various possibilities to handle this problem.

    Three methods for statistical background modeling are reviewed. All methods work on a per-pixel basis: the first is based on approximating the median, the next on Gaussian mixture models, and the last on channel representation. It is concluded that all methods detect cast shadows as foreground.

    A study of existing methods to handle cast shadows has been carried out in order to gain knowledge on the subject and get ideas. A common approach is to transform the RGB-color representation into a representation that separates color into intensity and chromatic components in order to determine whether or not newly sampled pixel-values are related to the background. The color spaces HSV, IHSL, CIELAB, YCbCr, and a color model proposed in the literature (Horprasert et al.) are discussed and compared for the purpose of shadow detection. It is concluded that Horprasert's color model is the most suitable for this purpose.

    The thesis ends with a proposal of a method to combine background modeling using Gaussian mixture models with shadow detection using Horprasert's color model. It is concluded that, while not perfect, such a combination can be very helpful in segmenting objects and detecting their cast shadow.
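
    A minimal sketch of such a combination using OpenCV's Gaussian-mixture background subtractor; note that OpenCV's built-in shadow detection is a chromaticity/brightness heuristic rather than Horprasert's color model, so this only approximates the thesis proposal, and the video path is hypothetical:

    ```python
    # Sketch: GMM background subtraction with shadow labelling in OpenCV.
    # MOG2 marks shadow pixels as 127 in the foreground mask; this is a related
    # heuristic, not the Horprasert et al. color model used in the thesis.
    import cv2

    cap = cv2.VideoCapture("input.avi")   # hypothetical video file
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)    # 0 = background, 127 = shadow, 255 = foreground
        n_fg = int((mask == 255).sum())
        n_shadow = int((mask == 127).sum())
        print(f"foreground pixels: {n_fg}, shadow pixels: {n_shadow}")
    cap.release()
    ```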

  • 1615.
    Wretstam, Oskar
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Infrared image-based modeling and rendering (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Image-based modeling using visual images has undergone major development during the early parts of the 21st century. In this thesis a system for automated, uncalibrated scene reconstruction using infrared images is implemented and tested. An automated reconstruction system could serve to simplify thermal inspection or act as a demonstration tool. Thermal images will in general have lower resolution, less contrast and less high-frequency content than visual images. These characteristics of infrared images further complicate feature extraction and matching, key steps in the reconstruction process. To remedy this, preprocessing methods are suggested and tested as well. Infrared modeling also imposes additional demands on the reconstruction, as it is important to maintain the thermal accuracy of the images in the final product. Three main results are obtained from this thesis. Firstly, it is possible to obtain camera calibration and pose as well as a sparse point cloud reconstruction from an infrared image sequence using the suggested implementation. Secondly, the correlation of thermal measurements from the images used to reconstruct three-dimensional coordinates is presented and analyzed. Lastly, from the preprocessing evaluation it is concluded that the tested methods are not suitable: they increase the computational cost while the improvements in the model are not proportional.
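
    As an illustration only (not the thesis implementation), feature extraction and matching on low-contrast infrared frames could look like the following OpenCV snippet; the file paths, CLAHE step and ORB choice are assumptions, and the thesis in fact found its tested preprocessing not worthwhile:

    ```python
    # Sketch: keypoint detection and matching between two infrared frames.
    # CLAHE contrast enhancement and ORB are illustrative assumptions; the
    # thesis reports that its tested preprocessing did not pay off.
    import cv2

    img1 = cv2.imread("ir_frame_001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
    img2 = cv2.imread("ir_frame_002.png", cv2.IMREAD_GRAYSCALE)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img1, img2 = clahe.apply(img1), clahe.apply(img2)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} matches between the two frames")
    ```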

  • 1616. Wyatt, Jeremy L.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Brenner, Michael
    Hanheide, Marc
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristan, Matej
    Kruijff, Geert-Jan M.
    Lison, Pierre
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vrecko, Alen
    Zender, Hendrik
    Zillich, Michael
    Skocaj, Danijel
    Self-Understanding and Self-Extension: A Systems and Representational Approach (2010). In: IEEE Transactions on Autonomous Mental Development, ISSN 1943-0604, Vol. 2, no 4, p. 282-303. Article in journal (Refereed)
    Abstract [en]

    There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.

  • 1617.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, UASTECH - Autonomous Unmanned Aircraft Systems Technologies. Linköping University, The Institute of Technology.
    Selected Aspects of Navigation and Path Planning in Unmanned Aircraft Systems (2011). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Unmanned aircraft systems (UASs) are an important future technology with early generations already being used in many areas of application encompassing both military and civilian domains. This thesis proposes a number of integration techniques for combining control-based navigation with more abstract path planning functionality for UASs. These techniques are empirically tested and validated using an RMAX helicopter platform used in the UASTechLab at Linköping University. Although the thesis focuses on helicopter platforms, the techniques are generic in nature and can be used in other robotic systems.

    At the control level, a navigation task is executed by a set of control modes. A framework based on the abstraction of hierarchical concurrent state machines for the design and development of hybrid control systems is presented. The framework is used to specify reactive behaviors and for sequentialisation of control modes. Selected examples of control systems deployed on UASs are presented. Collision-free paths executed at the control level are generated by path planning algorithms. We propose a path replanning framework extending the existing path planners to allow dynamic repair of flight paths when new obstacles or no-fly zones obstructing the current flight path are detected. Additionally, a novel approach to selecting the best path repair strategy based on a machine learning technique is presented. A prerequisite for safe navigation in a real-world environment is an accurate geometrical model. As a step towards building accurate 3D models onboard UASs, initial work on the integration of a laser range finder with a helicopter platform is also presented.
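
    A purely illustrative sketch of sequencing control modes with a flat state machine; the thesis framework uses hierarchical concurrent state machines and is far richer, and the mode and event names below are invented:

    ```python
    # Sketch: a flat state machine sequencing navigation control modes.
    # Mode/event names are invented; the hierarchy and concurrency of the
    # thesis' HCSM framework are not represented here.
    TRANSITIONS = {
        ("takeoff", "at_altitude"): "hover",
        ("hover", "path_received"): "path_following",
        ("path_following", "obstacle_detected"): "replanning",
        ("replanning", "new_path_ready"): "path_following",
        ("path_following", "goal_reached"): "hover",
        ("hover", "land_requested"): "landing",
    }

    def run(events, mode="takeoff"):
        for event in events:
            mode = TRANSITIONS.get((mode, event), mode)  # ignore events with no transition
            print(f"{event:18s} -> {mode}")
        return mode

    run(["at_altitude", "path_received", "obstacle_detected",
         "new_path_ready", "goal_reached", "land_requested"])
    ```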

    The combination of the techniques presented provides another step towards building comprehensive and robust navigation systems for future UASs.

  • 1618.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Algorithms for Applied Digital Image Cytometry (2003). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Image analysis can provide genetic as well as protein-level information from fluorescence-stained fixed or living cells without losing tissue morphology. Analysis of the spatial, spectral, and temporal distribution of fluorescence can reveal important information at the single-cell level. This is in contrast to most other methods for cell analysis, which do not account for inter-cellular variation. Flow cytometry enables single-cell analysis, but tissue morphology is lost in the process, and temporal events cannot be observed.

    The need for reproducibility, speed and accuracy calls for computerized methods for cell image analysis, i.e., digital image cytometry, which is the topic of this thesis.

    Algorithms for cell-based screening are presented and applied to evaluate the effect of insulin on translocation events in single cells. This type of algorithms could be the basis for high-throughput drug screening systems, and have been developed in close cooperation with biomedical industry.

    Image based studies of cell cycle proteins in cultured cells and tissue sections show that cyclin A has a well preserved expression pattern while the expression pattern of cyclin E is disturbed in tumors. The results indicate that analysis of cyclin E expression provides additional valuable information for cancer prognosis, not visible by standard tumor grading techniques.

    Complex chains of events and interactions can be visualized by simultaneous staining of different proteins involved in a process. A combination of image analysis and staining procedures that allow sequential staining and visualization of large numbers of different antigens in single cells is presented. Preliminary results show that at least six different antigens can be stained in the same set of cells.

    All image cytometry requires robust segmentation techniques. Clustered objects, background variation, as well as internal intensity variations complicate the segmentation of cells in tissue. Algorithms for segmentation of 2D and 3D images of cell nuclei in tissue by combining intensity, shape, and gradient information are presented.

    The algorithms and applications presented show that fast, robust, and automatic digital image cytometry can increase the throughput and power of image based single cell analysis.

  • 1619.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    High throughput phenotyping of model organisms (2012). In: BioImage Informatics 2012 / [ed] Fuhui Long, Ivo F. Sbalzarini, Pavel Tomancak and Michael Unser, Dresden, Germany, 2012, p. 45-45. Conference paper (Refereed)
    Abstract [en]

    Microscopy has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals – particularly diseases that involve organ systems and multi-cellular interactions, such as metabolism, infection, vascularization, and development. Two model organisms compatible with high-throughput phenotyping are the 1 mm long roundworm C. elegans and the transparent embryo of the zebrafish (Danio rerio). C. elegans is tractable as it can be handled using robotics, multi-well plates, and flow-sorting systems similar to those used for high-throughput screening of cells. The worm is also transparent throughout its lifecycle and is attractive as a model for genetic functions, as its genes can be turned off by RNA interference. Zebrafish embryos have also proved to be a vital model organism in many fields of research, including organismal development, cancer, and neurobiology. Zebrafish, being vertebrates, exhibit features common to phylogenetically higher organisms such as a true vasculature and central nervous system.

    Basically any phenotypic change that can be visually observed (in untreated or stained worms and fish) can also be imaged. However, visual assessment of phenotypic variation is tedious and prone to error as well as observer bias. Screening at high throughput limits image resolution and time-lapse information. Still, the images are typically rich in information, and the number of images for a standard screen often exceeds 100 000, ruling out visual inspection. Automated image analysis platforms will increase the throughput of data analysis, improve the robustness of phenotype scoring, and allow for reliable application of statistical metrics for evaluating assay performance and identifying active compounds.

    We have developed a platform for automated analysis of C. elegans assays, and are currently developing tools for analysis of zebrafish embryos. Our worm analysis tools, collected in the WormToolbox, can identify individual worms even as they cross and overlap, and quantify a large number of features, including mapping of reporter protein expression patterns to the worm anatomy. We have evaluated the tools on screens for novel treatments of infectious disease and on genetic perturbations affecting fat metabolism. The WormToolbox is part of the free and open-source CellProfiler software, which also includes methods for image assay quality control and feature selection by machine learning.

  • 1620.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Image analysis in fluorescence microscopy: the human eye is not enough (2008). In: Medicinteknikdagar 2008: Mötesplats för aktörer inom forskning, sjukvård och industri [Meeting place for actors in research, healthcare and industry], Proceedings, Medicinteknikdagarna 2008: Nils Löfgren, Högskolan i Borås, 2008. Conference paper (Other (popular science, discussion, etc.))

  • 1621.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Segmentation of cell nuclei in tissue by combining seeded watersheds with gradient information (2003). In: Proceedings of SCIA-03: Scandinavian Conference on Image Analysis, 2003, p. 408-414. Conference paper (Refereed)
    Abstract [en]

    This paper deals with the segmentation of cell nuclei in tissue. We present a region-based segmentation method where seeds representing object- and background-pixels are created by morphological filtering. The seeds are then used as a starting point for watershed segmentation of the gradient magnitude of the original image. Over-segmented objects are thereafter merged based on the gradient magnitude between the adjacent objects. The method was tested on a total of 726 cell nuclei in 7 images, and 95% correct segmentation was achieved.
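
    A minimal sketch of the seeded-watershed idea described above, using scikit-image: seeds from simple morphological filtering, then watershed on the gradient magnitude; the thresholds are arbitrary and the merging step from the paper is omitted:

    ```python
    # Sketch: seeded watershed on the gradient magnitude. The seed generation
    # here (threshold + opening) is a simplification of the morphological
    # filtering in the paper, and the merging of over-segmented objects is omitted.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel, threshold_otsu
    from skimage.morphology import binary_opening, disk
    from skimage.segmentation import watershed

    image = np.random.rand(256, 256)                  # stand-in for a nuclei image
    gradient = sobel(image)

    foreground = binary_opening(image > threshold_otsu(image), disk(3))
    background = binary_opening(image < threshold_otsu(image), disk(3))

    markers, _ = ndi.label(foreground)                # one seed label per object
    markers[background] = markers.max() + 1           # one extra label for background

    labels = watershed(gradient, markers)
    print(labels.max(), "regions (including background)")
    ```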

  • 1622.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Watershed techniques for segmentation in image cytometry (2003). In: Proceedings of the 1st International Cytomics Conference: Newport, Wales, United Kingdom, 2003. Conference paper (Other scientific)
  • 1623.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Erlandsson, Fredrik
    Bengtsson, Ewert
    Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Zetterberg, Anders
    Sequential immunofluorescence staining and image analysis for detection of large numbers of antigens in individual cell nuclei (2002). In: Cytometry, ISSN 0196-4763, Vol. 47, no 1, p. 32-41. Article in journal (Refereed)
    Abstract [en]

    Background

    Visualization of more than one antigen by multicolor immunostaining is often desirable or even necessary to explore spatial and temporal relationships of functional significance. Previously presented staining protocols have been limited to the visualization of three or four antigens.

    Methods

    Immunofluorescence staining was performed both on slices of formalin-fixed tissue and on cultured cells, and the samples were imaged by fluorescence microscopy. The primary and secondary antibodies, as well as the fluorophores, were thereafter removed using a combination of denaturation and elution techniques. After removal of the fluorescence stain, a new immunofluorescence staining was performed, visualizing a new set of antigens. The procedure was repeated up to three times. A method for image registration combined with segmentation, extraction of data, and cell classification was developed for efficient and objective analysis of the image data.

    Results

    The results show that immunofluorescence stains in many cases can be repeatedly removed without major effects on the antigenicity of the sample.

    Conclusions

    The concentration of at least six different antigens in each cell can thus be measured semiquantitatively using sequential immunofluorescence staining and the described image analysis techniques. The number of antigens that can be visualized in a single sample is considerably increased by the presented protocol.

  • 1624.
    Wählby, Carolina
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Erlandsson, Fredrik
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Analysis of cells using image data from sequential immunofluorescence staining experiments (2001). In: 5th Korea-Germany Joint Workshop on Advanced Medical Image Processing, Seoul, Korea, 2001. Conference paper (Other academic)
  • 1625.
    Wählby, Carolina
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Erlandsson, Fredrik
    Nyberg, Karl
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Multiple tissue antigen analysis by sequential immunofluorescence staining and multi-dimensional image analysis (2001). In: Proceedings of SCIA-01 (Scandinavian Conference on Image Analysis), 2001, p. 25-31. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel method for sequential immunofluorescence staining, which, in combination with 3D image registration and segmentation, can be used to increase the number of antigens that can be observed simultaneously in single cells in tissue sections. Visualization of more than one antigen by multicolor immunostaining is often desirable or even necessary, both for quantitative studies and to explore spatial relationships of functional significance. Sequential staining, meaning repeated application and removal of fluorescence markers, greatly increases the number of different antigens that can be visualized and quantified in single cells using digital imaging fluorescence microscopy. Quantification and efficient objective analysis of the image data requires digital image analysis. A method for 3D image registration combined with 2D and 3D segmentation and 4D extraction of data is described.

  • 1626.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Erlandsson, Fredrik
    Zetterberg, Anders
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Multi-dimensional image analysis of sequential immunofluorescence staining (2001). In: 7th European Society for Analytical Cellular Pathology Congress (ESACP 2001), Caen, France, 2001, p. 61. Conference paper (Other scientific)
  • 1627.
    Wählby, Carolina
    et al.
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Karlsson, Patrick
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Henriksson, Sara
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology.
    Larsson, Chatarina
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology.
    Nilsson, Mats
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Finding cells, finding molecules, finding patterns (2008). In: International Journal of Signal and Imaging Systems Engineering, ISSN 1748-0698, Vol. 1, no 1, p. 11-17. Article in journal (Refereed)
    Abstract [en]

    Many modern molecular labelling techniques result in bright point signals. Signals from molecules that are detected directly inside a cell can be captured by fluorescence microscopy. Signals representing different types of molecules may be randomly distributed in the cells or show systematic patterns, indicating that the corresponding molecules have specific, non-random localisations and functions in the cell. Assessing this information requires high-speed, robust image segmentation followed by signal detection and, finally, pattern analysis. We present and discuss these types of methods and show an example of how the distribution of different variants of mitochondrial DNA can be analysed.
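
    A minimal sketch of the point-signal detection step, using a Laplacian-of-Gaussian blob detector from scikit-image on a synthetic stand-in image; cell segmentation and the downstream pattern analysis are not shown:

    ```python
    # Sketch: detect bright point-like signals with a Laplacian-of-Gaussian
    # blob detector. The image and parameters are stand-ins, not data or
    # settings from the paper.
    import numpy as np
    from skimage.feature import blob_log

    rng = np.random.default_rng(0)
    image = np.zeros((256, 256))
    ys, xs = rng.integers(10, 246, size=(2, 40))      # 40 synthetic point signals
    image[ys, xs] = 1.0
    image = np.clip(image + 0.05 * rng.random(image.shape), 0, 1)

    blobs = blob_log(image, min_sigma=1, max_sigma=4, num_sigma=8, threshold=0.1)
    print(f"detected {len(blobs)} signals")
    print(blobs[:3])                                  # rows are (row, col, sigma)
    ```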

  • 1628.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Karlsson, Patrick
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Thorlin, Thorleif
    Althoff, Karin
    Degerman, Johan
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Gustavsson, Tomas
    Time-lapse microscopy and image analysis for tracking stem cell migration (2004). In: Proceedings of the Swedish Symposium on Image Analysis SSBA 2004, 2004, p. 118-121. Conference paper (Other scientific)
  • 1629.
    Wählby, Carolina
    et al.
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Nyström, Ingela
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Interfaculty Units, Centre for Image Analysis. Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Robust methods for image segmentation and measurements (2003). In: Proceedings for Modern Methods for Quantitative Metallography, 2003. Conference paper (Refereed)
  • 1630.
    Wählby, Carolina
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Sintorn, Ida-Maria
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Erlandsson, Fredrik
    Borgefors, Gunilla
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Combining intensity, edge, and shape information for 2D and 3D segmentation of cell nuclei in tissue sections (2004). In: Journal of Microscopy, ISSN 0022-2720, E-ISSN 1365-2818, Vol. 215, no 1, p. 67-76. Article in journal (Refereed)
    Abstract [en]

    We present a region-based segmentation method in which seeds representing both object and background pixels are created by combining morphological filtering of both the original image and the gradient magnitude of the image. The seeds are then used as starting points for watershed segmentation of the gradient magnitude image. The fully automatic seeding is done in a generous fashion, so that at least one seed will be set in each foreground object. If more than one seed is placed in a single object, the watershed segmentation will lead to an initial over-segmentation, i.e. a boundary is created where there is no strong edge. Thus, the result of the initial segmentation is further refined by merging based on the gradient magnitude along the boundary separating neighbouring objects. This step also makes it easy to remove objects with poor contrast. As a final step, clusters of nuclei are separated, based on the shape of the cluster. The number of input parameters to the full segmentation procedure is only five. These parameters can be set manually using a test image and thereafter be used on a large number of images created under similar imaging conditions. This automated system was verified by comparison with manual counts from the same image fields. About 90% correct segmentation was achieved for two- as well as three-dimensional images.

  • 1631.
    Wählby (née Linnman), Carolina
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Vondrus, Mikael
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Jarkrans, Torsten
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Björkesten, Lennart
    Automatic cytoplasm segmentation of fluorescence labelled cells (2000). In: Symposium on Image Analysis - SSAB 2000, 2000, p. 29-32. Conference paper (Other academic)
  • 1632.
    Wälivaara, Marcus
    Linköping University, Department of Electrical Engineering, Computer Vision.
    General Object Detection Using Superpixel Preprocessing2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The objective of this master’s thesis work is to evaluate the potential benefit of a superpixel preprocessing step for general object detection in a traffic environment. The effects of different superpixel parameters on object detection performance, as well as the benefit of including depth information when generating the superpixels, are investigated.

In this work, three superpixel algorithms are implemented and compared, including a proposal for an improved version of the popular Simple Linear Iterative Clustering (SLIC) superpixel algorithm. The proposed improved algorithm utilises a coarse-to-fine approach which outperforms the original SLIC for high-resolution images. An object detection algorithm is also implemented and evaluated. The algorithm makes use of depth information obtained by a stereo camera to extract superpixels corresponding to foreground objects in the image. Hierarchical clustering is then applied, with the segments formed by the clustered superpixels indicating potential objects in the input image.

The object detection algorithm managed to detect on average 58% of the objects present in the chosen dataset. It performed especially well for detecting pedestrians and other objects close to the car. Altering the density distribution of the superpixels in the image yielded an increase in detection rate, and this could be achieved both with and without utilising depth information. It was also shown that the use of superpixels greatly reduces the amount of computation needed by the algorithm, indicating that a real-time implementation is feasible.
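
    As a rough sketch of the detection idea described in this abstract (superpixels, a depth-based foreground test, and hierarchical clustering of the remaining superpixels), the Python fragment below uses scikit-image's SLIC and scikit-learn's agglomerative clustering. All thresholds, the feature choice (normalised centroid plus depth) and the linkage are illustrative assumptions, not the thesis configuration.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.cluster import AgglomerativeClustering

        def detect_objects(rgb, depth, n_segments=800, max_depth=20.0, link_thresh=0.1):
            h, w = depth.shape
            labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
            feats, ids = [], []
            for sp in np.unique(labels):
                mask = labels == sp
                d = np.nanmean(depth[mask])
                if d < max_depth:                       # crude foreground test on depth
                    ys, xs = np.nonzero(mask)
                    feats.append([xs.mean() / w, ys.mean() / h, d / max_depth])
                    ids.append(sp)
            if len(feats) < 2:
                return labels, {}
            clustering = AgglomerativeClustering(
                n_clusters=None, distance_threshold=link_thresh, linkage="single")
            groups = {}
            for sp, c in zip(ids, clustering.fit_predict(np.array(feats))):
                groups.setdefault(c, []).append(sp)     # each group ~ one object hypothesis
            return labels, groups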

  • 1633.
    Xiao, Yi
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Pham, Tuan D
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Jia, Xiuping
    School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT, Australia.
    Zhou, Xiaobo
    Centre for Biotechnology and Informatics, The Methodist Hospital Research Institute & Cornell University, Houston, TX, USA.
    Yan, Hong
    Department of Electronic Engineering, City University of Hong Kong, Hong Kong.
    Correlation-based cluster-space transform for major adverse cardiac event prediction2010In: IEEE International Conference on Systems Man and Cybernetics (SMC), Institute of Electrical and Electronics Engineers (IEEE), 2010, p. 2003-2007Conference paper (Refereed)
    Abstract [en]

This paper investigates the effect of pattern variation in protein profiles on the identification of disease-specific biomarkers. A correlation-based cluster-space transform is applied to mass spectral data for predicting major adverse cardiac events (MACE). Training and testing data are each transformed into cluster spaces by correlation-distance-based clustering. Data in a testing cluster that falls into a pair of training clusters is classified by a supervised classifier. Experimental results show that proteomic spectra of MACE which vary with certain patterns can be separated by the correlation-based clustering. The cluster-space transform allows better classification accuracy than the single-clustered-class method for separating disease and healthy samples.
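
    A minimal sketch of a correlation-based cluster-space transform in the spirit of the abstract: training spectra are clustered with correlation distance, every sample is re-represented by its correlation to the cluster centroids, and a standard supervised classifier is trained in that space. The number of clusters, the linkage and the SVM are assumptions of this sketch, not the method's actual settings.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist
        from sklearn.svm import SVC

        def cluster_space_classify(X_train, y_train, X_test, n_clusters=4):
            # Correlation-distance hierarchical clustering of the training spectra.
            Z = linkage(pdist(X_train, metric="correlation"), method="average")
            cluster_ids = fcluster(Z, t=n_clusters, criterion="maxclust")
            centroids = np.vstack([X_train[cluster_ids == c].mean(axis=0)
                                   for c in np.unique(cluster_ids)])

            def to_cluster_space(X):
                # Feature i = Pearson correlation of the sample with centroid i.
                return np.array([[np.corrcoef(x, c)[0, 1] for c in centroids] for x in X])

            clf = SVC(kernel="rbf").fit(to_cluster_space(X_train), y_train)
            return clf.predict(to_cluster_space(X_test))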

  • 1634.
    Yan, Jeff
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Bourquard, Aurelien
    MIT, MA 02139 USA.
POSTER: Who was Behind the Camera? - Towards Some New Forensics2017In: CCS '17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Association for Computing Machinery, 2017, p. 2595-2597Conference paper (Refereed)
    Abstract [en]

We motivate a new line of image forensics and propose a novel approach to photographer identification, a rarely explored authorship attribution problem. A preliminary proof-of-concept study shows the feasibility of our method. Our contribution is a forensic method for photographer de-anonymisation; the method also poses a novel privacy threat.

  • 1635. Yan, Xiaoyong
    et al.
    Minnhagen, Petter
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings2015In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 5, article id e0125592Article in journal (Refereed)
    Abstract [en]

The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters has quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.
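
    Since the RGF prediction is fixed by the three a priori quantities (M, N, k_max), they are easy to extract from a tokenised text; the snippet below shows this for a whitespace-tokenised string (the tokenisation is a simplifying assumption, and for Chinese characters one would iterate over characters instead).

        from collections import Counter

        def rgf_inputs(text):
            tokens = text.split()                # or a list of characters for Chinese texts
            counts = Counter(tokens)
            M = sum(counts.values())             # total number of words/characters
            N = len(counts)                      # number of distinct words/characters
            k_max = counts.most_common(1)[0][1]  # repetitions of the most common one
            return M, N, k_max

        print(rgf_inputs("to be or not to be"))  # (6, 4, 2)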

  • 1636. Yeh, T.
    et al.
    Tollmar, Konrad
    MIT CSAIL, Cambridge.
    Darrell, T.
    Searching the Web with mobile images for location recognition2004In: PROCEEDINGS OF THE 2004 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 2, 2004, Vol. 2, p. 76-81Conference paper (Refereed)
    Abstract [en]

    We describe an approach to recognizing location from mobile devices using image-based Web search. We demonstrate the usefulness of common image search metrics applied on images captured with a camera-equipped mobile device to find matching images on the World Wide Web or other general-purpose databases. Searching the entire Web can be computationally overwhelming, so we devise a hybrid image-and-keyword searching technique. First, image-search is performed over images and links to their source Web pages in a database that indexes only a small fraction of the Web. Then, relevant keywords on these Web pages are automatically identified and submitted to an existing text-based search engine (e.g. Google) that indexes a much larger portion of the Web. Finally, the resulting image set is filtered to retain images close to the original query. It is thus possible to efficiently search hundreds of millions of images that are not only textually related but also visually relevant. We demonstrate our approach on an application allowing users to browse Web pages matching the image of a nearby location.
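
    The final filtering step, retaining candidate images that are visually close to the original query, can be illustrated with a simple global similarity measure; the sketch below uses a colour-histogram intersection as a stand-in for the "common image search metrics" mentioned in the abstract (the choice of metric and the bin count are assumptions).

        import numpy as np

        def colour_hist(rgb, bins=8):
            # 3-D colour histogram of an RGB image with values in [0, 255].
            hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            return hist.ravel() / hist.sum()

        def rank_candidates(query_rgb, candidate_rgbs):
            # Histogram intersection: higher means more similar to the query photo.
            q = colour_hist(query_rgb)
            scores = [np.minimum(q, colour_hist(c)).sum() for c in candidate_rgbs]
            return np.argsort(scores)[::-1]      # indices of candidates, best first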

  • 1637. Yeh, Tom
    et al.
    Grauman, Kristen
    Tollmar, Konrad
    Darrell, Trevor
    A picture is worth a thousand keywords: image-based object search on a mobile platform2005In: CHI ’05 extended abstracts on Human factors in computing systems, 2005, p. 2025-2028Conference paper (Refereed)
    Abstract [en]

    Finding information based on an object’s visual appearance is useful when specific keywords for the object are not known. We have developed a mobile image-based search system that takes images of objects as queries and finds relevant web pages by matching them to similar images on the web. Image-based search works well when matching full scenes, such as images of buildings or landmarks, and for matching objects when the boundary of the object in the image is available. We demonstrate the effectiveness of a simple interactive paradigm for obtaining a segmented object boundary, and show how a shape-based image matching algorithm can use the object outline to find similar images on the web.
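
    As a hedged illustration of matching by object outline, the sketch below compares binary object masks via Hu moment invariants, a standard rotation- and scale-invariant shape signature; it is a stand-in, not the shape matcher used in the paper.

        import numpy as np
        from skimage.measure import moments_central, moments_hu, moments_normalized

        def hu_signature(mask):
            # Hu moment invariants of a binary object mask.
            mu = moments_central(mask.astype(float))
            return moments_hu(moments_normalized(mu))

        def shape_distance(mask_a, mask_b):
            # Log-scaled Hu moments keep the widely different magnitudes comparable.
            a, b = hu_signature(mask_a), hu_signature(mask_b)
            return np.sum(np.abs(np.sign(a) * np.log(np.abs(a) + 1e-12) -
                                 np.sign(b) * np.log(np.abs(b) + 1e-12)))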

  • 1638. Yeh, Tom
    et al.
    Tollmar, Konrad
    Darrell, Trevor
    IDeixis: image-based Deixis for finding location-based information2004In: CHI ’04 extended abstracts on Human factors in computing systems, 2004, p. 781-782Conference paper (Refereed)
    Abstract [en]

    We demonstrate an image-based approach to specifying location and finding location-based information from camera-equipped mobile devices. We introduce a point-by-photograph paradigm, where users can specify a location simply by taking pictures. Our technique uses content-based image retrieval methods to search the web or other databases for matching images and their source pages to find relevant location-based information. In contrast to conventional approaches to location detection, our method can refer to distant locations and does not require any physical infrastructure beyond mobile internet service. We have developed a prototype on a camera phone and conducted user studies to demonstrate the efficacy of our approach compared to other alternatives.

  • 1639.
    Yu, Donggang
    et al.
    University of Newcastle, NSW 2308, Australia.
    Jin, Jesse S
    University of Newcastle, NSW 2308, Australia.
    Luo, Suhuai
    University of Newcastle, NSW 2308, Australia.
    Lai, Wei
    University of Technology Hawthorn, VIC3122, Australia.
    Park, Mira
    University of Newcastle, NSW 2308, Australia.
    Pham, Tuan D
    The University of New South Wales Canberra,ACT2600,Australia.
    Shape analysis and recognition based on skeleton and morphological structure2010In: 5th European Conference onColour in Graphics, Imaging, and Vision12th International Symposium onMultispectral Colour Science, 2010, p. 118-123Conference paper (Refereed)
    Abstract [en]

This paper presents a novel and effective method for shape analysis and recognition based on the skeleton and morphological structure. A series of preprocessing algorithms, smooth following and linearization, is introduced, and a series of morphological structural points of the image contour is extracted and merged. A series of basic shapes and a main shape of the object image are described and segmented based on the skeleton and morphological structure. The object shape is then efficiently analyzed and recognized from the extracted basic shapes and main shape. Compared with other methods, the proposed method does not require a training set. The new method can also be used to analyze and recognize the structure of any shape, and it places no particular requirements on the processed image data set. The new method can be used in image analysis and intelligent recognition techniques, applications, systems and tools.
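
    A minimal sketch of the chain-code machinery referred to above: Freeman chain codes are computed along an ordered 8-connected contour, their differences give the local direction change, and points with a sharp turn are flagged as candidate morphological structural points. The turning threshold and the exact definition of a structural point are assumptions of this sketch.

        import numpy as np

        # Map a step (dy, dx) between neighbouring contour pixels to a Freeman code 0-7.
        FREEMAN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                   (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

        def structural_points(contour, min_turn=2):
            # `contour` is an ordered list of (row, col) points, consecutive points 8-adjacent.
            codes = [FREEMAN[tuple(np.subtract(contour[i + 1], contour[i]))]
                     for i in range(len(contour) - 1)]
            points = []
            for i in range(1, len(codes)):
                diff = (codes[i] - codes[i - 1]) % 8    # difference chain code
                turn = min(diff, 8 - diff)              # magnitude of the direction change
                if turn >= min_turn:
                    points.append(tuple(contour[i]))    # sharp turn -> structural point
            return codes, points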

  • 1640.
    Yu, Donggang
    et al.
    University of Newcastle, NSW, Australia.
    Jin, Jesse S
    University of Newcastle, NSW, Australia.
    Luo, Suhuai
    University of Newcastle, NSW, Australia.
    Pham, Tuan D
    The University of New South Wales, Canberra, ACT, Australia.
    Lai, Wei
    Swinburne University of Technology, Hawthorn, VIC, Australia.
    Description, Recognition and Analysis of Biological Images2010In: CP1210, 2009 International Symposium on Computational Models for Life Sciences (CMLS ’09) / [ed] Tuan Pham; Xiaobo Zhou, American Institute of Physics (AIP), 2010, Vol. 1210, p. 23-42Conference paper (Other academic)
    Abstract [en]

Description, recognition and analysis of biological images play an important role in helping humans describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications. One application is the description, recognition and analysis of color flower images. The other concerns the dynamic description, recognition and analysis of cell-cycle images.

  • 1641.
    Yu, Donggang
    et al.
    Bioinformatics Applications Research Centre, James Cook University, Australia.
    Pham, Tuan D
Bioinformatics Applications Research Centre, James Cook University, Australia.
    Image Pattern Recognition-Based Morphological Structure and Applications2008In: Pattern recognition technologies and applications : recent advances / [ed] Brijesh Verma; Michael Blumenstein, 2008, p. 48-Chapter in book (Refereed)
    Abstract [en]

This chapter describes a new pattern recognition method: pattern recognition-based morphological structure. First, smooth following and linearization are introduced based on difference chain codes. Second, morphological structural points are described in terms of the smooth-followed contours and linearized lines, and the patterns of morphological structural points and their properties are given. Morphological structural points are the basic tools for pattern recognition-based morphological structure. Furthermore, we discuss how the morphological structure can be used to recognize and classify images. One application is document image processing and recognition: the analysis and recognition of broken handwritten digits. Another is dynamic analysis and recognition of cell-cycle screening based on morphological structures. Finally, a conclusion is given, including advantages, disadvantages, and future research.

  • 1642.
    Yu, Donggang
    et al.
    James Cook University, Townsville, QLD 4811, Australia .
    Pham, Tuan D
    James Cook University, Townsville, QLD 4811, Australia .
    Yan, Hong
    City University of Hong Kong, Kowloon, Hong Kong .
    Lai, Wei
    Swinburne University of Technology, Melborne, VIC 3122, Australia .
    Crane, Denis I
    Griffith University, Nathan, Qld 4111, Australia .
    Segmentation and reconstruction of cultured neuron skeleton2007In: COMPUTATIONAL MODELS FOR LIFE SCIENCES—CMLS’07, 2007, Vol. 952, p. 21-30Conference paper (Refereed)
    Abstract [en]

One approach to investigating neural death is through systematic studies of the changing morphology of cultured brain neurons in response to cellular challenges. Image segmentation and reconstruction methods developed to date to analyze such changes have been limited by the low contrast of the cells. In this paper we present new algorithms that successfully circumvent these problems. The binarization method is based on a logical analysis of grey-level and distance differences in the images. Spurious regions are detected and removed through the use of a hierarchical window filter. The skeletons of the binary cell images are extracted. The extension direction and connection points of broken cell skeletons are automatically determined, and the broken neural skeletons are reconstructed. Spurious strokes are deleted based on prior knowledge of the cells. The efficacy of the developed algorithms is demonstrated here through a test on cultured brain neurons from newborn mice.
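
    Two of the ingredients mentioned in the abstract, binarisation with removal of small spurious regions followed by skeleton extraction, can be sketched with scikit-image as below; the threshold choice and minimum region size are illustrative, and the reconnection of broken skeleton branches described in the paper is not reproduced here.

        from skimage import filters, morphology

        def neuron_skeleton(gray, min_region=64):
            binary = gray > filters.threshold_otsu(gray)                  # coarse binarisation
            cleaned = morphology.remove_small_objects(binary, min_size=min_region)
            return morphology.skeletonize(cleaned)                        # 1-pixel-wide skeleton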

  • 1643.
    Yu, Lu
    et al.
    Northwestern Polytech Univ, Peoples R China; Univ Autonoma Barcelona, Spain.
    Zhang, Lichao
    Univ Autonoma Barcelona, Spain.
    van de Weijer, Joost
    Univ Autonoma Barcelona, Spain.
    Khan, Fahad
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Cheng, Yongmei
    Northwestern Polytech Univ, Peoples R China.
    Alejandro Parraga, C.
    Univ Autonoma Barcelona, Spain.
    Beyond Eleven Color Names for Image Understanding2018In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 29, no 2, p. 361-373Article in journal (Refereed)
    Abstract [en]

Color description is one of the fundamental problems of image understanding. One of the popular ways to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This could limit the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with respect to the basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification and image classification.
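
    A simplified sketch of a colour-name descriptor: every pixel is assigned to its nearest colour-name prototype in RGB and a normalised histogram over the names is returned. The prototype RGB values are rough illustrative choices, and the paper's learned, probabilistic name assignments are replaced here by a hard nearest-prototype rule.

        import numpy as np

        BASIC_NAMES = {"black": (0, 0, 0), "blue": (0, 0, 255), "brown": (139, 69, 19),
                       "grey": (128, 128, 128), "green": (0, 128, 0), "orange": (255, 165, 0),
                       "pink": (255, 192, 203), "purple": (128, 0, 128), "red": (255, 0, 0),
                       "white": (255, 255, 255), "yellow": (255, 255, 0)}

        def colour_name_descriptor(rgb_image):
            names = list(BASIC_NAMES)
            protos = np.array([BASIC_NAMES[n] for n in names], dtype=float)
            pixels = rgb_image.reshape(-1, 3).astype(float)
            # Nearest prototype in RGB space for every pixel.
            nearest = np.argmin(((pixels[:, None, :] - protos[None]) ** 2).sum(-1), axis=1)
            hist = np.bincount(nearest, minlength=len(names)).astype(float)
            return names, hist / hist.sum()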

  • 1644. Yuan, Qilong
    et al.
    Chen, I-Ming
    Lembono, Teguh Santoso
    Landén, Simon Nelson
    KTH, School of Industrial Engineering and Management (ITM).
    Malmgren, Victor
    KTH, School of Industrial Engineering and Management (ITM).
    Strategy for robot motion and path planning in robot taping2016In: FRONTIERS OF MECHANICAL ENGINEERING, ISSN 2095-0233, Vol. 11, no 2, p. 195-203Article in journal (Refereed)
    Abstract [en]

Covering objects with masking tape is a common process for surface protection in processes such as spray painting, plasma spraying and shot peening. Manual taping is tedious and requires considerable effort from the workers. The taping process requires a correct surface-covering strategy and proper attachment of the masking tape for efficient surface protection. We have introduced an automatic robot taping system consisting of a robot manipulator, a rotating platform, a 3D scanner and specially designed taping end-effectors. This paper mainly discusses the surface-covering strategies for different classes of geometries. The methods and corresponding taping tools are introduced for taping the following classes of surfaces: cylindrical/extended surfaces, freeform surfaces without grooves, surfaces with grooves, and rotationally symmetrical surfaces. A collision-avoidance algorithm is introduced for the robot taping manipulation. With further improvements in segmenting the surfaces of taped parts and in tape-cutting mechanisms, the taping tool and taping methodology can be combined into a practical taping package that assists humans in this tedious and time-consuming work.

  • 1645.
    Zagal, Juan Cristobal
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Björkman, Eva
    Lindeberg, Tony
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Roland, P.
Significance determination for the scale-space primal sketch by comparison of statistics of scale-space blob volumes computed from PET signals vs. residual noise2000In: HBM'00, published in NeuroImage, Vol. 11, no 5, 2000, p. 493-493Conference paper (Refereed)
    Abstract [en]

A dominant approach to brain mapping is to define functional regions in the brain by analyzing brain activation images obtained by PET or fMRI. In [1], it has been shown that the scale-space primal sketch provides a useful tool for such analysis. Attractive properties of this method are that it makes only a few assumptions about the data and that the process for extracting activations is fully automatic.

    In the present version of the scale-space primal sketch, however, there is no method for determining p-values. The purpose here is to present a new methodology for addressing this question, by introducing a descriptor referred to as the -curve, which serves as a first step towards determining the probability of false positives, i.e. alpha.
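
    The underlying statistical idea, comparing blob volumes from the PET signal with blob volumes from residual noise, can be sketched as an empirical p-value computation: each signal blob is assigned the fraction of noise blobs that are at least as large. How the scale-space blob volumes themselves are computed is not reproduced here, and the add-one smoothing is an assumption of this sketch.

        import numpy as np

        def empirical_p_values(signal_blob_volumes, noise_blob_volumes):
            noise = np.sort(np.asarray(noise_blob_volumes))
            n = len(noise)
            # For each signal blob, count noise blobs with volume >= the observed one.
            ranks = np.searchsorted(noise, np.asarray(signal_blob_volumes), side="left")
            return (n - ranks + 1) / (n + 1)     # add-one smoothing avoids p = 0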

  • 1646.
    Zhang, Bailing
    et al.
    Xi'an Jiaotong-Liverpool University .
    Pham, Tuan D
School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT 2600, Australia.
    Multiple features based two-stage hybrid classifier ensembles for subcellular phenotype images classification2010In: International Journal of Biometrics and Bioinformatics (IJBB), ISSN 1985-2347, Vol. 4, no 5, p. 176-193Article in journal (Refereed)
    Abstract [en]

Subcellular localization is a key functional characteristic of proteins. As an interesting "bio-image informatics" application, an automatic, reliable and efficient prediction system for protein subcellular localization can be used for establishing knowledge of the spatial distribution of proteins within living cells, and permits screening systems for drug discovery or for early diagnosis of disease. In this paper, we propose a two-stage multiple classifier system that improves classification reliability by introducing a rejection option. The system is built as a cascade of two classifier ensembles. The first ensemble consists of a set of binary SVMs that learns a general classification rule, and the second ensemble, which includes three distinct classifiers, focuses on the exceptions rejected by the rule. A new way to induce diversity in the classifier ensembles is proposed by designing classifiers that are based on descriptions of different feature patterns. In addition to the Subcellular Location Features (SLF) generally adopted in earlier research, three well-known texture feature descriptions have been applied to the cell phenotype images: local binary patterns (LBP), Gabor filtering and the Gray Level Co-occurrence Matrix (GLCM). The different texture feature sets provide sufficient diversity among the base classifiers, which is known to be a necessary condition for improvement in ensemble performance. Using the public benchmark 2D HeLa cell images, a high classification accuracy of 96% is obtained with a rejection rate of 21% from the proposed system, by taking advantage of the complementary strengths of feature construction and majority-voting-based fusion of the classifiers' decisions.
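
    A minimal sketch of a two-stage cascade with a rejection option in the spirit of the abstract: stage one is an SVM that rejects low-confidence samples, and a small soft-voting ensemble handles only the rejected ones. The confidence threshold and the particular second-stage members are illustrative assumptions, and the paper's feature construction (SLF, LBP, Gabor, GLCM) is not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def fit_two_stage(X, y):
            stage1 = SVC(kernel="rbf", probability=True).fit(X, y)
            stage2 = VotingClassifier([
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(kernel="linear", probability=True)),
            ], voting="soft").fit(X, y)
            return stage1, stage2

        def predict_two_stage(stage1, stage2, X, reject_below=0.8):
            proba = stage1.predict_proba(X)
            pred = stage1.classes_[np.argmax(proba, axis=1)]
            rejected = proba.max(axis=1) < reject_below      # low-confidence samples
            if rejected.any():
                pred[rejected] = stage2.predict(X[rejected]) # second stage handles rejects
            return pred, rejected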

  • 1647.
    Zhang, Chao
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia; CSIRO Computational Informatics, North Ryde, Australia.
    Sun, Changming
    CSIRO Computational Informatics, North Ryde, Australia.
    Su, Ran
    Bioinformatics Institute, Matrix, Singapore.
    Pham, Tuan D
    Aizu Research Cluster for Medical Engineering and Informatics, Research Center for Advanced Information Science and Technology, The University of Aizu, Fukushima, Japan.
    Clustered nuclei splitting via curvature information and gray-scale distance transform2015In: Journal of Microscopy, ISSN 0022-2720, E-ISSN 1365-2818, Vol. 259, no 1, p. 36-52Article in journal (Refereed)
    Abstract [en]

Clusters or clumps of cells or nuclei are frequently observed in two-dimensional images of thick tissue sections. Correct and accurate segmentation of overlapping cells and nuclei is important for many biological and biomedical applications. Many existing algorithms split clumps through binarization of the input images; the intensity information of the original image is therefore lost during this process. In this paper, we present an algorithm based on curvature information, the gray-scale distance transform and shortest-path splitting lines, which makes full use of the concavity and image intensity information to find markers, each of which represents an individual object, and to detect accurate splitting lines between objects using shortest paths and junction adjustment. The proposed algorithm is tested on both synthetic and real nuclei images. Experimental results show that the performance of the proposed method is better than that of the marker-controlled watershed method and the ellipse-fitting method.
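
    For comparison, the marker-controlled watershed baseline mentioned at the end of the abstract can be sketched in a few lines: markers are taken from maxima of the distance transform of the binary clump and the watershed of its negation splits touching nuclei. The minimum-distance parameter is an illustrative choice; the paper's curvature- and intensity-aware splitting is not reproduced here.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_clump(binary_mask, min_distance=10):
            dist = ndi.distance_transform_edt(binary_mask)
            # Local maxima of the distance transform act as one marker per nucleus.
            peaks = peak_local_max(dist, min_distance=min_distance, labels=binary_mask)
            markers = np.zeros_like(dist, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-dist, markers, mask=binary_mask)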

  • 1648.
    Zhang, Chao
    et al.
    School of Engineering and Information Technology, The University of New South Wales, Canberra ACT, Australia.
    Sun, Changming
    CSIRO Mathematics, Informatics and Statistics, North Ryde, NSW, Australia.
    Su, Ran
    School of Engineering and Information Technology, The University of New South Wales, Canberra ACT, Australia.
    Pham, Tuan D
    Aizu Research Cluster for Medical Engineering and Informatics, Research Center for Advanced Information Science and Technology, The University of Aizu Aizu-Wakamatsu, Fukushima, Japan.
    Segmentation of clustered nuclei based on curvature weighting2012In: IVCNZ '12, Proceedings of the 27th Conference on Image and Vision Computing New Zealand, ACM Digital Library, 2012, p. 49-54Conference paper (Other academic)
    Abstract [en]

Clusters of nuclei are frequently observed in thick tissue section images. It is very important to segment overlapping nuclei in many biomedical applications. Many existing methods tend to produce under-segmented results when there is a high overlap rate. In this paper, we present a curvature-weighting-based algorithm which weights each pixel using the curvature information of its nearby boundaries to extract markers, each of which represents an object, from the input images. We then use marker-controlled watershed to obtain the final segmentation. Test results using both synthetic and real cell images are presented in the paper.

  • 1649.
    Zhang, Chao
    et al.
    The University of New South Wales, Canberra, ACT, Australia .
    Sun, Changming
    CSIRO Mathematics, Informatics and Statistics, NSW, Australia .
    Su, Ran
    The University of New South Wales, Canberra, ACT, Australia .
    Pham, Tuan D
    The University of Aizu, Fukushima, Japan .
    Segmentation of clustered nuclei based on curvature-weighting2012In: Proceedings of the 27th Conference on Image and Vision Computing New Zealand, 2012, p. 49-54Conference paper (Refereed)
    Abstract [en]

Clusters of nuclei are frequently observed in thick tissue section images. It is very important to segment overlapping nuclei in many biomedical applications. Many existing methods tend to produce under-segmented results when there is a high overlap rate. In this paper, we present a curvature-weighting-based algorithm which weights each pixel using the curvature information of its nearby boundaries to extract markers, each of which represents an object, from the input images. We then use marker-controlled watershed to obtain the final segmentation. Test results using both synthetic and real cell images are presented in the paper.

  • 1650.
    Zhang, Cheng
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Damianou, Andreas
    The University of Sheffield.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Factorized Topic Models2013Conference paper (Refereed)
    Abstract [en]

In this paper we present a modification to a latent topic model, which makes the model exploit supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior over the topic space. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for image, text, and video classification.
